
CHAPTER 1
First-Order Differential Equations

1.1 Dynamical Systems: Modeling

Constants of Proportionality

1. dA/dt = kA  (k < 0)

2. dA/dt = kA  (k < 0)

3. dP/dt = kP(20,000 − P)

4. dA/dt = kA/t

5. dG/dt = kN/A

A Walking Model

6. Because d = υt, where d = distance traveled, υ = average velocity, and t = time elapsed, we have the model for the time elapsed as simply the equation t = d/υ. Now, if we measure the distance traveled as 1 mile and the average velocity as 3 miles/hour, then our model predicts the time to be t = d/υ = 1/3 hr, or 20 minutes. If it actually takes 20 minutes to walk to the store, the model is perfectly accurate. This model is so simple we generally don’t even think of it as a model.

A Falling Model

7. (a) Galileo has given us the model for the distance s(t) a ball falls in a vacuum as a function of time t: On the surface of the earth the acceleration of the ball is a constant, so

d²s/dt² = g,

where g ≈ 32.2 ft/sec². Integrating twice and using the conditions s(0) = 0, ds/dt(0) = 0, we find

s(t) = (1/2)gt².


(b) We find the time it takes for the ball to fall 100 feet by solving for t the equation

100 = (1/2)gt² = 16.1t²,

which gives t = 2.49 seconds. (We use 3 significant digits in the answer because g is also given to 3 significant digits.)

(c) If the observed time it takes for a ball to fall 100 feet is 2.6 seconds, but the model

predicts 2.49 seconds, the first thing that might come to mind is the fact that Galileo’s

model assumes the ball is falling in a vacuum, so some of the difference might be due to

air friction.
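The numbers in parts (a) and (b) are easy to confirm with a few lines of Python (a quick check of ours, not part of the text):

```python
import math

g = 32.2  # ft/sec^2, given to 3 significant digits

# Part (b): solve 100 = (1/2) g t^2 for the fall time t
t_fall = math.sqrt(2 * 100 / g)
print(round(t_fall, 2))  # 2.49 seconds, as in part (b)
```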

The Malthus Rate Constant k

8. (a) Replacing e^{0.03} ≈ 1.03045 in Equation (3) gives

y(t) = 0.9(1.03045)^t,

which increases roughly 3% per year.

(b) [Figure: Malthus’ exponential prediction plotted against the actual world population (billions), 1800–1880.]

(c) Clearly, Malthus’ rate estimate was far too high. The world population indeed rises, as

does the exponential function, but at a far slower rate.

If y(t) = 0.9e^{rt}, you might try solving y(200) = 0.9e^{200r} = 6 for r. Then

200r = ln(6/0.9) ≈ 1.897,

so

r ≈ 1.897/200 ≈ 0.0095,

which is less than 1%.
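The estimate for r can be reproduced directly (a sketch of ours, mirroring the computation above):

```python
import math

# Solve 0.9 e^{200 r} = 6 for r: two hundred years of growth from 0.9 to 6 billion
r = math.log(6 / 0.9) / 200
print(round(200 * r, 3), round(r, 4))  # 1.897 0.0095
```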

Population Update

9. (a) If we assume the world’s population in billions is currently following the unrestricted

growth curve at a rate of 1.7% and start with the UN figure for 2000, then

y = y₀e^{kt} = 6.056e^{0.017t},


and the population in the years 2010 (t = 10), 2020 (t = 20), and 2030 (t = 30) would be, respectively, the values

6.056e^{0.017(10)} ≈ 7.176
6.056e^{0.017(20)} ≈ 8.509
6.056e^{0.017(30)} ≈ 10.083.

These values increasingly exceed the United Nations predictions, so the U.N. is assuming a growth rate less than 1.7%.

(b) 2010: 6.056e^{10r} = 6.843
    e^{10r} = 6.843/6.056 ≈ 1.13
    10r = ln(1.13) ≈ 0.1222
    r ≈ 1.2%

2020: 6.843e^{10r} = 7.578
    e^{10r} = 7.578/6.843 ≈ 1.107
    10r = ln(1.107) ≈ 0.102
    r ≈ 1.0%

2030: 7.578e^{10r} = 8.199
    e^{10r} = 8.199/7.578 ≈ 1.082
    10r = ln(1.082) ≈ 0.079
    r ≈ 0.8%
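Part (b)’s decade-by-decade rates all follow the same pattern, r = ln(y₁/y₀)/10; a short script of ours reproduces them from the population figures:

```python
import math

pops = [6.056, 6.843, 7.578, 8.199]  # billions in 2000, 2010, 2020, 2030

# effective exponential growth rate over each decade
rates = [math.log(b / a) / 10 for a, b in zip(pops, pops[1:])]
print([round(r, 4) for r in rates])  # [0.0122, 0.0102, 0.0079]
```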

The Malthus Model

10. (a) Malthus thought the human population was increasing exponentially, like e^{kt}, whereas the food supply increases arithmetically according to a linear function a + bt. This means the number of people per food supply would be in the ratio

e^{kt}/(a + bt),

which, although not a pure exponential function, is concave up. This means that the rate of increase in the number of persons per the amount of food is increasing.

(b) The model cannot last forever since its population approaches infinity; reality would produce some limitation. The exponential model does not take into consideration starvation, wars, diseases, and other influences that slow growth.


(c) A linear growth model for food supply will increase supply without bound and fails to

account for technological innovations, such as mechanization, pesticides and genetic

engineering. A nonlinear model that approaches some finite upper limit would be more

appropriate.

(d) An exponential model is sometimes reasonable with simple populations over short periods of time; e.g., when you get sick, bacteria may multiply exponentially until your body’s defenses come into action or you receive appropriate medication.

Discrete-Time Malthus

11. (a) Taking the 1798 population as y₀ = 0.9 (0.9 billion), we have the population in the years 1799, 1800, 1801, and 1802, respectively:

y₁ = (1.03)(0.9) = 0.927
y₂ = (1.03)²(0.9) ≈ 0.955
y₃ = (1.03)³(0.9) ≈ 0.983
y₄ = (1.03)⁴(0.9) ≈ 1.013.

(b) In 1990 we have t = 192, hence y₁₉₂ = (1.03)^{192}(0.9) ≈ 262 (262 billion).

(c) The discrete model will always give a value lower than the continuous model. Later,

when we study compound interest, you will learn the exact relationship between discrete

compounding (as in the discrete-time Malthus model) and continuous compounding (as

described by y′ = ky).
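That relationship can be previewed numerically; the sketch below (ours) compares the discrete model of this problem with the continuous model y′ = 0.03y over the same 192 years:

```python
import math

y0, k, years = 0.9, 0.03, 192  # 1798 population (billions), Malthus rate

discrete = y0 * (1 + k) ** years       # y_{n+1} = 1.03 y_n
continuous = y0 * math.exp(k * years)  # solution of y' = k y

print(round(discrete), round(continuous))  # 262 286 -- the discrete value stays lower
```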

Verhulst Model

12. dy/dt = y(k − cy). The constant k affects the initial growth of the population whereas the constant c controls the damping of the population for larger y. There is no reason to suspect the two values would be the same, and so a model like this would seem to be promising if we only knew their values. From the equation y′ = y(k − cy), we see that for small y the population closely obeys y′ = ky, but reaches a steady state (y′ = 0) when y = k/c.
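A forward-Euler sketch (ours; the values k = 1, c = 0.5 are illustrative, not from the text) shows the predicted approach to the steady state y = k/c:

```python
# Euler's method on the Verhulst model y' = y(k - c*y)
k, c = 1.0, 0.5
y, dt = 0.1, 0.01
for _ in range(int(20 / dt)):  # integrate out to t = 20
    y += dt * y * (k - c * y)
print(round(y, 3))  # settles at the steady state k/c = 2
```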

Suggested Journal Entry

13. Student Project


1.2 Solutions and Direction Fields

Verification

1. If y = 2 tan 2t, then y′ = 4 sec² 2t. Substituting y′ and y into y′ = y² + 4 yields a trigonometric identity

4 sec² 2t ≡ (2 tan 2t)² + 4.

2. Substituting

y = t² + 3t
y′ = 2t + 3

into y′ = (1/t)y + t yields the identity

2t + 3 ≡ (1/t)(t² + 3t) + t.

3. Substituting

y = t² ln t
y′ = 2t ln t + t

into y′ = (2/t)y + t yields the identity

2t ln t + t ≡ (2/t)(t² ln t) + t.

4. If

y = ∫₀ᵗ e^{2(t² − s²)} ds = e^{2t²} ∫₀ᵗ e^{−2s²} ds,

then, using the product rule and the fundamental theorem of calculus, we have

y′ = e^{2t²}e^{−2t²} + 4te^{2t²} ∫₀ᵗ e^{−2s²} ds = 1 + 4te^{2t²} ∫₀ᵗ e^{−2s²} ds.

Substituting y′ and y into y′ − 4ty yields

1 + 4te^{2t²} ∫₀ᵗ e^{−2s²} ds − 4te^{2t²} ∫₀ᵗ e^{−2s²} ds,

which is 1 as the differential equation requires.


IVPs

5. Here

y = (1/2)e^{−t} − e^{−3t}
y′ = −(1/2)e^{−t} + 3e^{−3t}.

Substituting into the differential equation

y′ + 3y = e^{−t}

we get

(−(1/2)e^{−t} + 3e^{−3t}) + 3((1/2)e^{−t} − e^{−3t}),

which is equal to e^{−t} as the differential equation requires. It is also a simple matter to see that y(0) = −1/2, and so the initial condition is also satisfied.
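The algebra of Problem 5 can be spot-checked numerically (a sanity check of ours, not in the text):

```python
import math

# y and y' from Problem 5
def y(t):  return 0.5 * math.exp(-t) - math.exp(-3 * t)
def yp(t): return -0.5 * math.exp(-t) + 3 * math.exp(-3 * t)

# residual of y' + 3y = e^{-t} at a few sample points
ok = all(abs(yp(t) + 3 * y(t) - math.exp(-t)) < 1e-12 for t in (0.0, 0.5, 1.0, 2.0))
print(ok, y(0.0))  # True -0.5
```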

6. Another direct substitution

Applying Initial Conditions

7. If y = ce^{t²}, then we have y′ = 2cte^{t²}, and if we substitute y and y′ into y′ = 2ty, we get the identity 2cte^{t²} ≡ 2t(ce^{t²}). If y(0) = 2, then we have ce⁰ = c = 2.

8. We have

y = e^t cos t + ce^t
y′ = e^t cos t − e^t sin t + ce^t

and substituting y and y′ into y′ − y yields

(e^t cos t − e^t sin t + ce^t) − (e^t cos t + ce^t),

which is −e^t sin t. If y(0) = −1, then −1 = e⁰ cos 0 + ce⁰ yields c = −2.


Using the Direction Field

9. y′ = 2y

[Figure: direction field for y′ = 2y.]

Solutions are y = ce^{2t}.

10. y′ = −t/y

[Figure: direction field for y′ = −t/y.]

Solutions are y = ±√(c − t²), i.e., arcs of the circles t² + y² = c.

11. y′ = t − y

[Figure: direction field for y′ = t − y.]

Solutions are y = t − 1 + ce^{−t}.

Linear Solution

12. It appears from the direction field that there is a straight-line solution passing through (0, –1) with slope 1, i.e., the line y = t − 1. Computing y′ = 1, we see it satisfies the DE y′ = t − y because 1 ≡ t − (t − 1).
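Both facts can be checked together in a couple of lines (our own check): the straight line here is the c = 0 member of the family y = t − 1 + ce^{−t} found in Problem 11.

```python
import math

# residual of the DE y' = t - y along y = t - 1 + c e^{-t}
def residual(c, t):
    y = t - 1 + c * math.exp(-t)
    yp = 1 - c * math.exp(-t)
    return yp - (t - y)

ok = all(abs(residual(c, t)) < 1e-12
         for c in (-2.0, 0.0, 3.0) for t in (-1.0, 0.0, 2.0))
print(ok)  # True
```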

8 CHAPTER 1 First-Order Differential Equations

Stability

13. y′ = 1 − y

When y = 1, the direction field shows a stable equilibrium solution.

For y > 1, slopes are negative; for y < 1, slopes are positive.

14.

y′ = y(y + 1)

When y = 0, an unstable equilibrium solution exists, and when y = −1, a stable equilibrium solution exists.

For y = 3:    y′ = 3(4) = 12
For y = 1:    y′ = 1(2) = 2
For y = −1/2: y′ = (−1/2)(1/2) = −1/4
For y = −2:   y′ = (−2)(−1) = 2
For y = −4:   y′ = (−4)(−3) = 12
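The sign test used in Problems 13–14 generalizes; a small helper (hypothetical, ours) classifies an equilibrium of y′ = f(y) by sampling slopes on either side:

```python
def f(y):
    return y * (y + 1)  # right-hand side from Problem 14

def classify(y_eq, eps=1e-3):
    below, above = f(y_eq - eps), f(y_eq + eps)
    if below > 0 > above:
        return "stable"     # solutions approach from both sides
    if below < 0 < above:
        return "unstable"   # solutions leave from both sides
    return "semistable"

print(classify(0.0), classify(-1.0))  # unstable stable
```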


15. y′ = t²(1 − y²)

Two equilibrium solutions:
y = 1 is stable
y = −1 is unstable

Between the equilibria the slopes (positive) are shallower as they are located further from the

horizontal axis.

Outside the equilibria the slopes are negative and become steeper as they are found further from

the horizontal axis.

All slopes become steeper as they are found further from the vertical axis.

Match Game

16. (C) Because the slope is always the same

17. (D) Because the slope is always the value of y

18. (F) Because F is the only direction field that has vertical slopes when t = 0 and zero slopes

when y = 0

19. (B) Because it is the only direction field that has all zero slopes when t = 0

20. (E) The slope is always positive and equal to the square of the distance from the origin.

21. (A) Because it is undefined when t = 0 and the directional field has slopes that are

independent of y, with the same sign as that of t


Concavity

22. y′ = y² − 4
y″ = 2yy′ = 2y(y + 2)(y − 2)

When y = 0, we find inflection points for solutions.

Equilibrium solutions occur when y = 2 (unstable) or when y = −2 (stable).

Solutions are
concave up for y > 2 and y ∈ (−2, 0);
concave down for y < −2 and y ∈ (0, 2).

Horizontal axis is locus of inflection points; shaded regions are where solutions are concave down.

23. y′ = y + t²
y″ = y′ + 2t = y + t² + 2t = 0

When y = −t² − 2t, y″ = 0, so we have a locus of inflection points.

Solutions are concave up above the parabola of inflection points, concave down below.

Parabola is locus of inflection points;

shaded regions are where solutions are

concave down.


24. y′ = y² − t
y″ = 2yy′ − 1 = 2y³ − 2yt − 1 = 0

When t = (2y³ − 1)/(2y) = y² − 1/(2y), then y″ = 0 and we have a locus of inflection points.

and we have a locus of inflection points.

The locus of inflection points has two branches:

Above the upper branch, and to the right of the

lower branch, solutions are concave up.

Below the upper branch but outside the lower

branch, solutions are concave down.

Bold curves are the locus of inflection

points; shaded regions are where solutions

are concave down.

Asymptotes

25. y′ = y²

Because y′ depends only on y, isoclines will be horizontal lines, and solutions will be horizontal translates.

Slopes get steeper ever more quickly as distance from the horizontal axis increases.

If the y-axis extends high enough, you may suspect (correctly) that the unbounded solutions will each have a (different) vertical asymptote. When slopes are increasing quickly, it’s a good idea to check how fast. The direction field will give good intuition, if you look far enough.

Compare with y′ = y for a case where the solutions do not have asymptotes.


26. y′ = 1/(ty)

The DE is undefined for t = 0 or y = 0, so solutions do not cross either axis.

However, as solutions approach or depart from the horizontal axis, they asymptotically approach

a vertical slope.

Every solution has a vertical asymptote

when it is close to the horizontal axis.

27. y′ = t²

There are no asymptotes.

As t → ∞ (or t → −∞) slopes get steeper and steeper, but they do not actually approach vertical

for any finite value of t.

No asymptote

SECTION 1.2 Solutions and Direction Fields 13

28. y′ = 2t + y

Solutions to this DE have an oblique asymptote; they all curve away from it as t → ∞, moving down then up on the right, simply down on the left. The equation of this asymptote can be at least approximately read off the graphs as y = −2t − 2.

In fact, you can verify that this line satisfies the

DE, so this asymptote is also a solution.

Oblique Asymptote

29. y′ = −2ty + t

Here we have a horizontal asymptote, at y = 1/2.

Horizontal asymptote

30. y′ = ty/(t² − 1)

At t = 1 and t = −1 the DE is undefined. The

direction field shows that as y → 0 from either

above or below, solutions asymptotically

approach vertical slope. However, y = 0 is a

solution to the DE, and the other solutions do not

cross the horizontal axis for t ≠ ±1. (See Picard’s

Theorem Sec. 1.5.)

Vertical asymptotes for t → 1 or t → −1


Isoclines

31. y′ = t.

The isoclines are vertical lines t = c, as shown for c = 0, ±1, ±2 in the figure.

32. y′ = −y.

Here the slope of the solution is negative when y > 0 and positive for y < 0. The isoclines for c = −1, 0, 1 are shown in the figure.

33. y′ = y².

Here the slope of the solution is always ≥ 0. The isoclines where the slope is c > 0 are the horizontal lines y = ±√c. In other words, the isoclines where the slope is 4 are y = ±2. The isoclines for c = 0, 2, and 4 are shown in the figure.

34. y′ = −ty.

Setting −ty = c, we see that the points where the slope is c are along the curve y = −c/t, t ≠ 0, or hyperbolas in the ty-plane.

For c = 1, the isocline is the hyperbola y = −1/t.
For c = −1, the isocline is the hyperbola y = 1/t.

SECTION 1.2 Solutions and Direction Fields 15

When t = 0 the slope is zero for any y; when y = 0 the slope is zero for any t, and y = 0 is in fact

a solution. See figure for the direction field for this equation with isoclines for c = 0, ±1.
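A quick check of ours that these curves really are isoclines: along y = −c/t the slope f(t, y) = −ty should equal c.

```python
def f(t, y):
    return -t * y  # right-hand side of the DE in Problem 34

ok = all(abs(f(t, -c / t) - c) < 1e-12
         for c in (1.0, -1.0) for t in (0.5, 1.0, 2.0, -1.5))
print(ok)  # True
```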

35. y′ = 2t − y. The isocline where y′ = c is the straight line y = 2t − c. The isoclines with slopes c = −4, −2, 0, 2, 4 are shown from left to right (see figure).

36. y′ = y² − t. The isocline where y′ = c is a parabola that opens to the right. Three isoclines, with slopes c = 2, 0, −2, are shown from left to right (see figure).

37. y′ = cos y

y′ = c = 0 when y = odd multiples of π/2
       = 1 when y = 0, 2π, 4π, …
       = −1 when y = π, 3π, …

Additional observations:

|y′| ≤ 1 for all y. When y = π/4, y′ = cos(π/4) ≈ 0.7.

This information produces a slope field in which the constant solutions, at y = (2n + 1)π/2, act as horizontal asymptotes.


38. y′ = sin t

y′ = c = 0 when t = 0, π, 2π, …
       = 1 when t = π/2 ± 2nπ
       = −1 when t = −π/2 ± 2nπ

The direction field indicates oscillatory periodic solutions, which you can verify as y = −cos t.

39. y′ = cos(y − t)

y′ = c = 0 when y − t = ±π/2, ±3π/2, …, or y = t ± (2n + 1)π/2
       = 1 when y − t = 0, ±2π, …, or y = t ± 2nπ
       = −1 when y − t = ±π, ±3π, …, or y = t ± (2n + 1)π

All these isoclines (dashed) have slope 1, with different y-intercepts.

The isoclines for solution slope 1 are also solutions to the DE and act as oblique asymptotes for the other solutions between them (which, by uniqueness, do not cross; see Section 1.5).


Periodicity

40. y′ = cos 10t

y′ = c = 0 when 10t = ±(2n + 1)π/2
       = 1 when 10t = ±2nπ
       = −1 when 10t = ±(2n + 1)π

y′ is always between +1 and −1.

All solutions are periodic oscillations, with period 2π/10.

[Figures: zooming in; zooming out.]
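The claimed period in Problem 40 is easy to verify from the antiderivative; solutions are y = (1/10) sin 10t + C (a check of ours):

```python
import math

def y(t, C=0.0):
    return 0.1 * math.sin(10 * t) + C  # solution of y' = cos(10 t)

period = 2 * math.pi / 10
ok = all(abs(y(t + period) - y(t)) < 1e-12 for t in (0.0, 0.3, 1.7, 4.2))
print(ok)  # True
```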

41. y′ = 2 − sin t

If t = nπ, then y′ = 2.
If t = π/2 ± 2nπ, then y′ = 1; if t = 3π/2 ± 2nπ, then y′ = 3.

All slopes are between 1 and 3.

Although there is a periodic pattern to the

direction field, the solutions are quite irregular

and not periodic.

If you zoom out far enough, the oscillations of

the solutions look somewhat more regular, but

are always moving upward. See Figures.


[Figures: zooming out; zooming further out.]

42. y′ = −cos y

If y = ±(2n + 1)π/2, then y′ = 0, and these horizontal lines are equilibrium solutions.
For y = ±2nπ, y′ = −1.
For y = ±(2n + 1)π, y′ = 1.

Slope y′ is always between −1 and 1, and solutions between the constant solutions cannot cross them, by uniqueness.

To further check what happens in these cases we have added an isocline at y = π/4, where y′ = −cos(π/4) ≈ −0.7.

Solutions are not periodic, but there is a periodicity to the direction field, in the vertical direction

with period 2π. Furthermore, we observe that between every adjacent pair of constant solutions,

the solutions are horizontal translates.


43. y′ = cos 10t + 0.2

For 10t = ±(2n + 1)π/2: y′ = 0.2, at t ≈ 0.157 ± nπ/10
For 10t = ±2nπ:         y′ = 1.2, at t ≈ ±2nπ/10
For 10t = ±(2n + 1)π:   y′ = −0.8, at t ≈ 0.314 ± 2nπ/10

To get y′ = 0 we must have cos 10t = −0.2, or 10t = ±(1.77 + 2nπ).

The solutions oscillate in a periodic fashion, but

at the same time they move ever upward. Hence

they are not strictly periodic. Compare with

Problem 40.

Direction field and solutions

over a larger scale.

Direction field (augmented and improved in lower half), with rough sketch solution.


44. cos( ) y y t ′ = −

See Problem #39 for the direction field and sample solutions.

The solutions are not periodic, though there is a periodic (and diagonal) pattern to the overall

direction field.

45.

y′ = y(cos t − y)

Slopes are 0 whenever y = cos t or y = 0

Slopes are negative outside of both these isoclines;

Slopes are positive in the regions trapped by the two isoclines.

If you try to sketch a solution through this configuration, you will see it goes downward a lot

more of the time than upward.

For y > 0 the solutions wiggle downward but never cross the horizontal axis—they get sent

upward a bit first.

For y < 0 solutions eventually get out of the upward-flinging regions and go forever downward.

The solutions are not periodic, despite the periodic function in the DE.


46. y′ = sin 2t + cos t

If t = ±2nπ, then y′ = 1.
If t = ±(2n + 1)π/2, then y′ = 0.
If t = ±(2n + 1)π, then y′ = −1.

Isoclines are vertical lines, and solutions are vertical translates.

From this information it seems likely that solutions will oscillate with period 2π, rather like Problem 40. But beware: this is not the whole story. For y′ = sin 2t + cos t, slopes will not remain between ±1. For example:

For t = π/4, 9π/4, …,   y′ ≈ 1 + 0.7 = 1.7.
For t = 3π/4, 11π/4, …, y′ ≈ −1 − 0.7 = −1.7.
For t = 5π/4, 13π/4, …, y′ ≈ 1 − 0.7 = 0.3.
For t = 7π/4, 15π/4, …, y′ ≈ −1 + 0.7 = −0.3.

The figures on the next page are crucial to seeing what is going on. Adding these isoclines and slopes shows there are more wiggles in the solutions.

There are additional isoclines of zero slope where

sin 2t = 2 sin t cos t = −cos t,

i.e., where sin t = −1/2 and

t = −π/6, −5π/6, 7π/6, 11π/6, ….

There is a symmetry to the slope marks about every vertical line where t = ±(2n + 1)π/2; these are some of the isoclines of zero slope.

Solutions are periodic, with period 2π.

See figures on next page.


(46. continued)

Direction field, sketched with ever increasing detail as you move down the graph.

Direction field and solutions by computer.


Symmetry

47. y′ = y²

Note that y′ depends only on y, so isoclines are

horizontal lines.

Positive and negative values of y give the same slopes.

Hence the slope values are symmetric about the

horizontal axis, but the resulting picture is not.

The figures are given with Problem 25 solutions.

The only symmetry visible in the direction field is point symmetry, about the origin (or any point

on the t-axis).

48. y′ = t²

Note that y′ depends only on t, so isoclines are vertical lines.

Positive and negative values of t give the same slope, so the slope values are repeated symmetrically

across the vertical axis, but the resulting direction field does not have visual symmetry.

The only symmetry visible in the direction field is point symmetry through the origin (or any

point on the y-axis).


49. y′ = −t

Note that y′ depends only on t, so isoclines are vertical lines.

For t > 0, slopes are negative;

For t < 0, slopes are positive.

The result is pictorial symmetry of the vector field about the vertical axis.

50. y′ = −y

Note that y′ depends only on y, so isoclines are horizontal lines.

For y > 0, slopes are negative.

For y < 0, slopes are positive.

As a result, the direction field is reflected across the horizontal axis.


51. y′ = 1/(t + 1)²

Note that y′ depends only on t, so isoclines will be vertical lines.

Slopes are always positive, so they will be repeated, not reflected, across t = −1, where the DE is

not defined.

If t = 0 or −2, slope is 1.

If t = 1 or −3, slope is 1/4.
If t = 2 or −4, slope is 1/9.

The direction field has point symmetry through the point (−1, 0), or any point on the line t = −1.

52. y′ = y²/t

Positive and negative values for y give the same slopes, y²/t, so you can plot them for a single positive y-value and then repeat them for the negative of that y-value.

Note: Across the horizontal axis, this fact does not give symmetry to the direction field or solutions. However, because the sign of t gives the sign of the slope y²/t, the result is a pictorial symmetry about the vertical axis. See figures on the next page.

It is sufficient therefore to calculate slopes for the first quadrant only; then reflect them about the y-axis and repeat them about the t-axis.

If y = 0, y′ = 0.
If y = ±1, y′ = 1/t.
If y = ±2, y′ = 4/t.


Second-Order Equations

53. (a) Direct substitution of y, ′ y , and ′′ y into the differential equation reduces it to an identity.

(b) Direct computation

(c) Direct computation

(d) Substituting

y(t) = Ae^{2t} + Be^{−t}
y′(t) = 2Ae^{2t} − Be^{−t}

into the initial conditions gives

y(0) = A + B = 2
y′(0) = 2A − B = −5.

Solving these equations gives A = −1, B = 3, so y = −e^{2t} + 3e^{−t}.
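The elimination in part (d) can be mirrored in a couple of lines (trivial, but a useful pattern; our own sketch):

```python
# A + B = 2 and 2A - B = -5; adding the equations eliminates B: 3A = -3
A = (2 + (-5)) / 3
B = 2 - A
print(A, B)  # -1.0 3.0
```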

Long-Term Behavior

54. y′ = t + y

(a) There are no constant solutions; zero slope

requires y = −t, which is not constant.

(b) There are no points where the DE, or its

solutions, are undefined.

(c) We see one straight line solution that appears

to have slope m = −1 and y-intercept b = −1.

Indeed, y = −t − 1 satisfies the DE.

(d) All solutions above y = −t − 1 are concave up; those below are concave down. This observation is confirmed by the sign of y″ = y′ + 1 = t + y + 1.

In shaded region, solutions are concave

down.


(e) As t → ∞, solutions above y = −t − 1 approach

∞; those below approach −∞.

(f) As t → −∞, going backward in time, all

solutions are seen to emanate from ∞.

(g) The only asymptote, which is oblique, appears

if we go backward in time—then all solutions

are ever closer to y = −t − 1.

There are no periodic solutions.

55. y′ = (y − t)/(y + t)

(a) There are no constant solutions, but

solutions will have zero slope along y = t.

(b) The DE is undefined along y = −t.

(c) There are no straight line solutions.

(d) y″ = [(y′ − 1)(y + t) − (y − t)(y′ + 1)]/(y + t)²

Simplify using

y′ − 1 = (y − t)/(y + t) − 1 = −2t/(y + t)
y′ + 1 = (y − t)/(y + t) + 1 = 2y/(y + t),

so that

y″ = −2(t² + y²)/(y + t)³.

The numerator t² + y² is never zero. Hence

y″ < 0 for y + t > 0, so solutions are concave down for y > −t;
y″ > 0 for y + t < 0, so solutions are concave up for y < −t.

(e) As t → ∞, all solutions approach y = −t.

(f) As t → −∞, we see that all solutions emanate from y = −t.

(g) All solutions become more vertical (at both ends) as they approach y = −t.

There are no periodic solutions.

In shaded region, solutions are concave

down.


56. y′ = 1/(ty)

(a) There are no constant solutions, or even zero slopes, because 1/(ty) is never zero.

(b) The DE is undefined for t = 0 or for y = 0,

so solutions will not cross either axis.

(c) There are no straight line solutions.

(d) Solutions will be concave down above the

t-axis, concave up below the t-axis.

From

1

y

ty

′ = , we get

2 2

1 1

. y y

ty t y

−

′′ ′ = −

This simplifies to

( )

2

2 3

1

1 , y y

t y

′′ = − + which is never zero,

so there are no inflection points.

In shaded region, solutions are concave

down.

(e) As t → ∞, solutions in upper quadrant →∞

solutions in the lower quadrant →−∞

(f) As t → −∞, we see that solutions in upper quadrant emanate from +∞, those in lower

quadrant emanate from −∞.

(g) In the left and right half plane, solutions asymptotically approach vertical slopes as y → 0.

There are no periodic solutions.

57. y′ = 1/(t − y)

(a) There are no constant solutions, nor even any

point with zero slope.

(b) The DE is undefined along y = t.

(c) There appears to be one straight line solution

with slope 1 and y-intercept −1; indeed y = t − 1

satisfies the DE.

y′ = 1 when y = t − 1. Straight line solution

In shaded region, solutions are concave down.


(d) y″ = −(1 − y′)/(t − y)² = (y + 1 − t)/(t − y)³

y″ > 0 when t − 1 < y < t (solutions concave up);
y″ < 0 when y < t − 1, or when y > t (solutions concave down).

(e) As t → ∞, solutions below y = t − 1 approach ∞;

solutions above y = t − 1 approach y = t ever more vertically.

(f) As t → −∞, solutions above y = t emanate from ∞;

solutions below y = t emanate from −∞.

(g) In backwards time the line y = t − 1 is an oblique asymptote.

There are no periodic solutions.

58. y′ = 1/(t² − y)

(a) There are no constant solutions.

(b) The DE is undefined along the parabola

y = t

2

, so solutions will not cross this locus.

(c) We see no straight line solutions.

(d) We see inflection points and changes in concavity, so we calculate

y″ = −(2t − y′)/(t² − y)² = 0 when y′ = 2t.

From the DE,

y′ = 1/(t² − y) = 2t when y = t² − 1/(2t),

drawn as a thicker dashed curve with two branches.

In shaded region, solutions are concave down. The DE is undefined on the boundary of the parabola. The dark curves are not solutions, but a locus of inflection points.

Inside the parabola y > t², so y′ < 0 and solutions are decreasing; they are concave down for solutions below the left branch of y″ = 0.

Outside the parabola y < t², so y′ > 0 and solutions are increasing; they are concave down below the right branch of y″ = 0.

(e) As t → ∞, slopes → 0 and solutions → horizontal asymptotes.

(f) As t → −∞, solutions are seen to emanate from horizontal asymptotes.

(g) As solutions approach y = t², their slopes approach vertical.

There are no periodic solutions.


59. y′ = y²/t − 1

(a) There are no constant solutions.

(b) The DE is not defined for t = 0; solutions

do not cross the y-axis.

(c) The only straight path in the direction

field is along the y-axis, where t = 0. But

the DE is not defined there, so there is no

straight line solution.

(d) Concavity changes when

y″ = (2yy′t − y²)/t² = (y/t²)(2y² − y − 2t) = 0,

that is, when y = 0 or along the parabola

t + 1/16 = (y − 1/4)²

(obtained by solving the second factor of y″ for t and completing the square).

In shaded region, solutions are concave

down. The horizontal axis is not a solution,

just a locus of inflection points.

(e) As t → ∞, most solutions approach −∞. However in the first quadrant solutions above the

parabola where 0 y′′ = fly up toward +∞. The parabola is composed of two solutions that

act as a separator for behaviors of all the other solutions.

(f) In the left half plane solutions emanate from ∞.

In the right half plane, above the lower half of the parabola where 0 y′′ = , solutions seem

to emanate from the upper y-intercept of the parabola; below the parabola they emanate

from −∞.

(g) The negative y-axis seems to be an asymptote for solutions in the left-half-plane, and in

backward time for solutions in the lower right half plane.

There are no periodic solutions.


Logistic Population Model

60. We find the constant solutions by setting y′ = 0 and solving for y. This gives ky(1 − y) = 0, hence the constant solutions are y(t) ≡ 0, 1. Notice from the direction field or from the sign of the derivative that solutions starting at 0 or 1 remain at those values, solutions starting between 0 and 1 increase asymptotically to 1, and solutions starting larger than 1 decrease to 1 asymptotically. The following figure shows the direction field of y′ = y(1 − y) and some sample solutions.

[Figure: logistic model; y = 1 is a stable equilibrium and y = 0 an unstable equilibrium.]

Autonomy

61. (a) Autonomous:

#9  y′ = 2y
#13 y′ = 1 − y
#14 y′ = y(y + 1)
#16 y′ = 1
#17 y′ = y
#32 y′ = −y
#33 y′ = y²
#37 y′ = cos y

The others are nonautonomous.

(b) Isoclines for autonomous equations consist of horizontal lines.

Comparison

62.

(i) y′ = y²   [Figure: semistable equilibrium at y = 0.]

(ii) y′ = (y + 1)²   [Figure: semistable equilibrium at y = −1.]

(iii) y′ = y² + 1   [Figure: no equilibrium.]

Equations (i) and (ii) each have a constant solution that is unstable for higher y values and stable for lower y values, but these equilibria occur at different levels. Equation (iii) has no equilibrium at all.

All three DEs are autonomous, so within each graph solutions from left to right are always horizontal translates.

(a) For y > 0 we have

y² < y² + 1 < (y + 1)².

For the three equations y′ = y², y′ = y² + 1, and y′ = (y + 1)², all with y(0) = 1, the solution of y′ = (y + 1)² will be the largest and the solution of y′ = y² will be the smallest.

(b) Because y(t) = 1/(1 − t) is a solution of the initial-value problem y′ = y², y(0) = 1, which blows up at t = 1, we know that the solution of y′ = y² + 1, y(0) = 1 will blow up (approach infinity) somewhere between 0 and 1. When we solve this problem later using the method of separation of variables, we will find out where the solution blows up.
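The blow-up claim can be spot-checked from the closed form (our own check): y = 1/(1 − t) satisfies both the DE and the initial condition, and grows without bound as t approaches 1 from below.

```python
def y(t):
    return 1 / (1 - t)  # solution of y' = y^2 with y(0) = 1

# y'(t) = 1/(1 - t)^2 should equal y(t)^2 wherever t != 1
ok = all(abs(1 / (1 - t) ** 2 - y(t) ** 2) < 1e-9 for t in (0.0, 0.5, 0.9))
print(ok, y(0.0))  # True 1.0
print(y(0.999))    # about 1000: the solution blows up as t -> 1
```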


Coloring Basins

63. y′ = y(1 − y). The constant solutions are found by setting y′ = 0, giving y(t) ≡ 0, 1. Either by looking at the direction field or by analyzing the sign of the derivative, we conclude that the constant solution y(t) ≡ 1 has a basin of attraction of (0, ∞), and y(t) ≡ 0 has a basin of attraction of the single value {0}. Solutions with negative initial conditions approach −∞.

[Figure: direction field of y′ = y(1 − y) with sample solutions]
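The sign analysis above is easy to check by sampling; a small sketch (ours, not part of the text):

```python
# Sign of y' = y(1 - y) on each interval between the equilibria y = 0 and y = 1
# determines the basins of attraction described above.
f = lambda y: y * (1 - y)
for label, y in [("y < 0", -1.0), ("0 < y < 1", 0.5), ("y > 1", 2.0)]:
    trend = "increasing" if f(y) > 0 else "decreasing"
    print(label, "-> y' =", f(y), "so solutions are", trend)
```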

64. y′ = y² − 4. The constant solutions are the (real) roots of y² − 4 = 0, or y = ±2. For y > 2 we have y′ > 0; we therefore conclude that solutions with initial conditions greater than 2 increase. For −2 < y < 2 we have y′ < 0, hence solutions with initial conditions in this range decrease; and for y < −2 we have y′ > 0, hence solutions with initial conditions in this interval increase.

[Figure: direction field of y′ = y² − 4]

We can therefore conclude that the constant solution y ≡ 2 has a basin of attraction of the single value {2}, whereas the constant solution y ≡ −2 has the basin of attraction (−∞, 2).

65. y′ = y(y − 1)(y − 2). Analyzing the sign of the derivative in each of the intervals (−∞, 0), (0, 1), (1, 2), (2, ∞), we conclude that the constant solutions y(t) ≡ 0, 1, 2 have the following basins of attraction: y(t) ≡ 0 has the single point {0} as its basin of attraction; y(t) ≡ 1 has the basin of attraction (0, 2); and y(t) ≡ 2 has the single value {2} as its basin of attraction.

[Figure: direction field of y′ = y(y − 1)(y − 2)]


66. y′ = (y − 1)². Because the derivative y′ is always zero or positive, we conclude that the constant solution y(t) ≡ 1 has as its basin of attraction the interval (−∞, 1].

[Figure: direction field of y′ = (y − 1)²]

Computer or Calculator

The student can refer to Problems 69–73 as examples when working Problems 67, 68, and 74.

67. y′ = y². Student Project.    68. y′ = y + t². Student Project.

69. y′ = ty. The direction field shows one constant solution, y(t) ≡ 0, which is unstable (see figure). For negative t solutions approach zero slope, and for positive t solutions move away from zero slope.

[Figure: direction field of y′ = ty; unstable equilibrium at y = 0]

70. y′ = y + t². We see that eventually all solutions approach plus infinity. There are no constant or periodic solutions of this equation. You might also note that the isocline y + t² = 0 is a parabola lying on its side for t < 0; in backwards time most solutions approach the top part of this parabola.

[Figure: direction field of y′ = y + t²]

71. y′ = cos 2t. The direction field indicates that the equation has periodic solutions with period roughly 3. This estimate is fairly accurate because y(t) = (1/2) sin 2t + c has period π.

[Figure: direction field of y′ = cos 2t]


72. y′ = sin(ty). We have a constant solution y(t) ≡ 0, and there is a symmetry between solutions above and below the t-axis. Note: This equation does not have a closed form solution.

[Figure: direction field of y′ = sin(ty)]

73. y′ = −sin y. We can see from the direction field that y = 0, ±π, ±2π, … are constant solutions, with 0, ±2π, ±4π, … being stable and ±π, ±3π, … unstable. The solutions between the equilibria have positive or negative slopes depending on the y interval. From left to right these solutions are horizontal translates.

[Figure: direction field of y′ = −sin y, showing the alternating stable and unstable equilibria]

74. y′ = y + t². Student Project.

Suggested Journal Entry I

75. Student Project

Suggested Journal Entry II

76. Student Project


1.3

Separation of Variables: Quantitative Analysis

Separable or Not

1. y′ = 1 + y. Separable: dy/(1 + y) = dt; constant solution y ≡ −1.

2. y′ = y − y³. Separable: dy/(y − y³) = dt; constant solutions y(t) ≡ 0, ±1.

3. y′ = sin(t + y). Not separable; no constant solutions.

4. y′ = ln(ty). Not separable; no constant solutions.

5. y′ = e^t e^y. Separable: e^(−y) dy = e^t dt; no constant solutions.

6. y′ = (t + y)/(1 + ty). Not separable; no constant solutions.

7. y′ = e^(t+y)/(1 + e^y). Separable: (e^(−y) + 1) dy = e^t dt; no constant solutions.

8. y′ = t² ln y² + t² = t²(2 ln y + 1). Separable: dy/(2 ln y + 1) = t² dt; constant solution y(t) ≡ e^(−1/2).

9. y′ = y/t + t/y. Not separable; no constant solutions.

10. y′ = (1 + y²)/t. Separable: dy/(1 + y²) = dt/t; no constant solutions.

Solving by Separation

11. y′ = t²/y. Separating variables, we get y dy = t² dt. Integrating each side gives the implicit solution

(1/2)y² = (1/3)t³ + c.

Solving for y yields branches, so we leave the solution in implicit form.

12. ty′ = √(1 − y²). The equilibrium solutions are y = ±1. Separating variables, we get

dy/√(1 − y²) = dt/t.

Integrating gives the implicit solution sin⁻¹ y = ln|t| + c. Solving for y gives the explicit solution

y = sin(ln|t| + c).


13. y′ = (t² + 7)/(y⁴ − 4y³). Separating variables, we get the equation

(y⁴ − 4y³) dy = (t² + 7) dt.

Integrating gives the implicit solution

(1/5)y⁵ − y⁴ = (1/3)t³ + 7t + c.

We cannot find an explicit solution for y.

14. ty′ = 4y. The equilibrium solution is y = 0. Separating variables, we get

dy/y = 4 dt/t.

Integrating gives the implicit solution ln|y| = 4 ln|t| + c. Solving for y gives the explicit solution y = Ct⁴, where C is an arbitrary constant.
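As a quick numerical spot-check (ours, not part of the text), y = Ct⁴ does satisfy ty′ = 4y, using a central difference in place of the exact derivative:

```python
# Verify numerically that y = C t^4 satisfies t y' = 4y (any constant C).
def residual(C, t, h=1e-6):
    y = lambda s: C * s**4
    dydt = (y(t + h) - y(t - h)) / (2 * h)   # central-difference derivative
    return abs(t * dydt - 4 * y(t))          # should be essentially zero

print(residual(2.5, 1.7) < 1e-6)   # True
```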

15. dy/dt = y cos t. y = 0 is an equilibrium solution. For y ≠ 0,

∫dy/y = ∫cos t dt, so ln|y| = sin t + c₁.

Then |y| = e^(c₁) e^(sin t), so that y = Ce^(sin t), where C = ±e^(c₁).

16. 4t dy = (y² + ty²) dt, y(1) = 1. Separating variables gives

∫dy/y² = ∫(1 + t)/(4t) dt, so 4∫dy/y² = ∫(1/t + 1) dt, and −4y⁻¹ = ln t + t + C.

For y(1) = 1, we obtain C = −5, so that

y = −4/(ln t + t − 5).


17. y′ = (1 − 2t)/y, y(1) = −2. Separating variables gives

y dy = (1 − 2t) dt.

Integrating gives the implicit solution (1/2)y² = t − t² + c. Substituting in the initial condition y(1) = −2 gives c = 2. Hence, the implicit solution is

y² = 2t − 2t² + 4.

Solving for y we get

y(t) = −√(−2t² + 2t + 4).

Note that we take the negative square root so the initial condition is satisfied.

18. y′ = y² − 4, y(0) = 0. Separating variables gives

dy/(y² − 4) = dt.

Rewriting this expression as a partial fraction decomposition (see Appendix PF), we get

[1/(4(y − 2)) − 1/(4(y + 2))] dy = dt.

Integrating, we get

ln|y − 2| − ln|y + 2| = 4t + c, or |(y − 2)/(y + 2)| = e^c e^(4t).

Hence, the implicit solution is

(y − 2)/(y + 2) = ±e^c e^(4t) = ke^(4t),

where k is an arbitrary constant. Solving for y, we get the general solution

y(t) = 2(1 + ke^(4t))/(1 − ke^(4t)).

Substituting in the initial condition y(0) = 0 gives k = −1.
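The general solution can be checked numerically; the sketch below (ours, not part of the text) compares a central-difference derivative of y(t) = 2(1 + ke^(4t))/(1 − ke^(4t)) against y² − 4, with k = −1 so that y(0) = 0.

```python
import math

# Numerical check that y = 2(1 + k e^{4t})/(1 - k e^{4t}) satisfies y' = y^2 - 4.
def y(t, k=-1.0):                  # k = -1 matches the initial condition y(0) = 0
    u = k * math.exp(4 * t)
    return 2 * (1 + u) / (1 - u)

h = 1e-6
for t in (0.0, 0.3, 1.0):
    lhs = (y(t + h) - y(t - h)) / (2 * h)   # approximate y'(t)
    rhs = y(t) ** 2 - 4
    assert abs(lhs - rhs) < 1e-4
print(y(0.0))   # 0.0
```

With k = −1 the formula reduces to y(t) = −2 tanh(2t), which makes the behavior (monotone decrease from 0 toward −2) easy to see.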


19. y′ = 2t/(1 + 2y), y(2) = 0. Separating variables gives

(1 + 2y) dy = 2t dt.

Integrating gives the implicit solution

y + y² = t² + c.

Substituting in the initial condition y(2) = 0 gives c = −4. Solving the preceding equation as a quadratic in y, we get

y = (−1 + √(1 + 4(t² − 4)))/2.

20. y′ = −(1 + y²)/(1 + t²), y(0) = −1. Separating variables, we get the equation

dy/(1 + y²) = −dt/(1 + t²).

Integrating gives

tan⁻¹ y = −tan⁻¹ t + c.

Substituting in the initial condition y(0) = −1 gives c = tan⁻¹(−1) = −π/4. Solving for y gives

y = tan(−tan⁻¹ t − π/4).

Integration by Parts

21. y′ = (cos² y) ln t. The equilibrium solutions are y = (2n + 1)π/2. Separating variables, we get

dy/cos² y = ln t dt.

Integrating (by parts on the right), we find

∫sec² y dy = ∫ln t dt + c
tan y = t ln t − t + c
y = tan⁻¹(t ln t − t + c).


22. y′ = (2t − 5) cos 2t. Separating variables, we get

dy = (2t − 5) cos 2t dt.

Integrating (by parts), we find

y = ∫(2t − 5) cos 2t dt + c
  = 2∫t cos 2t dt − 5∫cos 2t dt + c
  = 2[(1/2)t sin 2t + (1/4) cos 2t] − (5/2) sin 2t + c
  = t sin 2t + (1/2) cos 2t − (5/2) sin 2t + c.

23. y′ = t² e^(y + 2t). Separating variables, we get

e^(−y) dy = t² e^(2t) dt.

Integrating (by parts twice on the right), we find

∫e^(−y) dy = ∫t² e^(2t) dt + c
−e^(−y) = (1/2)t² e^(2t) − (1/2)t e^(2t) + (1/4)e^(2t) + c.

Solving for y, we get

y = −ln[(1/2)t e^(2t) − (1/2)t² e^(2t) − (1/4)e^(2t) − c].

24. y′ = tye^(−t). The equilibrium solution is y = 0. Separating variables, we get

dy/y = te^(−t) dt.

Integrating (by parts), we find

∫dy/y = ∫te^(−t) dt + c
ln|y| = −te^(−t) − e^(−t) + c.

Hence y = Qe^(−(t + 1)e^(−t)), where Q = ±e^c.

Equilibria and Direction Fields

25. (C) 26. (B) 27. (E) 28. (F) 29. (A) 30. (D)


Finding the Nonequilibrium Solutions

31. y′ = 1 − y²

We note first that y = ±1 are equilibrium solutions. To find the nonconstant solutions we divide by 1 − y² and rewrite the equation in differential form as

dy/(1 − y²) = dt.

By a partial fraction decomposition (see Appendix PF), we have

dy/((1 − y)(1 + y)) = dy/(2(1 − y)) + dy/(2(1 + y)) = dt.

Integrating, we find

−(1/2)ln|1 − y| + (1/2)ln|1 + y| = t + c

where c is any constant. Simplifying, we get

−ln|1 − y| + ln|1 + y| = 2t + 2c
ln|(1 + y)/(1 − y)| = 2t + 2c
(1 + y)/(1 − y) = ke^(2t)

where k is any nonzero real constant. If we now solve for y, we find

y = (ke^(2t) − 1)/(ke^(2t) + 1).
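The formula just obtained can be verified numerically; this sketch (ours, not part of the text) uses a central difference for y′. With k = 1 the formula is simply y = tanh t.

```python
import math

# Check that y = (k e^{2t} - 1)/(k e^{2t} + 1) satisfies y' = 1 - y^2.
def y(t, k=1.0):
    u = k * math.exp(2 * t)
    return (u - 1) / (u + 1)

h = 1e-6
for t in (-1.0, 0.0, 0.5):
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dydt - (1 - y(t) ** 2)) < 1e-4
print(abs(y(0.5) - math.tanh(0.5)) < 1e-12)   # True: with k = 1, y = tanh t
```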

32. y′ = 2y − y²

We note first that y = 0, 2 are equilibrium solutions. To find the nonconstant solutions, we divide by 2y − y² and rewrite the equation in differential form as

dy/(y(2 − y)) = dt.

By a partial fraction decomposition (see Appendix PF),

dy/(y(2 − y)) = dy/(2y) + dy/(2(2 − y)) = dt.

Integrating, we find

(1/2)ln|y| − (1/2)ln|2 − y| = t + c

where c is any real constant. Simplifying, we get

ln|y| − ln|2 − y| = 2t + 2c
ln|y/(2 − y)| = 2t + 2c
|y/(2 − y)| = Ce^(2t)

where C is any positive constant, so

y/(2 − y) = ke^(2t)

where k is any nonzero real constant. If we solve for y, we get

y = 2ke^(2t)/(1 + ke^(2t)).

33. y′ = y(y − 1)(y + 1)

We note first that y = 0, ±1 are equilibrium solutions. To find the nonconstant solutions, we divide by y(y − 1)(y + 1) and rewrite the equation in differential form as

dy/(y(y − 1)(y + 1)) = dt.

By finding a partial fraction decomposition (see Appendix PF),

dy/(y(y − 1)(y + 1)) = −dy/y + dy/(2(y − 1)) + dy/(2(y + 1)) = dt.

Integrating, we find

−ln|y| + (1/2)ln|y − 1| + (1/2)ln|y + 1| = t + c
−2ln|y| + ln|y − 1| + ln|y + 1| = 2t + 2c

or

ln|(y − 1)(y + 1)/y²| = 2t + 2c, so (y² − 1)/y² = ke^(2t).

Multiplying each side of the above equation by y² gives a quadratic equation in y, which can be solved to get

y = ±1/√(1 − ke^(2t)).

Initial conditions will tell which branch of this solution would be used.


34. y′ = (y − 1)²

We note that y = 1 is a constant solution. Seeking nonconstant solutions, we divide by (y − 1)², getting

dy/(y − 1)² = dt.

This can be integrated to get

−1/(y − 1) = t + c, so y − 1 = −1/(t + c), and y = 1 − 1/(t + c).

Help from Technology

35. y′ = y, y(1) = 1, y(−1) = −1

The solution of y′ = y, y(1) = 1 is y = e^(t−1). The solution of y′ = y, y(−1) = −1 is y = −e^(t+1). These solutions are shown in the figure.

[Figure: the two solution curves of y′ = y]

36. y′ = cos t, y(1) = 1, y(−1) = −1

The solution of the initial-value problem y′ = cos t, y(1) = 1 is y(t) = sin t + 1 − sin(1). The solution of y′ = cos t, y(−1) = −1 is y = sin t − 1 + sin(1). The solutions are shown in the figure.

[Figure: the two solution curves of y′ = cos t]


37. dy/dt = t/(y²√(1 + t²)), y(1) = 1, y(−1) = −1

Separating variables and integrating, we find the implicit solution

∫y² dy = ∫ t/√(1 + t²) dt + c, or (1/3)y³ = √(1 + t²) + c.

Substituting y(1) = 1, we find c = 1/3 − √2. For y(−1) = −1 we find c = −1/3 − √2. These two curves are shown in the figure.

[Figure: direction field of y′ = t/(y²√(1 + t²)) with the two solution curves]

38. y′ = y cos t, y(1) = 1, y(−1) = −1

Separating variables, we get

dy/y = cos t dt.

Integrating, we find the implicit solution ln|y| = sin t + c. With y(1) = 1, we find c = −sin(1). With y(−1) = −1, we find c = sin(1). These two implicit solution curves are shown imposed on the direction field (see figure).

[Figure: direction field of y′ = y cos t with the two solution curves]

39. y′ = 2t(1 + y)/y, y(1) = 1, y(−1) = −1

Separating variables and assuming y ≠ −1, we find

(y/(1 + y)) dy = 2t dt, or ∫(y/(1 + y)) dy = ∫2t dt + c.

Integrating, we find the implicit solution

y − ln|1 + y| = t² + c.

For y(1) = 1, we get 1 − ln 2 = 1 + c, or c = −ln 2. For y(−1) = −1 we can see even more easily that y ≡ −1 is the solution. These two solutions are plotted on the direction field (see figure). Note that the implicit solution involves branching. The initial condition y(1) = 1 lies on the upper branch, and the solution through that point does not cross the t-axis.

[Figure: direction field of y′ = 2t(1 + y)/y with the two solutions]

40. y′ = sin(ty), y(1) = 1, y(−1) = −1

This equation is not separable and has no closed form solution. However, we can draw its direction field along with the requested solutions (see figure).

[Figure: direction field of y′ = sin(ty) with the two solutions]

Making Equations Separable

41. Given

y′ = (y + t)/t = y/t + 1,

we let v = y/t and get the separable equation v + t(dv/dt) = v + 1. Separating variables gives

dv = dt/t.

Integrating gives v = ln|t| + c, and hence

y = t ln|t| + ct.
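A quick numerical confirmation (ours, not part of the text) that y = t ln t + ct really solves y′ = y/t + 1 for a sample constant c:

```python
import math

# Check that y = t ln t + c t satisfies y' = y/t + 1 (here c = 2 is arbitrary).
def solution(t, c=2.0):
    return t * math.log(t) + c * t

h = 1e-6
for t in (0.5, 1.0, 3.0):
    y = solution(t)
    dydt = (solution(t + h) - solution(t - h)) / (2 * h)
    assert abs(dydt - (y / t + 1)) < 1e-5
print("y = t ln t + ct satisfies y' = y/t + 1 at the sampled points")
```

Analytically this is immediate: y′ = ln t + 1 + c while y/t + 1 = ln t + c + 1.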

42. Letting v = y/t, we write

y′ = (y² + t²)/(yt) = y/t + t/y = v + 1/v.

But y = tv, so y′ = v + tv′. Hence, we have

v + tv′ = v + 1/v, or tv′ = 1/v, or v dv = dt/t.

Integrating gives the implicit solution

(1/2)v² = ln|t| + c, or v = ±√(2 ln|t| + c).

But v = y/t, so

y = ±t√(2 ln|t| + c).

The initial condition y(1) = −2 requires the negative square root and gives c = 4. Hence,

y(t) = −t√(2 ln|t| + 4).

43. Given

y′ = (2y⁴ + t⁴)/(ty³) = 2(y/t) + (t/y)³ = 2v + 1/v³

with the new variable v = y/t. Using y′ = v + tv′ and separating variables gives

tv′ = v + 1/v³ = (1 + v⁴)/v³, so dt/t = v³ dv/(1 + v⁴).

Integrating gives the solution

ln|t| = (1/4) ln(1 + v⁴) + c, or ln|t| = (1/4) ln[(y/t)⁴ + 1] + c.

44. Given

y′ = (y² + ty + t²)/t² = (y/t)² + y/t + 1 = v² + v + 1

with the new variable v = y/t. Using y′ = v + tv′ and separating variables, we get

dv/(1 + v²) = dt/t.

Integrating gives the implicit solution

tan⁻¹ v = ln|t| + c.

Solving for v gives v = tan(ln|t| + c). Hence, we have the explicit solution

y = t tan(ln|t| + c).


Another Conversion to Separable Equations

45. y′ = (y + t)². Let u = y + t. Then

du/dt = dy/dt + 1 = u² + 1, and ∫du/(u² + 1) = ∫dt,

so

tan⁻¹ u = t + c
u = tan(t + c)
y + t = tan(t + c), so y = tan(t + c) − t.
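The substitution result is easy to verify directly, since y′ = sec²(t + c) − 1 = tan²(t + c) = (y + t)². A numerical sketch (ours, not part of the text):

```python
import math

# Check that y = tan(t + c) - t satisfies y' = (y + t)^2 (sample c = 0.3).
def y(t, c=0.3):
    return math.tan(t + c) - t

h = 1e-6
for t in (0.0, 0.5, 1.0):
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dydt - (y(t) + t) ** 2) < 1e-3
print("ok")
```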

46. dy/dt = e^(t + y − 1) − 1. Let u = t + y − 1. Then

du/dt = 1 + dy/dt = e^u, and ∫e^(−u) du = ∫dt,

so

−e^(−u) + c = t, or t + e^(−t − y + 1) = c.

Thus, −t − y + 1 = ln(c − t), and y = 1 − t − ln(c − t).

Autonomous Equations

47. (a) Problems 1, 2, and 18 are autonomous:

#1  y′ = 1 + y
#2  y′ = y − y³
#18 y′ = y² − 4

All the others are nonautonomous.

(b) The isoclines of an autonomous equation are horizontal lines (i.e., if you follow along a horizontal line y = k in the ty-plane, the slopes of the line elements do not change). Another way to say this is that all solutions y(t) have the same slope at any given y-value.

Orthogonal Families

48. (a) Starting with f(x, y) = c, we differentiate implicitly, getting the equation

(∂f/∂x) dx + (∂f/∂y) dy = 0.

Solving for y′ = dy/dx, we have

dy/dx = −(∂f/∂x)/(∂f/∂y) = −f_x/f_y.

These slopes are the slopes of the tangent lines.


(b) Taking the negative reciprocal of the slopes of the tangents, the orthogonal curves satisfy

dy/dx = f_y/f_x.

(c) Given f(x, y) = x² + y², we have f_y = 2y and f_x = 2x, so from part (b) the orthogonal trajectories satisfy the differential equation

dy/dx = f_y/f_x = y/x,

which is a separable equation having solution y = kx.

More Orthogonal Trajectories

49. For the family y = cx² we have f(x, y) = y/x², so f_x = −2y/x³, f_y = 1/x², and the orthogonal trajectories satisfy

dy/dx = f_y/f_x = −x/(2y), or 2y dy = −x dx.

[Figure: orthogonal trajectories]

Integrating, we have

y² = −(1/2)x² + K, or x² + 2y² = C.

Hence, this last equation gives a family of ellipses that are all orthogonal to members of the family y = cx². Graphs of the orthogonal families are shown in the figure.
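The orthogonality can be confirmed at any sample point: the parabola slope is 2cx = 2y/x and the ellipse slope is −x/(2y), so their product is exactly −1 wherever y ≠ 0. A small check (ours, not part of the text):

```python
# Numerical check that the parabolas y = c x^2 and the ellipses x^2 + 2y^2 = C
# intersect at right angles (slope product = -1 at any common point, y != 0).
def perpendicular(x, y):
    c = y / x**2                   # parabola of the family through (x, y)
    slope_parabola = 2 * c * x     # = 2y/x
    slope_ellipse = -x / (2 * y)   # from 2y dy = -x dx
    return abs(slope_parabola * slope_ellipse + 1) < 1e-12

print(all(perpendicular(x, y) for x, y in [(1, 1), (2, -3), (-0.5, 0.25)]))  # True
```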


50. For the family y = c/x² we have f(x, y) = x²y, so f_x = 2xy, f_y = x², and the orthogonal trajectories satisfy

dy/dx = f_y/f_x = x/(2y)

or, in differential form, 2y dy = x dx. Integrating, we have

y² = (1/2)x² + C₁, or 2y² − x² = K.

[Figure: orthogonal trajectories]

Hence, the preceding equations give a family of hyperbolas that are orthogonal to the original family of hyperbolas y = c/x². Graphs of the two orthogonal families of hyperbolas are shown.

51. xy = c. Here f(x, y) = xy, so f_x = y, f_y = x. The orthogonal trajectories satisfy

dy/dx = f_y/f_x = x/y

or, in differential form, y dy = x dx. Integrating, we have the solution

y² − x² = C.

Hence, the preceding family of hyperbolas is orthogonal to the hyperbolas xy = c. Graphs of the orthogonal families are shown.

[Figure: orthogonal hyperbolas]

Calculator or Computer

52. y = c. We know the orthogonal trajectories of this family of horizontal lines are the family of vertical lines x = C (see figure).

[Figure: orthogonal trajectories]


53. 4x² + y² = c. Here f(x, y) = 4x² + y² and f_x = 8x, f_y = 2y, so the orthogonal trajectories satisfy

dy/dx = f_y/f_x = 2y/(8x) = y/(4x)

or 4 dy/y = dx/x, which has as implicit solution the family y⁴ = Cx, where C is any constant different from zero. These orthogonal families are shown in the figure.

[Figure: orthogonal trajectories]

54. x² = 4cy³. Here f(x, y) = x²/(4y³) and f_x = x/(2y³), f_y = −3x²/(4y⁴). The differential equation of the orthogonal family is

dy/dx = f_y/f_x = −3x/(2y)

or 2y dy = −3x dx, which has the general solution

2y² + 3x² = C,

where C is any real constant. These orthogonal families are shown in the figure.

[Figure: orthogonal trajectories]

55. x² + y² = cy. Here f(x, y) = (x² + y²)/y, so

f_x = 2x/y,  f_y = (y² − x²)/y².

The differential equation of the orthogonal family is

dy/dx = f_y/f_x = ((y² − x²)/y²)/(2x/y) = (y² − x²)/(2xy).

We are unable to solve this equation analytically, so we use a different approach inspired by looking at the graph of the original family, which consists of circles passing through the origin with centers on the y-axis.


Completing the square of the original equation, we can write x² + y² = cy as

x² + (y − c/2)² = c²/4,

which confirms the description and locates the centers at (0, c/2).

We propose that the orthogonal family to the original family consists of another set of circles,

(x − C/2)² + y² = C²/4,

centered at (C/2, 0) and passing through the origin.

To verify this conjecture we rewrite this equation for the second family of circles as x² + y² = Cx, which gives C = g(x, y) = (x² + y²)/x, so

g_x = (x² − y²)/x²,  g_y = 2y/x.

The tangent slope of this second family is therefore

dy/dx = −g_x/g_y = (y² − x²)/(2xy),

which is exactly the differential equation derived above for the orthogonal trajectories of the original family. Hence the original family of circles (centered on the y-axis) and the second family of circles (centered on the x-axis) are indeed orthogonal. These families are shown in the figure.

[Figure: orthogonal circles]

The Sine Function

56. The general equation is

y² + (y′)² = 1, or dy/dx = ±√(1 − y²).

Separating variables and integrating, we get

±sin⁻¹ y = x + c, or y = ±sin(x + c) = sin(±(x + c)).

This is the most general solution. Note that cos x is included because cos x = sin(π/2 − x).


Disappearing Mothball

57. (a) We have dV/dt = −kA, where V is the volume, t is time, A is the surface area, and k is a positive constant. Because V = (4/3)πr³ and A = 4πr², the differential equation becomes

4πr²(dr/dt) = −4πkr², or dr/dt = −k.

Integrating, we find r(t) = −kt + c. At t = 0, r = 1/2; hence c = 1/2. At t = 6, r = 1/4; hence k = 1/24, and the solution is

r(t) = −t/24 + 1/2,

where t is measured in months and r in inches. Because we can't have a negative radius or time, 0 ≤ t ≤ 12.

(b) Solving −t/24 + 1/2 = 0 gives t = 12 months, or one year.
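The arithmetic in (a) and (b) can be packaged in a couple of lines (our sketch, not part of the text): the two measurements r(0) = 1/2 and r(6) = 1/4 fix the shrink rate k, and the vanishing time follows.

```python
# Linear radius model r(t) = 1/2 - k t determined by r(0) = 1/2, r(6) = 1/4.
def radius(t):
    k = (0.5 - 0.25) / 6           # shrink rate in inches per month (= 1/24)
    return 0.5 - k * t

assert abs(radius(6) - 0.25) < 1e-12
print(round(0.5 / ((0.5 - 0.25) / 6), 6))   # 12.0 months until r = 0
```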

Four-Bug Problem

58. (a) According to the hint, the distance between the bugs is shrinking at the rate of 1 inch per

second, and the hint provides an adequate explanation why this is so. Because the bugs

are L inches apart, they will collide in L seconds. Because their motion is constantly

towards each other and they start off in symmetric positions, they must collide at a point

equidistant from all the bugs (i.e., the center of the carpet).

(b) The bugs travel at 1 inch per second for L seconds, hence the bugs travel L inches each.

(c) [Sketch of text Figure 1.3.8(b): bug at P = (r, θ), subsequent position A = (r + dr, θ + dθ), heading toward the next bug at Q = (r, θ + π/2)]

This sketch of text Figure 1.3.8(b) shows a typical bug at P = (r, θ) and its subsequent position A = (r + dr, θ + dθ) as it heads toward the next bug at Q = (r, θ + π/2). Note that dr is negative, and consider that dθ is a very small angle, exaggerated in the drawing.

Consider the small shaded triangle ABP. For small dθ:

• angle BAP is approximately a right angle,
• angle APB = angle OQP = π/4,
• side BP lies along QP.

Hence triangle ABP is similar to triangle OQP, which is a right isosceles triangle, so −dr ≈ r dθ.

Solving this separable DE gives r = ce^(−θ), and the initial condition r(0) = 1 gives c = 1. Hence our bug is following the path r = e^(−θ), and the other bugs' paths simply shift θ by π/2 for each successive bug.

Radiant Energy

59. Separating variables, we can write

dT/(T⁴ − M⁴) = −k dt.

We then write

1/(T⁴ − M⁴) = 1/((T² − M²)(T² + M²)) = (1/(2M²))[1/(T² − M²) − 1/(T² + M²)].

Integrating

∫[1/(T² − M²) − 1/(T² + M²)] dT = −2kM²∫dt,

we find the implicit solution

(1/(2M)) ln|(T − M)/(T + M)| − (1/M) tan⁻¹(T/M) = −2kM²t + c₁

or, in the more convenient form,

ln|(M + T)/(M − T)| + 2 arctan(T/M) = 4kM³t + C.

Suggested Journal Entry

60. Student Project


1.4

Euler’s Method: Numerical Analysis

Easy by Calculator

y′ = t/y, y(0) = 1

1. (a) Using step size 0.1 we enter t₀ and y₀, then calculate row by row to fill in the following table:

Euler's Method (h = 0.1)

n   t_n = t_{n−1} + h   y_n = y_{n−1} + h·y′_{n−1}   y′_n = t_n/y_n
0   0                   1                            0/1 = 0
1   0.1                 1                            0.1/1 = 0.1
2   0.2                 1.01                         0.2/1.01 = 0.1980
3   0.3                 1.0298                       0.3/1.0298 = 0.2913

The requested approximations at t = 0.2 and t = 0.3 are y₂ = y(0.2) ≈ 1.01 and y₃ = y(0.3) ≈ 1.0298.

(b) Using step size 0.05, we recalculate as in (a), but we now need twice as many steps. We get the following results.

Euler's Method (h = 0.05)

n   t_n    y_n      y′_n
0   0      1        0
1   0.05   1        0.05
2   0.1    1.0025   0.0998
3   0.15   1.0075   0.1489
4   0.2    1.0149   0.1971
5   0.25   1.0248   0.2440
6   0.3    1.0370   0.2893

The approximations at t = 0.2 and t = 0.3 are now y₄ = y(0.2) ≈ 1.0149 and y₆ = y(0.3) ≈ 1.037.


(c) Solving the IVP y′ = t/y, y(0) = 1 by separation of variables, we get y dy = t dt. Integration gives

(1/2)y² = (1/2)t² + c.

The initial condition y(0) = 1 gives c = 1/2 and the implicit solution y² − t² = 1. Solving for y gives the explicit solution

y(t) = √(1 + t²).

To four-decimal-place accuracy, the exact values are y(0.2) = 1.0198 and y(0.3) = 1.0440. Hence, the errors in the Euler approximations are

h = 0.1:  error(0.2) = y(0.2) − y₂ = 1.0198 − 1.0100 = 0.0098
          error(0.3) = y(0.3) − y₃ = 1.0440 − 1.0298 = 0.0142
h = 0.05: error(0.2) = y(0.2) − y₄ = 1.0198 − 1.0149 = 0.0050
          error(0.3) = y(0.3) − y₆ = 1.0440 − 1.0370 = 0.0070

Both Euler approximations fall below the exact values, and the smaller step size gives the smaller error.
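The hand calculation in the tables above can be sketched in a few lines of Python (our illustration; the text assumes a calculator or spreadsheet):

```python
# Euler's method for y' = f(t, y): reproduces the h = 0.1 table for y' = t/y.
def euler(f, t0, y0, h, steps):
    t, y, rows = t0, y0, [(t0, y0)]
    for _ in range(steps):
        y += h * f(t, y)          # y_{n+1} = y_n + h f(t_n, y_n)
        t += h
        rows.append((round(t, 10), y))
    return rows

rows = euler(lambda t, y: t / y, 0.0, 1.0, 0.1, 3)
for t, y in rows:
    print(f"t = {t:.1f}   y = {y:.4f}")
# last two rows: y(0.2) = 1.0100 and y(0.3) = 1.0298, as in the table
```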

Calculator Again: y′ = ty, y(0) = 1

2. (a) For each value of h we calculate a table as in Problem 1, with y′ = ty. The results are summarized as follows.

Euler's Method: Comparison of Step Sizes

h = 1:     (t, y) = (0, 1), (1, 1)
h = 0.5:   (t, y) = (0, 1), (0.5, 1), (1, 1.25)
h = 0.25:  (t, y) = (0, 1), (0.25, 1), (0.50, 1.062), (0.75, 1.195), (1, 1.419)
h = 0.125: (t, y) = (0, 1), (0.125, 1), (0.250, 1.0156), (0.375, 1.0474), (0.50, 1.0965), (0.625, 1.1650), (0.750, 1.2560), (0.875, 1.3737), (1, 1.5240)

(b) Solve the IVP y′ = ty, y(0) = 1 by separating variables to get dy/y = t dt. Integration yields ln|y| = t²/2 + c, or y = Ce^(t²/2). Using the initial condition y(0) = 1 gives the exact solution

y(t) = e^(t²/2),

so y(1) = e^(1/2) ≈ 1.6487. Comparing with the Euler approximations gives

h = 1:     error = 1.6487 − 1     = 0.6487
h = 0.5:   error = 1.6487 − 1.25  = 0.3987
h = 0.25:  error = 1.6487 − 1.419 = 0.2297
h = 0.125: error = 1.6487 − 1.524 = 0.1247

Computer Help Advisable

3. y′ = 3t² − y, y(0) = 1; [0, 1]. Using a spreadsheet and Euler's method we obtain the following values:

Spreadsheet Instructions for Euler's Method

     A        B        C                          D
1    n        t_n      y_n = y_{n−1} + h·y′_{n−1}   3t_n² − y_n
2    0        0        1                          =3*B2^2−C2
3    =A2+1    =B2+.1   =C2+.1*D2

Euler's Method (h = 0.1)

t     y ≈       t     y ≈
0     1         0.6   0.6822
0.1   0.9       0.7   0.7220
0.2   0.813     0.8   0.7968
0.3   0.7437    0.9   0.9091
0.4   0.6963    1.0   1.0612
0.5   0.6747

Smaller steps give higher approximate values y_n(t_n). The DE is not separable, so we have no exact solution for comparison.


4. y′ = t² + e^(−y), y(0) = 0; [0, 2]

Using step size h = 0.01 and Euler's method we obtain the following results. (Table shows only selected values.)

Euler's Method (h = 0.01)

t     y ≈        t     y ≈
0     0          1.2   1.2915
0.2   0.1855     1.4   1.6740
0.4   0.3568     1.6   2.1521
0.6   0.5355     1.8   2.7453
0.8   0.7395     2.0   3.4736
1.0   0.9858

Smaller steps give higher approximate values y_n(t_n). The DE is not separable, so we have no exact solution for comparison.

5. y′ = √(t + y), y(1) = 1; [1, 5]

Using step size h = 0.01 and Euler's method we obtain the following results. (Table shows only selected values.)

Euler's Method (h = 0.01)

t     y          t     y
1     1          3.5   6.8792
1.5   1.8078     4     8.5696
2     2.8099     4.5   10.4203
2.5   3.9942     5     12.4283
3     5.3525

Smaller steps give higher y_n(t_n). The DE is not separable, so we have no exact solution for comparison.


6. y′ = t² − y², y(0) = 1; [0, 5]

Using step size h = 0.01 and Euler's method we obtain the following results. (Table shows only selected values.)

Euler's Method (h = 0.01)

t     y          t     y
0     1          3     2.8143
0.5   0.6992     3.5   3.3464
1     0.7463     4     3.8682
1.5   1.1171     4.5   4.3843
2     1.6783     5     4.8967
2.5   2.2615

Smaller steps give higher approximate values y_n(t_n). The DE is not separable, so we have no exact solution for comparison.

7. y′ = t − y, y(0) = 2

Using step size h = 0.05 and Euler's method we obtain the following results. (Table shows only selected values.)

Euler's Method (h = 0.05)

t     y ≈        t     y ≈
0     2          0.6   1.2211
0.1   1.8075     0.7   1.1630
0.2   1.6435     0.8   1.1204
0.3   1.5053     0.9   1.0916
0.4   1.3903     1     1.0755
0.5   1.2962

Smaller steps give higher y_n(t_n). The DE is not separable, so we have no exact solution for comparison.


8. y′ = −t/y, y(0) = 1

Using step size h = 0.1 and Euler's method we obtain the following results.

Euler's Method (h = 0.1)

t     y ≈        t     y ≈
0     1          0.6   0.8405
0.1   1.0000     0.7   0.7691
0.2   0.9900     0.8   0.6781
0.3   0.9698     0.9   0.5601
0.4   0.9389     1     0.3994
0.5   0.8963

The analytical solution of the initial-value problem is

y(t) = √(1 − t²),

whose value at t = 1 is y(1) = 0. Hence, the absolute error at t = 1 is 0.3994. (Note, however, that the solution to this IVP does not exist for t > 1.) You can experiment yourself to see how this error is diminished by decreasing the step size or by using a more accurate method like the Runge-Kutta method.

9. y′ = (sin y)/t, y(2) = 1

Using step size h = 0.05 and Euler's method we obtain the following results. (Table shows only selected values.)

Euler's Method (h = 0.05)

t     y ≈        t     y ≈
2     1          2.6   1.2366
2.1   1.0418     2.7   1.2727
2.2   1.0827     2.8   1.3079
2.3   1.1226     2.9   1.3421
2.4   1.1616     3     1.3755
2.5   1.1995

Smaller step size predicts a lower value.


10. y′ = −ty, y(0) = 1

Using step size h = 0.01 and Euler's method we obtain the following results. (Table shows only selected values.)

Euler's Method (h = 0.01)

t     y ≈        t     y ≈
0     1          0.6   0.8375
0.1   0.9955     0.7   0.7850
0.2   0.9812     0.8   0.7284
0.3   0.9574     0.9   0.6692
0.4   0.9249     1     0.6086
0.5   0.8845

Smaller step size predicts a lower value. The analytical solution of the initial-value problem is

y(t) = e^(−t²/2),

whose exact value at t = 1 is y(1) = 0.6065. Hence, the absolute error at t = 1 is

error = |0.6065 − 0.6086| = 0.0021.


Stefan's Law Again

dT/dt = 0.05(3⁴ − T⁴), T(0) = 4.

11. (a) Euler's Method

h = 0.25                  h = 0.1
n   t_n    T_n            n    t_n    T_n
0   0.00   4.0000         0    0.00   4.0000
1   0.25   1.8125         1    0.10   3.1250
2   0.50   2.6901         2    0.20   3.0532
3   0.75   3.0480         3    0.30   3.0237
4   1.00   2.9810         4    0.40   3.0107
                          5    0.50   3.0049
                          6    0.60   3.0023
                          7    0.70   3.0010
                          8    0.80   3.0005
                          9    0.90   3.0002
                          10   1.00   3.0001

(b) The graph shows that the larger-step approximation (black dots) overshoots the mark but recovers, while the smaller-step approximation (white dots) avoids that problem.

(c) There is an equilibrium solution at T = 3, which is confirmed both by the direction field and the slope dT/dt. This is an exact solution that both Euler approximations get very close to by the time t = 1.

[Figure: direction field for dT/dt = 0.05(81 − T⁴) with both Euler approximations; equilibrium at T = 3]


Nasty Surprise

12.  y′ = y²,  y(0) = 1

Using Euler's method with h = 0.25 we obtain the following values.

Euler's Method (h = 0.25)

 t      y ≈      y′ = y²
 0      1        1
 0.25   1.25     1.5625
 0.50   1.6406   2.6917
 0.75   2.3135   5.3525
 1.00   3.6517

Euler's method estimates the solution at t = 1 to be 3.6517, whereas from the analytical solution

 y(t) = 1/(1 − t),

or from the direction field, we can see that the solution blows up at 1. So Euler's method gives an approximation far too small.
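A short computation confirms how badly Euler's method underestimates a solution that blows up (a minimal sketch; names are ours):

```python
def euler(f, t0, y0, h, n):
    """Advance y' = f(t, y) from (t0, y0) by n Euler steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y^2, y(0) = 1 blows up at t = 1, yet Euler marches right past it
y1 = euler(lambda t, y: y * y, 0.0, 1.0, 0.25, 4)
print(round(y1, 4))  # 3.6517, a finite (and useless) estimate at t = 1
```
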

Approximating e

13.  y′ = y,  y(0) = 1

Using Euler's method with different step sizes h, we have estimated the solution of this IVP at t = 1. The true value of y = e^t at t = 1 is e ≈ 2.7182818….

Euler's Method

 h        y(1) ≈    e − y(1)
 0.5      2.25      0.4683
 0.1      2.5937    0.1245
 0.05     2.6533    0.0650
 0.025    2.6850    0.0332
 0.01     2.7048    0.0135
 0.005    2.7115    0.0068
 0.0025   2.7149    0.0034
 0.001    2.7169    0.0013

We now use the fourth-order Runge-Kutta method with the same values of h, getting the following values.

Runge-Kutta Method

 h       y(1)           e − y(1)
 0.5     2.717346191    0.00093
 0.1     2.718279744    0.21 × 10⁻⁵
 0.05    2.718281693    0.13 × 10⁻⁶
 0.025   2.718281820    0.87 × 10⁻⁸
 0.01    2.718281828    0.22 × 10⁻¹⁰

Note that even with a large step size of h = 0.5 the Runge-Kutta method gives y(1) correct to within 0.001, which is better than Euler's method with step size h = 0.001.
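For y′ = y each Euler step just multiplies y by (1 + h), so the Euler column above is (1 + h)^(1/h); a sketch:

```python
import math

def euler_e(h):
    """Euler's method for y' = y, y(0) = 1, integrated to t = 1."""
    n = round(1 / h)
    y = 1.0
    for _ in range(n):
        y += h * y          # equivalent to y *= (1 + h)
    return y

for h in (0.5, 0.1, 0.01):
    print(h, round(euler_e(h), 4), round(math.e - euler_e(h), 4))
# h = 0.5 gives 2.25, h = 0.1 gives 2.5937, h = 0.01 gives 2.7048
```
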

Double Trouble or Worse

14.  y′ = y^(1/3),  y(0) = 0

(a) The solution starting at the initial point y(0) = 0 never gets off the ground (i.e., it returns all zero values for y_n). For this IVP, y_n(6) = 0.

(b) Starting with y(0) = 0.01, the solution increases. We have given a few values in the following table and see that y_n(6) ≈ 7.9134.

Euler's Method, y′ = y^(1/3), y(0) = 0.01 (h = 0.1)

 t     y         t     y
 0     0.01      3.5   3.5187
 0.5   0.2029    4     4.3005
 1     0.5454    4.5   5.1336
 1.5   0.9913    5     6.0151
 2     1.5213    5.5   6.9424
 2.5   2.1241    6     7.9134
 3     2.7918

(c) The direction field of y′ = y^(1/3) for 0 ≤ t ≤ 6, 0 ≤ y ≤ 10 confirms the values found in (b).

[Figure: direction field of y′ = y^(1/3) on 0 ≤ t ≤ 6, 0 ≤ y ≤ 10.]

Roundoff Problems

15. If a roundoff error of ε occurs in the initial condition, then the solution of the new IVP y′ = y, y(0) = A + ε is

 y(t) = (A + ε)e^t = Ae^t + εe^t.

The difference between this perturbed solution and Ae^t is εe^t. This difference at various intervals of time will be

 t = 1   ⇒  difference = εe
 t = 10  ⇒  difference = εe^10 ≈ 22,026ε
 t = 20  ⇒  difference = εe^20 ≈ 485,165,195ε.

Hence, the accumulated roundoff error grows at an exponential rate.

Think Before You Compute

16. Because y = 2 and y = −2 are constant solutions, any initial conditions starting at these values should remain there. On the other hand, a roundoff error in computations starting near y = −2 is not as serious as near y = 2, because near y = −2 the perturbed solution will move toward the stable solution −2.


Runge-Kutta Method

17.  y′ = t + y,  y(0) = 0,  h = 1

(a) By Euler's method,

 y1 = y0 + h(t0 + y0) = 0.

By second-order Runge-Kutta, y1 = y0 + h k02, where

 k01 = t0 + y0 = 0
 k02 = (t0 + h/2) + (y0 + (h/2)k01) = 1/2 + 0 = 1/2,

so that

 y1 = 0 + 1[1/2] = 0.5.

By fourth-order Runge-Kutta,

 y1 = y0 + (h/6)(k01 + 2k02 + 2k03 + k04),

where

 k01 = t0 + y0 = 0
 k02 = (t0 + h/2) + (y0 + (h/2)k01) = 1/2
 k03 = (t0 + h/2) + (y0 + (h/2)k02) = 1/2 + 1/4 = 3/4
 k04 = (t0 + h) + (y0 + (h/2)k03) = 1 + 3/8 = 1.375,

so that

 y1 = 0 + (1/6)[0 + 2(1/2) + 2(3/4) + 1.375] = (1/6)(3.875) ≈ 0.646.

(b) Second-order Runge-Kutta is much better than Euler for a single-step approximation, but fourth-order Runge-Kutta is almost right on (slightly low).

(c) If y(t) = −t − 1 + e^t, then y(1) = −2 + e ≈ 0.718.
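The fourth-order computation in this problem evaluates k04 with a half step, (h/2)k03. For comparison, the classical fourth-order Runge-Kutta formula takes k4 at the full step y0 + h·k3; a minimal sketch (names are ours):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = t + y, y(0) = 0, one step of size h = 1
y1 = rk4_step(lambda t, y: t + y, 0.0, 0.0, 1.0)
print(round(y1, 4))  # 0.7083, close to the exact y(1) = e - 2 = 0.7183...
```
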


18.  y′ = t + y,  y(0) = 0,  h = −1

(a) By Euler's method,

 y1 = y0 + h(t0 + y0) = 0.

By second-order Runge-Kutta, y1 = y0 + h k02, where

 k01 = t0 + y0 = 0
 k02 = (t0 + h/2) + (y0 + (h/2)k01) = −1/2,

so that

 y1 = 0 − 1[−1/2] = 0.5.

By fourth-order Runge-Kutta,

 y1 = y0 + (h/6)(k01 + 2k02 + 2k03 + k04),

where

 k01 = t0 + y0 = 0
 k02 = (t0 + h/2) + (y0 + (h/2)k01) = −1/2 = −0.5
 k03 = (t0 + h/2) + (y0 + (h/2)k02) = −1/2 + 1/4 = −1/4 = −0.25
 k04 = (t0 + h) + (y0 + (h/2)k03) = −1 + 1/8 = −7/8 = −0.875,

so that

 y1 = 0 + (−1/6)[0 + 2(−1/2) + 2(−1/4) + (−7/8)] = −(1/6)(−2.375) ≈ 0.396.

(b) Second-order Runge-Kutta is high, though closer than Euler; fourth-order Runge-Kutta is very close.

(c) If y(t) = −t − 1 + e^t, then y(−1) = e^(−1) ≈ 0.368.


Runge-Kutta vs. Euler

19.  y′ = 3t² − y,  y(0) = 1;  [0, 1]

Using the fourth-order Runge-Kutta method and h = 0.1 we arrive at the following table of values.

Runge-Kutta Method, y′ = 3t² − y, y(0) = 1

 t     y         t     y
 0     1         0.6   0.7359
 0.1   0.9058    0.7   0.7870
 0.2   0.8263    0.8   0.8734
 0.3   0.7659    0.9   0.9972
 0.4   0.7284    1.0   1.1606
 0.5   0.7173

We compare this with #3, where Euler's method gave y(1) ≈ 1.0612 for h = 0.1. An exact solution by separation of variables is not possible.

20.  y′ = t − y,  y(0) = 2

Using the fourth-order Runge-Kutta method and h = 0.1 we arrive at the following table of values.

Runge-Kutta Method, y′ = t − y, y(0) = 2

 t     y         t     y
 0     2         0.6   1.2464
 0.1   1.8145    0.7   1.1898
 0.2   1.6562    0.8   1.1480
 0.3   1.5225    0.9   1.1197
 0.4   1.4110    1.0   1.1036
 0.5   1.3196

We compare this with #7, where Euler's method gives y(1) ≈ 1.046 for step h = 0.1 and y(1) ≈ 1.07545 for step h = 0.05. An exact solution by separation of variables is not possible.


21.  y′ = −t/y,  y(0) = 1

Using the fourth-order Runge-Kutta method and h = 0.1 we arrive at the following table of values.

Runge-Kutta Method, y′ = −t/y, y(0) = 1

 t     y         t     y
 0     1         0.6   0.8000
 0.1   0.9950    0.7   0.7141
 0.2   0.9798    0.8   0.6000
 0.3   0.9539    0.9   0.4358
 0.4   0.9165    1.0   0.0488
 0.5   0.8660

We compare this with #8, where Euler's method for step h = 0.1 gave y(1) ≈ 0.3994, and the exact solution y(t) = √(1 − t²) gives y(1) = 0. The Runge-Kutta approximate solution is much closer to the exact solution.

22.  y′ = −ty,  y(0) = 1

Using the fourth-order Runge-Kutta method and h = 0.01 we arrive at the following table. (Table shows only selected values.)

Runge-Kutta Method, y′ = −ty, y(0) = 1

 t     y         t     y
 0     1         0.6   0.8353
 0.1   0.9950    0.7   0.7827
 0.2   0.9802    0.8   0.7261
 0.3   0.9560    0.9   0.6670
 0.4   0.9231    1     0.6065
 0.5   0.8825

We compare this with #10, where Euler's method with step h = 0.01 gave y(1) ≈ 0.6086, and the exact solution y(t) = e^(−t²/2) gives y(1) = 0.6065. The Runge-Kutta approximate solution is exact within the given accuracy.
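The Runge-Kutta computation in this problem can be sketched as follows (a minimal implementation; names are ours):

```python
import math

def rk4(f, t0, y0, h, n):
    """Integrate y' = f(t, y) with n classical Runge-Kutta steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# y' = -t*y, y(0) = 1; exact solution e^(-t^2/2)
y1 = rk4(lambda t, y: -t * y, 0.0, 1.0, 0.01, 100)
print(round(y1, 4), round(math.exp(-0.5), 4))  # both print 0.6065
```
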


Euler’s Errors

23. (a) Differentiating y′ = f(t, y) gives

 y″ = f_t + f_y y′ = f_t + f_y f.

Here we assume f_t, f_y and y′ = f are continuous, so y″ is continuous as well.

(b) The expression

 y(t_n + h) = y(t_n) + y′(t_n)h + (1/2)y″(t*)h²

is simply a statement of Taylor series to first degree, with remainder.

(c) Direct computation gives

 |e_{n+1}| ≤ (h²/2)M.

(d) We can make the local discretization error e_n in Taylor's method less than a preassigned value E by choosing h so it satisfies

 |e_n| ≤ Mh²/2 ≤ E,

where M is the maximum of |y″| on the interval [t_n, t_{n+1}]. Hence, if h ≤ √(2E/M), we have the desired condition |e_n| ≤ E.
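The bound in part (c) is easy to check numerically for y′ = y, where y″ = e^t; a sketch (M is taken as the maximum of y″ on the step interval, an assumption appropriate only to this example):

```python
import math

# One Euler step for y' = y from y(0) = 1: the local error |e^h - (1 + h)|
# should be bounded by M h^2 / 2 with M = max of y'' = e^t on [0, h].
for h in (0.1, 0.01, 0.001):
    local_error = abs(math.exp(h) - (1 + h))
    bound = math.exp(h) * h * h / 2     # M = e^h on [0, h]
    assert local_error <= bound
    print(h, local_error, bound)
```
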

Three-Term Taylor Series

24. (a) Starting with y′ = f(t, y) and differentiating with respect to t, we get

 y″ = f_t(t, y) + f_y(t, y)y′ = f_t(t, y) + f_y(t, y)f(t, y).

Hence, we have the new rule

 y_{n+1} = y_n + h f(t_n, y_n) + (1/2)h²[f_t(t_n, y_n) + f_y(t_n, y_n)f(t_n, y_n)].

(b) The local discretization error has the order of the highest power of h in the remainder for the approximation of y_{n+1}, which in this case is 3.

(c) For the equation y′ = f(t, y) = t/y we have f_t(t, y) = 1/y, f_y(t, y) = −t/y², and so the preceding three-term Taylor series becomes

 y_{n+1} = y_n + h(t_n/y_n) + (1/2)h²[1/y_n − t_n²/y_n³].

Using this formula and a spreadsheet we get the following results.

Taylor's Three-Term Series Approximation of y′ = t/y, y(0) = 1

 t     y         t     y
 0     1         0.6   1.1667
 0.1   1.005     0.7   1.2213
 0.2   1.0199    0.8   1.2814
 0.3   1.0442    0.9   1.3462
 0.4   1.0773    1.0   1.4151
 0.5   1.1185

The exact solution of the initial-value problem y′ = t/y, y(0) = 1 is y(t) = √(1 + t²), so we have y(1) = √2 ≈ 1.4142…. Taylor's three-term method gave the value 1.4151, which has an error of

 |√2 − 1.4151| ≈ 0.0009.

(d) For the differential equation y′ = f(t, y) = ty we have f_t(t, y) = y, f_y(t, y) = t, so the three-term Taylor approximation becomes

 y_{n+1} = y_n + h t_n y_n + (1/2)h²[y_n + t_n² y_n].

Using this formula and a spreadsheet, we arrive at the following results.

Taylor's Three-Term Series Approximation of y′ = ty, y(0) = 1

 t     y         t     y
 0     1         0.6   1.1962
 0.1   1.005     0.7   1.2761
 0.2   1.0201    0.8   1.3749
 0.3   1.0458    0.9   1.4962
 0.4   1.0829    1.0   1.6444
 0.5   1.1325

The solution of y′ = ty, y(0) = 1 is y(t) = e^(t²/2), so y(1) = √e ≈ 1.6487…. Hence the error at t = 1 using Taylor's three-term method is

 |√e − 1.6444| ≈ 0.0043.
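Taylor's three-term formula for y′ = ty from part (d) is easy to code (a sketch; the function name is ours):

```python
def taylor3_ty(h, n):
    """Three-term Taylor series for y' = t*y, y(0) = 1:
    y_{n+1} = y_n + h*t_n*y_n + (h^2/2)*(y_n + t_n^2 * y_n)."""
    t, y = 0.0, 1.0
    for _ in range(n):
        y += h * t * y + (h * h / 2) * (y + t * t * y)
        t += h
    return y

print(round(taylor3_ty(0.1, 10), 4))  # 1.6444, vs the exact sqrt(e) = 1.6487...
```
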


Richardson’s Extrapolation

Sharp eyes may have detected the elimination of absolute value signs when equation (7) is rewritten as equation (9). This is legitimate with no further argument if y′ is positive and monotone increasing, as is the case in the suggested exercises.

25.  y′ = y,  y(0) = 1.

Our calculations are listed in the following table. Note that we use y_R(0.1) as the initial condition for computing y_R(0.2).

 t*    One-step Euler   Two-step Euler   Richardson approx.             Exact solution
       y(t*, h)         y(t*, h/2)       y_R(t*) = 2y(t*, h/2) − y(t*, h)   y = e^t
 0.1   1.1              1.1025           1.1050                         e^0.1 = 1.1052
 0.2   1.2155           1.2183           1.2211                         e^0.2 = 1.2214
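The first row of the table can be reproduced directly (a sketch; names are ours):

```python
import math

def euler(f, t0, y0, h, n):
    """n Euler steps of size h for y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: y
h = 0.1
one_step = euler(f, 0.0, 1.0, h, 1)        # y(0.1; h)   = 1.1
two_step = euler(f, 0.0, 1.0, h / 2, 2)    # y(0.1; h/2) = 1.1025
richardson = 2 * two_step - one_step       # eliminates the O(h) error term
print(one_step, two_step, richardson, math.exp(0.1))
```
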

26.  y′ = ty,  y(0) = 1.

Our calculations are listed in the following table. Note that we use y_R(0.1) as the initial condition for computing y_R(0.2).

 t*    One-step Euler   Two-step Euler   Richardson approx.   Exact solution
       y(t*, h)         y(t*, h/2)       y_R(t*)              y = e^(t²/2)
 0.1   1.0              1.0025           1.005                e^0.005 = 1.0050
 0.2   1.01505          1.0176           1.02005              e^0.02 = 1.0202

27.  y′ = y²,  y(0) = 1.

Our calculations are listed in the following table. Note that we use y_R(0.1) as the initial condition for computing y_R(0.2).

 t*    One-step Euler   Two-step Euler   Richardson approx.   Exact solution
       y(t*, h)         y(t*, h/2)       y_R(t*)              y = 1/(1 − t)
 0.1   1.1              1.1051           1.1102               1.1111
 0.2   1.2335           1.2405           1.2476               1.2500

28.  y′ = sin(ty),  y(0) = 1.

Our calculations are listed in the following table. Note that we use y_R(0.1) as the initial condition for computing y_R(0.2).

 t*    One-step Euler   Two-step Euler   Richardson approx.   Exact solution
       y(t*, h)         y(t*, h/2)       y_R(t*)              (no formula)
 0.1   1.0              1.0025           1.0050
 0.2   1.0150           1.0176           1.0201               1.02013 by Runge-Kutta

Integral Equation

29. (a) Starting with

 y(t) = y0 + ∫_{t0}^{t} f(s, y(s)) ds,

we differentiate with respect to t, getting y′ = f(t, y(t)). We also have y(t0) = y0.

Conversely, starting with the initial-value problem

 y′ = f(t, y(t)),  y(t0) = y0,

we integrate, getting the solution

 y(t) = ∫_{t0}^{t} f(s, y(s)) ds + c.

Using the initial condition y(t0) = y0 gives the constant c = y0. Hence, the integral equation is equivalent to the IVP.

(b) The initial-value problem y′ = f(t), y(0) = y0 is transformed into the integral equation

 y(t) = y0 + ∫_0^t f(s) ds.

To find the approximate value of the solution at t = T, we evaluate the preceding integral at t = T using the Riemann sum with left endpoints, getting

 y(T) = y0 + ∫_0^T f(s) ds ≈ y0 + h[f(0) + f(h) + … + f(T − h)].

If, however, we write the expression as

 y(T) = y0 + h[f(0) + f(h) + … + f(T − h)]
      = (y0 + hf(0)) + h[f(h) + f(2h) + … + f(T − h)]
      = y1 + h[f(h) + f(2h) + … + f(T − h)]
      = (y1 + hf(h)) + h[f(2h) + f(3h) + … + f(T − h)]
      = y2 + h[f(2h) + f(3h) + … + f(T − h)]
      = ⋯
      = y_{n−1} + hf(T − h)
      = y_n,

we get the desired conclusion.

(c) The Riemann sum only holds for integrals of the form ∫_a^b f(t) dt.
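Part (b) says Euler's method for y′ = f(t) is just a left-endpoint Riemann sum; this is easy to confirm numerically (a sketch, with our own names):

```python
def left_riemann(f, a, b, n):
    """Left-endpoint Riemann sum of f on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def euler(f, t0, y0, h, n):
    """Euler's method for y' = f(t) (right side independent of y)."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t)
        t += h
    return y

f = lambda t: t * t
# Euler for y' = f(t), y(0) = 0 is exactly y0 plus the left Riemann sum
print(euler(f, 0.0, 0.0, 0.1, 10), left_riemann(f, 0.0, 1.0, 10))
```
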

Computer Lab: Other Methods

30. Sample study of different numerical methods. We solve the IVP of Problem 5, y′ = √(t + y), y(1) = 1, by several different methods using step size h = 0.1. The table shows a printout for selected values of y using one non-Euler method.

Fourth-Order Runge-Kutta Method

 t     y         t     y
 1     1         3.5   6.8910
 1.5   1.8100    4     8.5840
 2     2.8144    4.5   10.4373
 2.5   4.0010    5     12.4480
 3     5.3618

We can now compare the following approximations for Problem 5:

 Euler's method, h = 0.1:       y(5) ≈ 12.2519  (answer in text)
 Euler's method, h = 0.01:      y(5) ≈ 12.4283  (solution in manual)
 Runge-Kutta method, h = 0.1:   y(5) ≈ 12.4480  (above)

We have no exact solution for Problem 5, but you might use step h = 0.1 to approximate y(5) by other methods (for example the Adams-Bashforth method or the Dormand-Prince method), then explain which method seems most accurate. A graph of the direction field could give insight.
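The Runge-Kutta column can be reproduced with a short loop (a sketch, taking Problem 5 to be y′ = √(t + y), y(1) = 1; names are ours):

```python
def rk4(f, t0, y0, h, n):
    """Classical fourth-order Runge-Kutta for y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Problem 5: y' = sqrt(t + y), y(1) = 1, integrated to t = 5 with h = 0.1
y5 = rk4(lambda t, y: (t + y) ** 0.5, 1.0, 1.0, 0.1, 40)
print(round(y5, 4))  # matches the tabulated y(5) ≈ 12.4480
```
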

Suggested Journal Entry I

31. Student Project

Suggested Journal Entry II

32. Student Project

SECTION 1.5 Picard’s Theorem: Theoretical Analysis 75

1.5  Picard’s Theorem: Theoretical Analysis

Picard’s Conditions

1. (a)  y′ = f(t, y) = 1 − ty,  y(0) = 0

Hence f_y = −t. The fact that f is continuous for all t tells us a solution exists passing through each point in the ty plane. The further fact that the derivative f_y is also continuous for all t and y tells us that the solution is unique. Hence, there is a unique solution of this equation passing through y(0) = 0. The direction field is shown in the figure.

[Figure: direction field on −3 ≤ t ≤ 3, −3 ≤ y ≤ 3.]

(b) Picard's conditions hold in the entire ty plane.

(c) Not applicable; the answer to part (a) is positive.

2. (a)  y′ = (2 − y)/t,  y(0) = 1

Here f(t, y) = (2 − y)/t and f_y = −1/t. The functions f and f_y are continuous for t ≠ 0, so there is a unique solution passing through any initial point y(t0) = y0 with t0 ≠ 0. When t0 = 0 the derivative y′ is not only discontinuous, it isn't defined. No solution of this DE passes through points (t0, y0) with t0 = 0. In particular the DE with IC y(0) = 1 does not make sense.

(b) Uniqueness/existence holds in either the right half plane t > 0 or the left half plane t < 0; any rectangle that does not include t = 0 will satisfy Picard's Theorem.

(c) If we think of DEs as models for physical phenomena, we might be tempted to replace t0 in the IC by a small number and examine the unique solution, which we know exists. It would also be useful to draw the direction field of this equation and see the big picture. The direction field is shown in the figure.

[Figure: direction field on 0 ≤ t ≤ 3, −2 ≤ y ≤ 6.]


3. (a)  y′ = y^(4/3),  y(0) = 0

Here

 f(t, y) = y^(4/3),  f_y = (4/3)y^(1/3).

[Figure: direction field on −4 ≤ t ≤ 4, −4 ≤ y ≤ 4.]

Here f and f_y are continuous for all t and y, so by Picard's theorem we conclude that the DE has a unique solution through any initial condition y(t0) = y0. In particular, there will be a unique solution passing through y(0) = 0, which we know to be y(t) ≡ 0. The direction field of the equation is shown in the figure.

(b) Picard's conditions hold in the entire ty plane.

(c) Not applicable; the answer to part (a) is positive.

4. (a)  y′ = (t − y)/(t + y),  y(0) = −1

Here both

 f(t, y) = (t − y)/(t + y)  and  f_y = −2t/(t + y)²

[Figure: direction field on −4 ≤ t ≤ 4, −4 ≤ y ≤ 4.]

are continuous in t and y except when y = −t. Hence, there is a unique solution passing through any initial condition y(t0) = y0 as long as y0 ≠ −t0. When y = −t the derivative y′ is not only discontinuous but not even defined, so there is really no need to resort to Picard's theorem to conclude there is no solution passing through such points.

(b), (c) Picard's conditions hold for the entire ty plane except the line y = −t, so any rectangle that does not include any part of y = −t satisfies Picard's Theorem.


5. (a)  y′ = 1/(t² + y²),  y(0) = 0

Here both

 f(t, y) = 1/(t² + y²)  and  f_y = −2y/(t² + y²)²

[Figure: direction field on −2 ≤ t ≤ 2, −2 ≤ y ≤ 2.]

are continuous for all t and y except at the point t = y = 0. Hence, there is a unique solution passing through any initial point y(t0) = y0 except y(0) = 0. In this case f does not exist, so the IVP does not make sense. The direction field of the equation illustrates these ideas (see figure).

(b) Picard's Theorem gives existence/uniqueness for any rectangle that does not include the origin.

(c) It may be useful to replace the initial condition y(0) = 0 by y(0) = y0 with small but nonzero y0.

6. (a)  y′ = tan y,  y(0) = π/2

Here

 f(t, y) = tan y,  f_y = sec² y

are both continuous except at the points y = ±π/2, ±3π/2, ….

[Figure: direction field on −2 ≤ t ≤ 2, −3π/2 ≤ y ≤ 3π/2.]

Hence, there exists a unique solution passing through y(t0) = y0 except when y0 = ±π/2, ±3π/2, …. The IVP passing through π/2 does not have a solution. It would be useful to look at the direction field to get an idea of the behavior of solutions for nearby initial points. The direction field of the equation shows that where Picard's Theorem does not work the slope has become vertical (see figure).

(b) Existence/uniqueness conditions are satisfied over any rectangle with y-values between two successive odd multiples of π/2.

(c) There are no solutions going forward in time from any points near (0, π/2).

7. (a)  y′ = ln|y − 1|,  y(0) = 2

Here

 f(t, y) = ln|y − 1|,  f_y = 1/(y − 1)

are both continuous for all t and y as long as y ≠ 1, where neither is defined.

[Figure: direction field on −4 ≤ t ≤ 4, −4 ≤ y ≤ 4.]

Hence, there is a unique solution passing through any initial point y(t0) = y0 with y0 ≠ 1. In particular, there is a unique solution passing through y(0) = 2. The direction field of the equation illustrates these ideas (see figure).

(b), (c) The Picard Theorem holds for the entire ty plane except the line y = 1.

8. (a)  y′ = y/(y − t),  y(1) = 1

Here

 f(t, y) = y/(y − t),  f_y = −t/(y − t)²

[Figure: direction field on −4 ≤ t ≤ 4, −4 ≤ y ≤ 4.]

are continuous for all t and y except when y = t, where neither function exists. Hence, we can be assured there is a unique solution passing through y(t0) = y0 except when t0 = y0. When t0 = y0 the derivative isn't defined, so IVPs with these ICs do not make sense. Hence the IVP with y(1) = 1 is not defined. See the figure for the direction field of the equation.

(b) The Picard Theorem holds for the entire ty plane except the line y = t, so it holds for any rectangle that does not include any part of y = t.

(c) It may be useful to replace the initial condition y(1) = 1 by y(1) = 1 + ε. However, you should note that the direction field shows that ε > 0 will send the solution toward ∞, while ε < 0 will send the solution toward zero.

Linear Equations

9.  y′ + p(t)y = q(t)

For the first-order linear equation, we can write y′ = q(t) − p(t)y, and so

 f(t, y) = q(t) − p(t)y,
 f_y(t, y) = −p(t).

Hence, if we assume p(t) and q(t) are continuous, then Picard's theorem holds at any point y(t0) = y0.

Eyeballing the Flows

For the following problems it appears from the figures given in the text that:

10. A unique solution will pass through each point A, B, C, and D, and the solutions appear to exist for all t.

11. A unique solution passes through A and B defined for negative t; no unique solution passes through C, where the derivative is not uniquely defined; a unique solution passes through D for positive t.

12. Unique solutions exist passing through points B and C on intervals until the solution curve reaches the t-axis, where a finite slope does not exist. Nonunique solutions at A; possibly unique solutions at D, where t = y = 0.

13. A unique solution will pass through each of the points A, B, C, and D. Solutions appear to exist for all t.

14. A unique solution will pass through each of the points A, B, C, and D. Solutions appear to exist for all t.

15. A unique solution will pass through each of the points B, C, and D. Solutions exist only for t > t_A or t < t_A because all solutions appear to leave from or go toward A, where there is no unique slope.

16. Unique solutions will pass through each of the points A, B, C, and D. Solutions appear to exist for all t.

17. A unique solution will pass through each of the points A, B, C, and D. Solutions appear to exist for all t.

18. A unique solution will pass through each of the points A, B, C, and D. Solutions appear to exist for all t.

Local Conclusions

19. (a)  f(t, y) = y²,  f_y = 2y,  y(0) = 1

are both continuous for all t, y, so by Picard's theorem there is a unique solution passing through any point (t, y). Hence the existence and uniqueness conditions hold for any initial condition in the entire ty plane. However, this example exhibits an important weakness of Picard's Theorem: for any particular initial condition, the solution may not exist over the entire plane. In the given IVP the solution exists only for t < 1.

(b) [Figure: solution of y′ = y², y(0) = 1, on −3 ≤ t ≤ 3, −3 ≤ y ≤ 3, with vertical asymptote at t = 1.]

(c) The separated equation is y⁻² dy = dt. Integrating gives the result −y⁻¹ = t + c, and solving for y we get y = −1/(t + c). Substituting the initial condition y(0) = 1 gives c = −1. Hence, we have

 y(t) = 1/(1 − t),  t < 1,  y > 0.

The interval over which this solution is defined cannot pass through t = 1, and the solution with IC y(0) = 1 exists on the interval (−∞, 1).

(d) Because Picard's theorem holds for all t, y, we conclude there exists a unique solution to y′ = y², y(t0) = y0 for any (t0, y0). To find the size of the interval of existence, we must solve the IVP, getting

 y(t) = −1/(t − t0 − 1/y0).

Hence, the interval over which this solution is defined cannot pass through t = t0 + 1/y0, which implies an interval of (−∞, t0 + 1/y0) for positive y0 and (t0 + 1/y0, ∞) for negative y0.

Nonuniqueness

20.  y′ = y^(1/3),  y(0) = 0

Because f = y^(1/3) is continuous for all (t, y), Picard's theorem says that there exists a solution through any point y(t0) = y0. However, f_y = (1/3)y^(−2/3) is not continuous when y = 0, so Picard's theorem does not guarantee a unique solution through any point where y = 0.

In fact we can find an infinite number of solutions passing through the origin. We first separate variables, getting y^(−1/3) dy = dt, and integrating gives

 (3/2)y^(2/3) = t + c.

Picking the initial condition y(0) = 0, we find c = 0. Hence, we have found one solution of the initial-value problem as

 y(t) = ±((2/3)t)^(3/2).

But clearly, y(t) ≡ 0 is another solution. In fact, we can paste these solutions together at t = 0. Furthermore, we can also paste together y = 0 with infinitely many additional solutions, using any c < 0, getting an infinite number of solutions to the initial-value problem as

 y(t) = 0                       for t < −c,
 y(t) = ±((2/3)(t + c))^(3/2)   for t ≥ −c,

for any c ≤ 0. A few of these solutions are plotted (see figure).

[Figure: nonuniqueness of solutions through y(0) = 0; branches leave y = 0 at t = 0, 1, 2 for c = 0, −1, −2.]
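One can check numerically that a pasted-together solution really does satisfy the DE on both pieces; a sketch with the branch point at t = 1 (i.e., c = −1; names are ours):

```python
def y(t, c=-1.0):
    """A pasted solution of y' = y^(1/3) through y(0) = 0: identically
    zero until t = -c, then the branch ((2/3)(t + c))^(3/2)."""
    return 0.0 if t < -c else ((2.0 / 3.0) * (t + c)) ** 1.5

# Check y' = y^(1/3) by a centered difference, away from the corner at t = 1
for t in (0.5, 2.0, 3.0):
    dt = 1e-6
    deriv = (y(t + dt) - y(t - dt)) / (2 * dt)
    assert abs(deriv - y(t) ** (1.0 / 3.0)) < 1e-5
print("both pieces satisfy the DE")
```
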

82 CHAPTER 1 First-Order Differential Equations

More Nonuniqueness

21. y y ′ = , ( ) 0 0 y = ,

0

0 t >

For

0

t t < , the solution is ( ) 0 y t ≡ . For

0

t t > , we have ( )

2

0

1

4

y t t = − .

At

0

t t = the left-hand derivative of ( ) 0 y t ≡ is 0, and the right-hand derivative of

( ) ( )

2

0

1

4

y t t t = − is 0, so they agree.

Seeing vs. Believing

22. No, the solution does not "merge" with y = −1.

Consider y′ = 3t²(1 + y) = f(t, y). Note that y = −1 is an equilibrium solution.

We observe:

1. f(t, y) is continuous for all t and y.
2. ∂f/∂y = 3t² is continuous for all t and y.

By Picard's Theorem, we know there is a unique solution through any initial point. Because the line y = −1 passes through every point with y-coordinate −1, no other solution can merge with y = −1; it can only approach y = −1 asymptotically.

Converse of Picard’s Theorem Fails

23. (a) Note that dy/dt = |y| = f(t, y), so that

 f(t, y) = −y for y < 0,  f(t, y) = y for y ≥ 0,

has a partial derivative

 ∂f/∂y = −1 for y < 0,  ∂f/∂y = 1 for y > 0,

that is not continuous at y = 0. Consequently the hypothesis of Picard's Theorem is not fulfilled by the DE in any region containing points on the t-axis.

(b) Note that y ≡ 0 is a solution of the IVP

 dy/dt = |y|,  y(0) = 0.

When y < 0, the DE becomes y′ = −y, which has the general solution y = Ce^(−t). When y ≥ 0, the DE becomes y′ = y, which has general solution y = Ce^t. Note that the only solution that satisfies the IVP occurs when C = 0, which is precisely the function y ≡ 0, so that is a unique solution.


Hubbard’s Leaky Bucket

24.  dh/dt = −k√h

(a) f(t, h) = −k√h, ∂f/∂h = −k/(2√h).

Because ∂f/∂h is not continuous at h = 0, we cannot be sure of unique solutions passing through any points where h(t) = 0.

(b) Let us assume the bucket becomes empty at t = T < t0. Solving the IVP with h(T) = 0, we find an infinite number of solutions:

 h(t) = (1/4)(kT − kt)²  for t < T,
 h(t) = 0                for t ≥ T.

[Figure: solutions h(t), starting from h0, that reach zero at various values of T.]

Each one of these functions describes the bucket emptying. Hence, we don't know when the bucket became empty. We show a few such solutions for T < t0.

(c) If we start with a full bucket when t = 0, then (b) gives

 h(0) = (1/4)k²T² = h0.

Hence the time to empty the bucket is

 T = 2√h0/k.

The Melted Snowball

25. (a) We are given dV/dt = −kA, where A is the surface area of the snowball and k > 0 is the rate at which the snowball decreases in volume. Given the relationship between the volume of the snowball and its radius r, which is V = (4/3)πr³, and between the surface area of the snowball and its radius, given by A = 4πr², we can relate A and V by

 A = 4π(3V/4π)^(2/3) = (36π)^(1/3) V^(2/3).

(b) Here

 f(t, V) = −kV^(2/3),  ∂f/∂V = −(2/3)kV^(−1/3).

Because the uniqueness condition for Picard's theorem does not hold when V = 0, we cannot conclude that the IVP

 dV/dt = −kV^(2/3),  V(t0) = 0

has a unique solution. Hence, we cannot tell when the snowball melted; the backwards solution is not unique.

(c) Separating dV/dt = −kV^(2/3), where k > 0, we have V^(−2/3) dV = −k dt. Integrating, we find

 3V^(1/3) = −kt + c.

Let T < t0 be the time the snowball melted. Then using the initial condition V(T) = 0 we find

 V(t) = K((T − t)/3)³

[Figure: solutions of dV/dt = −kV^(2/3) with V(t0) = 0 for various values of T.]

for K = k³ and t < T. But we know V(t) ≡ 0 is also a solution of this initial-value problem, so we can piece together the nonzero solutions with the zero solution and get, for T < t0, the infinite family of solutions

 V(t) = K((T − t)/3)³  for t < T,
 V(t) = 0              for t ≥ T.

(d) The function f(t, V) = V^(2/3) does not satisfy the uniqueness condition of Picard's theorem when V = 0.


The Accumulating Raindrop

26. (a) We are given dV/dt = kA, where A is the surface area of the raindrop and k > 0 is the rate at which the raindrop increases in volume. We substitute into dV/dt = kA the relationships

 V = (4/3)πr³,  A = 4πr²

for the volume V and area A of a raindrop in terms of its radius r, getting

 A = 4π(3V/4π)^(2/3) = (36π)^(1/3) V^(2/3).

Hence

 dV/dt = kV^(2/3).

(b) Separating variables in the above DE, we have V^(−2/3) dV = k dt. Integrating, we find

 3V^(1/3) = kt + c.

Using the initial condition V(t0) = 0, we get the relation c = −kt0, and hence

 V(t) = K((t − t0)/3)³,  where K = k³.

But clearly, V(t) ≡ 0 is also a solution of this initial-value problem, so we can piece together the nonzero solutions with the zero solution to get the infinite family of solutions

 V(t) = 0               for t < t0,
 V(t) = K((t − t0)/3)³  for t ≥ t0.


Different Translations

27. (a) y′ = y has an infinite family of solutions of the form y = Ce^t.
(To check: y′ = (Ce^t)′ = Ce^t = y.) Note that for any real number a, y = e^(t−a) = Ce^t
(with C = e^(−a)) is a solution for every a ∈ R.

(b) Differentiating

    s(t) = { 0,         t < a
           { (t − a)²,  t ≥ a

we obtain a continuous derivative

    s′(t) = { 0,           t < a
            { 2(t − a),    t ≥ a.

Note that s′ = 2√s for both parts of the curve.

(c) For (a), with y′ = y, solutions y = Ce^t gradually approach zero as t → −∞.

For (b), with s′ = 2√s, solutions

    s(t) = { 0          for t < a
           { (t − a)²   for t ≥ a

go to zero at t → a.
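The claim in (b) that s′ = 2√s holds on both pieces can be confirmed numerically. The sketch below (arbitrary choices of the translation a and the sample points) uses a central-difference derivative:

```python
import math

def s(t, a):
    """The translated solution family from part (b)."""
    return 0.0 if t < a else (t - a) ** 2

def ds(t, a, h=1e-6):
    """Central-difference approximation to s'(t)."""
    return (s(t + h, a) - s(t - h, a)) / (2 * h)

for a in [-1.0, 0.0, 2.5]:                    # arbitrary translations
    for t in [a - 1.0, a + 1.0, a + 3.0]:     # points on both pieces
        assert abs(ds(t, a) - 2 * math.sqrt(s(t, a))) < 1e-4
```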


Picard Approximations

28. For y′ = t − y, y(0) = 1, with starting function y₀(t) = 1:

    y₀(t) = 1
    y₁(t) = 1 + ∫₀ᵗ (s − y₀(s)) ds = 1 + ∫₀ᵗ (s − 1) ds = 1 − t + (1/2)t²
    y₂(t) = 1 + ∫₀ᵗ [s − (1 − s + (1/2)s²)] ds = 1 − t + t² − (1/6)t³
    y₃(t) = 1 + ∫₀ᵗ [s − (1 − s + s² − (1/6)s³)] ds = 1 − t + t² − (1/3)t³ + (1/24)t⁴

29. For y′ = t − y, y(0) = 1, with starting function y₀(t) = 1 − t:

    y₀(t) = 1 − t
    y₁(t) = 1 + ∫₀ᵗ (s − y₀(s)) ds = 1 + ∫₀ᵗ [s − (1 − s)] ds = 1 − t + t²
    y₂(t) = 1 + ∫₀ᵗ [s − (1 − s + s²)] ds = 1 − t + t² − (1/3)t³
    y₃(t) = 1 + ∫₀ᵗ [s − (1 − s + s² − (1/3)s³)] ds = 1 − t + t² − (1/3)t³ + (1/12)t⁴

30. For y′ = t − y, y(0) = 1, with starting function y₀(t) = e^(−t):

    y₀(t) = e^(−t)
    y₁(t) = 1 + ∫₀ᵗ (s − e^(−s)) ds = e^(−t) + (1/2)t²
    y₂(t) = 1 + ∫₀ᵗ [s − (e^(−s) + (1/2)s²)] ds = e^(−t) + (1/2)t² − (1/6)t³
    y₃(t) = 1 + ∫₀ᵗ [s − (e^(−s) + (1/2)s² − (1/6)s³)] ds = e^(−t) + (1/2)t² − (1/6)t³ + (1/24)t⁴

31. For y′ = t − y, y(0) = 1, with starting function y₀(t) = 1 + t:

    y₀(t) = 1 + t
    y₁(t) = 1 + ∫₀ᵗ (s − y₀(s)) ds = 1 + ∫₀ᵗ [s − (1 + s)] ds = 1 − t
    y₂(t) = 1 + ∫₀ᵗ [s − (1 − s)] ds = 1 − t + t²
    y₃(t) = 1 + ∫₀ᵗ [s − (1 − s + s²)] ds = 1 − t + t² − (1/3)t³


Computer Lab

32. (a) We show how the computer algebra system Maple can be used to estimate the solution of
#29, y′ = t − y, y(0) = 1, with starting function y₀(t) = 1 − t. We leave for the reader the
other starting functions for #28, 30, and 31. In Maple open a new window and type the
int() command. In this problem, because f(t, y) = t − y, y(0) = 1, we can find the
sequence of approximations

    y_{n+1}(t) = y₀ + ∫₀ᵗ f(s, y_n(s)) ds = 1 + ∫₀ᵗ (s − y_n(s)) ds

by typing

    y0 := 1 - t;
    y1 := 1 + int(t - y0, t);
    y2 := 1 + int(t - y1, t);
    y3 := 1 + int(t - y2, t);
    y4 := 1 + int(t - y3, t);
    y5 := 1 + int(t - y4, t);
    y6 := 1 + int(t - y5, t);

If you then hit the enter key you will see displayed

    y0 = 1 − t
    y1 = 1 − t + t²
    y2 = 1 − t + t² − (1/3)t³
    y3 = 1 − t + t² − (1/3)t³ + (1/12)t⁴
    y4 = 1 − t + t² − (1/3)t³ + (1/12)t⁴ − (1/60)t⁵
    y5 = 1 − t + t² − (1/3)t³ + (1/12)t⁴ − (1/60)t⁵ + (1/360)t⁶
    y6 = 1 − t + t² − (1/3)t³ + (1/12)t⁴ − (1/60)t⁵ + (1/360)t⁶ − (1/2520)t⁷

Of course you can find more iterates in the same way. E.g., if you type

    y7 := 1 + int(t - y6, t);

then hit Enter, you will see

    y7 = 1 − t + t² − (1/3)t³ + (1/12)t⁴ − (1/60)t⁵ + (1/360)t⁶ − (1/2520)t⁷ + (1/20160)t⁸.

To get a plot of y6 and the solution y(t) = 2e^(−t) + t − 1 (see part (b)) as shown in the
figure, type the Maple command

    plot({2*exp(-t) + t - 1, y6}, t = 0..4, y = 0..2);

[Figure: Picard's sixth approximation to y′ = t − y, y(0) = 1, plotted against the exact solution.]

(b) If you recall the Maclaurin series

    2e^(−t) = 2(1 − t + (1/2)t² − (1/3!)t³ + ⋯) = 2 − 2t + t² − (1/3)t³ + ⋯

and carry out a little algebra, you will convince yourself that these Picard approximations are
converging to the analytical solution y(t) = 2e^(−t) + t − 1. For most initial-value problems,
however, such a nice identification is not possible.
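The same iteration can also be reproduced without a computer algebra system. The Python sketch below (not the Maple session above) builds the Picard iterates y_{n+1}(t) = 1 + ∫₀ᵗ (s − y_n(s)) ds as polynomial coefficient lists and checks their convergence to 2e^(−t) + t − 1:

```python
import math

def picard_step(p):
    """One Picard iterate for y' = t - y, y(0) = 1, with polynomials
    stored as coefficient lists (index = power of t)."""
    q = [-c for c in p]                      # -y_n(s)
    q[1] += 1.0                              # + s  (every iterate here has degree >= 1)
    # integrate term by term from 0 to t, then add the initial value 1
    return [1.0] + [c / (k + 1) for k, c in enumerate(q)]

def horner(p, t):
    """Evaluate the coefficient list p at t."""
    return sum(c * t ** k for k, c in enumerate(p))

y = [1.0, -1.0]                              # starting function y0(t) = 1 - t
for _ in range(8):
    y = picard_step(y)

exact = lambda t: 2 * math.exp(-t) + t - 1   # solution from part (b)
for t in [0.0, 0.5, 1.0]:
    assert abs(horner(y, t) - exact(t)) < 1e-6
```

One step applied to [1.0, -1.0] returns [1.0, -1.0, 1.0], i.e. y₁ = 1 − t + t², matching the hand computation in #29.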

Calculator or Computer

33. y′ = f(t, y) = y^(1/4),  ∂f/∂y = (1/4)y^(−3/4)

Note the direction field is only defined when y ≥ 0. Picard's theorem guarantees existence
through any point y(t₀) = y₀, but not uniqueness for points y(t₀) = y₀ when y₀ = 0. The
direction field shown illustrates these ideas.

[Figure: direction field of y′ = y^(1/4); the DE does not exist for y < 0.]
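The failure of uniqueness at y₀ = 0 can be made concrete: separating variables gives a second solution y = ((3/4)t)^(4/3) (for t ≥ 0) that, like y ≡ 0, passes through (0, 0). A Python sketch checking both against the DE:

```python
def y_zero(t):
    """The trivial solution through (0, 0)."""
    return 0.0

def y_pos(t):
    """A second solution through (0, 0): check y' = (3t/4)^(1/3) = y^(1/4)."""
    return (0.75 * t) ** (4 / 3) if t > 0 else 0.0

def deriv(f, t, h=1e-6):
    """Central-difference approximation to f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

for t in [0.5, 1.0, 2.0]:
    assert abs(deriv(y_zero, t) - y_zero(t) ** 0.25) < 1e-6
    assert abs(deriv(y_pos, t) - y_pos(t) ** 0.25) < 1e-4
assert y_pos(1.0) > 0.0 and y_zero(1.0) == 0.0   # same initial point, different later
```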

34. y′ = f(t, y) = sin(ty),  ∂f/∂y = t cos(ty)

Picard's theorem guarantees both existence and uniqueness for any point (t₀, y₀). The
direction field shown also indicates these ideas.

[Figure: direction field of y′ = sin(ty).]


35. y′ = f(t, y) = y^(5/3),  ∂f/∂y = (5/3)y^(2/3)

Picard's theorem guarantees existence and uniqueness for all initial conditions y(t₀) = y₀.
The direction field shown also illustrates this fact.

[Figure: direction field of y′ = y^(5/3).]

36. y′ = f(t, y) = ty^(1/3),  ∂f/∂y = (1/3)ty^(−2/3)

The function f is continuous for all (t, y), but f_y is not continuous when y = 0. Hence, we are
not guaranteed uniqueness through points (t₀, y₀) when y₀ is zero. See figure for this
direction field.

[Figure: direction field of y′ = ty^(1/3).]

37. y′ = f(t, y) = (y − t)^(1/3),  ∂f/∂y = (1/3)(y − t)^(−2/3)

The function f is continuous for all (t, y), but f_y is not continuous when y = t (it doesn't
exist). Hence, the DE has a solution through every point (t₀, y₀) but Picard's theorem does
not guarantee uniqueness through points for which y₀ = t₀. See figure for this direction field.

[Figure: direction field of y′ = (y − t)^(1/3).]


38. y′ = f(t, y) = 6t² − (3/t)y,  ∂f/∂y = −3/t

The function f is continuous except when t = 0, hence there exists a solution through all points
(t₀, y₀) except possibly when t₀ = 0. Also f_y is continuous except when t = 0, and so the DE has
a unique solution for all initial conditions except possibly when t₀ = 0. The direction field of the
equation as shown indicates that no solutions pass through initial points of the form (0, y₀).

[Figure: direction field near t = 0.]

Suggested Journal Entry

39. Student Project

CHAPTER 2  Linearity and Nonlinearity

2.1  Linear Equations: The Nature of Their Solutions

Classification

1. First-order, nonlinear

2. First-order, linear, nonhomogeneous, variable coefficients

3. Second-order, linear, homogeneous, variable coefficients

4. Second-order, linear, nonhomogeneous, variable coefficients

5. Third-order, linear, homogeneous, constant coefficients

6. Third-order, linear, nonhomogeneous, constant coefficients

7. Second-order, linear, nonhomogeneous, variable coefficients

8. Second-order, nonlinear

9. Second-order, linear, homogeneous, variable coefficients

10. Second-order, nonlinear

Linear Operation Notation

11. Using the common differential operator notation D(y) = dy/dt, we have the following:

(a) y″ + ty′ − 3y = 0 can be written as L(y) = 0 for L = D² + tD − 3.

(b) y′ + y² = 0 is not a linear DE.

(c) y′ + sin y = 1 is not a linear DE.

(d) y′ + t²y = 0 can be written as L(y) = 0 for L = D + t².

(e) y′ + (sin t)y = 1 can be written as L(y) = 1 for L = D + sin t.

(f) y″ − 3y′ + y = sin t can be written as L(y) = sin t for L = D² − 3D + 1.

Linear and Nonlinear Operations

12. L(y) = y′ + 2y

Suppose y₁, y₂, and y are functions of t and c is any constant. Then

    L(y₁ + y₂) = (y₁ + y₂)′ + 2(y₁ + y₂)
               = (y₁′ + 2y₁) + (y₂′ + 2y₂)
               = L(y₁) + L(y₂),
    L(cy) = (cy)′ + 2(cy) = c(y′ + 2y) = cL(y).

Hence, L is a linear operator.

13. L(y) = y′ + y²

To show that L(y) = y′ + y² is not linear we can pick a likely function of t and show that it does
not satisfy one of the properties of linearity, equations (2) or (3). Consider the function y = t and
the constant c = 5:

    L(5t) = (5t)′ + (5t)² = 5 + 25t²
          ≠ 5(t′ + t²) = 5 + 5t² = 5L(t).

Hence, L is not a linear operator.

14. L(y) = y′ + 2ty

Suppose y₁, y₂, and y are functions and c is any constant.

    L(y₁ + y₂) = (y₁ + y₂)′ + 2t(y₁ + y₂)
               = (y₁′ + 2ty₁) + (y₂′ + 2ty₂)
               = L(y₁) + L(y₂),
    L(cy) = (cy)′ + 2t(cy) = c(y′ + 2ty) = cL(y).

Hence, L is a linear operator. This problem illustrates the fact that the coefficients of a DE can be
functions of t and the operator will still be linear.


15. L(y) = y′ − e^t y

Suppose y₁, y₂, and y are functions of t and c is any constant.

    L(y₁ + y₂) = (y₁ + y₂)′ − e^t(y₁ + y₂)
               = (y₁′ − e^t y₁) + (y₂′ − e^t y₂)
               = L(y₁) + L(y₂),
    L(cy) = (cy)′ − e^t(cy) = c(y′ − e^t y) = cL(y).

Hence, L is a linear operator. This problem illustrates the fact that a linear operator need not have
coefficients that are linear functions of t.

16. L(y) = y″ + (sin t)y

    L(y₁ + y₂) = (y₁ + y₂)″ + (sin t)(y₁ + y₂)
               = {y₁″ + (sin t)y₁} + {y₂″ + (sin t)y₂}
               = L(y₁) + L(y₂),
    L(cy) = (cy)″ + (sin t)(cy) = c{y″ + (sin t)y} = cL(y).

Hence, L is a linear operator. This problem illustrates the fact that a linear operator need not have
coefficients that are linear functions of t.

17. L(y) = y″ + (1 − y²)y′ + y

    L(cy) = (cy)″ + (1 − c²y²)(cy)′ + cy
          ≠ c{y″ + (1 − y²)y′ + y}
          = cL(y).

Hence, L(y) is not a linear operator.
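The contrast between the linear operator of Problem 12 and a nonlinear one such as Problem 13's can be checked numerically. The Python sketch below (test functions and sample point are arbitrary choices) approximates derivatives by central differences and confirms that L(y) = y′ + 2y passes the linearity tests while N(y) = y′ + y² fails the scaling test:

```python
import math

def ddt(f, t, h=1e-6):
    """Central-difference approximation to f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

L = lambda f: (lambda t: ddt(f, t) + 2 * f(t))      # Problem 12's operator
N = lambda f: (lambda t: ddt(f, t) + f(t) ** 2)     # Problem 13's operator

add = lambda f, g: (lambda s: f(s) + g(s))
scale = lambda c, f: (lambda s: c * f(s))

y1, y2, t = math.sin, math.cos, 0.7
assert abs(L(add(y1, y2))(t) - (L(y1)(t) + L(y2)(t))) < 1e-6   # additivity
assert abs(L(scale(5, y1))(t) - 5 * L(y1)(t)) < 1e-6           # homogeneity
assert abs(N(scale(5, y1))(t) - 5 * N(y1)(t)) > 1.0            # N fails scaling
```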

Pop Quiz

18. y′ + 2y = 1 ⇒ y(t) = (1/2) + ce^(−2t)

19. y′ + y = 2 ⇒ y(t) = 2 + ce^(−t)

20. y′ − 0.08y = 100 ⇒ y(t) = ce^(0.08t) − 1250

21. y′ − 3y = 5 ⇒ y(t) = ce^(3t) − (5/3)

22. y′ + 5y = 1, y(1) = 0: y(t) = (1/5) + ce^(−5t), and y(1) = 0 ⇒ c = −(1/5)e⁵.
Hence, y(t) = (1/5)(1 − e^(−5(t−1))).

23. y′ + 2y = 4, y(0) = 1: y(t) = 2 + ce^(−2t), and y(0) = 1 ⇒ c = −1.
Hence, y(t) = 2 − e^(−2t).
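Answers like Problem 23's are quick to verify by substitution; a short Python check (the derivative is taken by hand):

```python
import math

y = lambda t: 2 - math.exp(-2 * t)        # proposed solution of Problem 23
yp = lambda t: 2 * math.exp(-2 * t)       # its derivative, computed by hand

assert abs(y(0.0) - 1.0) < 1e-12          # initial condition y(0) = 1
for t in [0.0, 0.3, 1.0]:
    assert abs(yp(t) + 2 * y(t) - 4.0) < 1e-12   # y' + 2y = 4
```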

Superposition Principle

24. If y₁ and y₂ are solutions of y′ + p(t)y = 0, then

    y₁′ + p(t)y₁ = 0
    y₂′ + p(t)y₂ = 0.

Adding these equations gives

    (y₁′ + y₂′) + p(t)y₁ + p(t)y₂ = 0

or

    (y₁ + y₂)′ + p(t)(y₁ + y₂) = 0,

which shows that y₁ + y₂ is also a solution of the given equation.

If y₁ is a solution, we have y₁′ + p(t)y₁ = 0, and multiplying by c we get

    c(y₁′ + p(t)y₁) = 0
    cy₁′ + cp(t)y₁ = 0
    (cy₁)′ + p(t)(cy₁) = 0,

which shows that cy₁ is also a solution of the equation.

Second-Order Superposition Principle

25. If y₁ and y₂ are solutions of y″ + p(t)y′ + q(t)y = 0, we have

    y₁″ + p(t)y₁′ + q(t)y₁ = 0
    y₂″ + p(t)y₂′ + q(t)y₂ = 0.

Multiplying these equations by c₁ and c₂ respectively, then adding and using properties of the
derivative, we arrive at

    (c₁y₁ + c₂y₂)″ + p(t)(c₁y₁ + c₂y₂)′ + q(t)(c₁y₁ + c₂y₂) = 0,

which shows that c₁y₁ + c₂y₂ is also a solution.


Verifying Superposition

26. y″ − 9y = 0

y₁ = e^(3t) ⇒ y₁′ = 3e^(3t) ⇒ y₁″ = 9e^(3t), so that y₁″ − 9y₁ = 9e^(3t) − 9e^(3t) = 0.

y₂ = e^(−3t) ⇒ y₂′ = −3e^(−3t) ⇒ y₂″ = 9e^(−3t), so that y₂″ − 9y₂ = 9e^(−3t) − 9e^(−3t) = 0.

Let y₃ = c₁y₁ + c₂y₂ = c₁e^(3t) + c₂e^(−3t); then

    y₃′ = 3c₁e^(3t) + (−3)c₂e^(−3t)
    y₃″ = 9c₁e^(3t) + 9c₂e^(−3t).

Thus,

    y₃″ − 9y₃ = (9c₁e^(3t) + 9c₂e^(−3t)) − 9(c₁e^(3t) + c₂e^(−3t)) = 0.
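A numerical spot-check of this superposition (the constants and sample points are arbitrary), with the derivatives taken by hand:

```python
import math

def residual(c1, c2, t):
    """y'' - 9y for y = c1*e^(3t) + c2*e^(-3t)."""
    y = c1 * math.exp(3 * t) + c2 * math.exp(-3 * t)
    ypp = 9 * c1 * math.exp(3 * t) + 9 * c2 * math.exp(-3 * t)   # by hand
    return ypp - 9 * y

for c1, c2 in [(1, 0), (0, 1), (2.5, -4.0)]:     # arbitrary constants
    for t in [-1.0, 0.0, 0.8]:
        assert abs(residual(c1, c2, t)) < 1e-9
```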

27. y″ + 4y = 0

For y₁ = sin 2t, y₁′ = 2 cos 2t ⇒ y₁″ = −4 sin 2t, so that y₁″ + 4y₁ = −4 sin 2t + 4 sin 2t = 0.

For y₂ = cos 2t, y₂′ = −2 sin 2t ⇒ y₂″ = −4 cos 2t, so that y₂″ + 4y₂ = −4 cos 2t + 4 cos 2t = 0.

Let y₃ = c₁ sin 2t + c₂ cos 2t; then

    y₃′ = 2c₁ cos 2t + (−2c₂ sin 2t)
    y₃″ = (−4c₁ sin 2t) + (−4c₂ cos 2t).

Thus,

    y₃″ + 4y₃ = (−4c₁ sin 2t) + (−4c₂ cos 2t) + 4(c₁ sin 2t + c₂ cos 2t) = 0.

28. 2y″ + y′ − y = 0

For y₁ = e^(t/2), y₁′ = (1/2)e^(t/2), y₁″ = (1/4)e^(t/2).

Substituting: 2((1/4)e^(t/2)) + (1/2)e^(t/2) − e^(t/2) = 0.

For y₂ = e^(−t), y₂′ = −e^(−t), y₂″ = e^(−t).

Substituting: 2(e^(−t)) + (−e^(−t)) − e^(−t) = 0.

For constants c₁ and c₂, let y = c₁e^(t/2) + c₂e^(−t).

Substituting:

    2((1/4)c₁e^(t/2) + c₂e^(−t)) + ((1/2)c₁e^(t/2) − c₂e^(−t)) − (c₁e^(t/2) + c₂e^(−t)) = 0.

29. y″ − 5y′ + 6y = 0

For y₁ = e^(2t), y₁′ = 2e^(2t), y₁″ = 4e^(2t).

Substituting: 4e^(2t) − 5(2e^(2t)) + 6(e^(2t)) = 0.

For y₂ = e^(3t), y₂′ = 3e^(3t), y₂″ = 9e^(3t).

Substituting: 9e^(3t) − 5(3e^(3t)) + 6e^(3t) = 0.

For y = c₁e^(2t) + c₂e^(3t),

    y″ − 5y′ + 6y = (4c₁e^(2t) + 9c₂e^(3t)) − 5(2c₁e^(2t) + 3c₂e^(3t)) + 6(c₁e^(2t) + c₂e^(3t)) = 0.

30. y″ − y′ − 6y = 0

For y₁ = e^(3t), y₁′ = 3e^(3t), y₁″ = 9e^(3t).

Substituting: (9e^(3t)) − (3e^(3t)) − 6(e^(3t)) = 0.

For y₂ = e^(−2t), y₂′ = −2e^(−2t), y₂″ = 4e^(−2t).

Substituting: (4e^(−2t)) − (−2e^(−2t)) − 6e^(−2t) = 0.

For y = c₁e^(3t) + c₂e^(−2t),

    y″ − y′ − 6y = (9c₁e^(3t) + 4c₂e^(−2t)) − (3c₁e^(3t) − 2c₂e^(−2t)) − 6(c₁e^(3t) + c₂e^(−2t)) = 0.

31. y″ − 9y = 0

For y₁ = cosh 3t, y₁′ = 3 sinh 3t, y₁″ = 9 cosh 3t.

Substituting: (9 cosh 3t) − 9(cosh 3t) = 0.

For y₂ = sinh 3t, y₂′ = 3 cosh 3t, y₂″ = 9 sinh 3t.

Substituting: (9 sinh 3t) − 9(sinh 3t) = 0.

For y = c₁ cosh 3t + c₂ sinh 3t,

    y″ − 9y = (9c₁ cosh 3t + 9c₂ sinh 3t) − 9(c₁ cosh 3t + c₂ sinh 3t) = 0.


Different Results?

32. The solutions of Problem 31,

    cosh 3t = (e^(3t) + e^(−3t))/2  and  sinh 3t = (e^(3t) − e^(−3t))/2,

are linear combinations of the solutions of Problem 26 and vice-versa, i.e.,

    e^(3t) = cosh 3t + sinh 3t  and  e^(−3t) = cosh 3t − sinh 3t.

Many from One

33. Because y(t) = t² is a solution of a linear homogeneous equation, we know by equation (3) that
ct² is also a solution for any real number c.

Guessing Solutions

We can often find a particular solution of a nonhomogeneous DE by inspection (guessing). For the
first-order equations given in Problems 34–38 the general solutions come in two parts: solutions to the
associated homogeneous equation (which could be found by separation of variables) plus a particular
solution of the nonhomogeneous equation. For second-order linear equations such as Problems 39–42 we
can also sometimes find solutions by inspection.

34. y′ + y = e^t ⇒ y(t) = ce^(−t) + (1/2)e^t

35. y′ + y = e^(−t) ⇒ y(t) = ce^(−t) + te^(−t)

36. y′ − y = e^t ⇒ y(t) = ce^t + te^t

37. y′ − 2ty = 0 ⇒ y(t) = ce^(t²)

38. y′ + (4/t)y = 3 ⇒ y(t) = c/t⁴ + (3/5)t

39. y″ − a²y = 0 ⇒ y(t) = c₁e^(at) + c₂e^(−at). An alternative form is y(t) = c₁ sinh(at) + c₂ cosh(at).

40. y″ + a²y = 0 ⇒ y(t) = c₁ sin(at) + c₂ cos(at)

41. y″ + y′ = 0 ⇒ y(t) = c₁ + c₂e^(−t)

42. y″ − y′ = 0 ⇒ y(t) = c₁ + c₂e^t

Nonhomogeneous Principle

In these problems, the verification of y_p is a straightforward substitution. To find the rest of the
solution we simply add to y_p all the homogeneous solutions y_h, which we find by inspection or
separation of variables.

43. y′ − y = 3e^t has general solution y(t) = y_h + y_p = ce^t + 3te^t.

44. y′ + 2y = 10 sin t has general solution y(t) = y_h + y_p = ce^(−2t) + 4 sin t − 2 cos t.

45. y′ − (2/t)y = t² has general solution y(t) = y_h + y_p = ct² + t³.

46. y′ + (1/(t + 1))y = 1 has general solution y(t) = y_h + y_p = c/(t + 1) + (t² + 2t)/(2(t + 1)).

Third-Order Examples

47. (a) For y₁ = e^t, we substitute into y‴ − y″ − y′ + y = 0 to obtain e^t − e^t − e^t + e^t = 0.

For y₂ = te^t, we obtain

    y₂′ = te^t + e^t,  y₂″ = te^t + 2e^t,  y₂‴ = te^t + 3e^t,

and substitute to verify

    (te^t + 3e^t) − (te^t + 2e^t) − (te^t + e^t) + te^t = 0.

For y₃ = e^(−t), we obtain y₃′ = −e^(−t), y₃″ = e^(−t), y₃‴ = −e^(−t), and substitute to verify

    (−e^(−t)) − (e^(−t)) − (−e^(−t)) + e^(−t) = 0.

(b) y_h = c₁e^t + c₂te^t + c₃e^(−t)

(c) Given y_p = 2t + 1 + e^(2t):

    y_p′ = 2 + 2e^(2t),  y_p″ = 4e^(2t),  y_p‴ = 8e^(2t).

To verify:

    y_p‴ − y_p″ − y_p′ + y_p = 8e^(2t) − 4e^(2t) − (2 + 2e^(2t)) + (2t + 1 + e^(2t))
                             = 2t − 1 + 3e^(2t).

(d) y(t) = y_h + y_p = c₁e^t + c₂te^t + c₃e^(−t) + 2t + 1 + e^(2t)

(e) Differentiating,

    y′ = c₁e^t + c₂(te^t + e^t) − c₃e^(−t) + 2 + 2e^(2t)
    y″ = c₁e^t + c₂(te^t + 2e^t) + c₃e^(−t) + 4e^(2t),

so the initial conditions give

    y(0) = 1 = c₁ + c₃ + 2   ⇒  c₁ + c₃ = −1          Equation (1)
    y′(0) = 2 = c₁ + c₂ − c₃ + 4  ⇒  c₁ + c₂ − c₃ = −2    Equation (2)
    y″(0) = 3 = c₁ + 2c₂ + c₃ + 4  ⇒  c₁ + 2c₂ + c₃ = −1   Equation (3)

Add Equation (2) to (1) and (3):

    2c₁ + c₂ = −3
    2c₁ + 3c₂ = −3

⇒ c₂ = 0, c₁ = −3/2, c₃ = 1/2.

Thus, y = −(3/2)e^t + (1/2)e^(−t) + 2t + 1 + e^(2t).
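A final answer like this is easy to double-check by substituting it, together with its hand-computed derivatives, back into the DE and the initial data; a Python sketch:

```python
import math

exp = math.exp
# The candidate solution of Problem 47 and its derivatives, taken by hand.
y    = lambda t: -1.5 * exp(t) + 0.5 * exp(-t) + 2 * t + 1 + exp(2 * t)
yp   = lambda t: -1.5 * exp(t) - 0.5 * exp(-t) + 2 + 2 * exp(2 * t)
ypp  = lambda t: -1.5 * exp(t) + 0.5 * exp(-t) + 4 * exp(2 * t)
yppp = lambda t: -1.5 * exp(t) - 0.5 * exp(-t) + 8 * exp(2 * t)

# Initial conditions y(0) = 1, y'(0) = 2, y''(0) = 3.
assert abs(y(0) - 1) < 1e-12
assert abs(yp(0) - 2) < 1e-12
assert abs(ypp(0) - 3) < 1e-12

# Residual of y''' - y'' - y' + y = 2t - 1 + 3e^(2t) vanishes.
for t in [-0.5, 0.0, 0.7]:
    resid = yppp(t) - ypp(t) - yp(t) + y(t) - (2 * t - 1 + 3 * exp(2 * t))
    assert abs(resid) < 1e-9
```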

48. y‴ + y″ − y′ − y = 4 sin t + 3

(a) For y₁ = e^t, we obtain by substitution

    y‴ + y″ − y′ − y = e^t + e^t − e^t − e^t = 0.

For y₂ = e^(−t), we obtain by substitution

    y‴ + y″ − y′ − y = (−e^(−t)) + e^(−t) − (−e^(−t)) − e^(−t) = 0.

For y₃ = te^(−t), we obtain by substitution

    y‴ + y″ − y′ − y = (−te^(−t) + 3e^(−t)) + (te^(−t) − 2e^(−t)) − (−te^(−t) + e^(−t)) − te^(−t) = 0.

(b) y_h = c₁e^t + c₂e^(−t) + c₃te^(−t)

(c) Given y_p = cos t − sin t − 3:

    y_p′ = −sin t − cos t,  y_p″ = −cos t + sin t,  y_p‴ = sin t + cos t.

To verify:

    y_p‴ + y_p″ − y_p′ − y_p = (sin t + cos t) + (−cos t + sin t) − (−sin t − cos t) − (cos t − sin t − 3)
                             = 4 sin t + 3.

(d) y(t) = y_h + y_p = c₁e^t + c₂e^(−t) + c₃te^(−t) + cos t − sin t − 3

(e) Differentiating,

    y′ = c₁e^t − c₂e^(−t) + c₃(e^(−t) − te^(−t)) − sin t − cos t
    y″ = c₁e^t + c₂e^(−t) + c₃(te^(−t) − 2e^(−t)) − cos t + sin t,

so the initial conditions give

    y(0) = 1 = c₁ + c₂ + 1 − 3   ⇒  c₁ + c₂ = 3          Equation (1)
    y′(0) = 2 = c₁ − c₂ + c₃ − 1  ⇒  c₁ − c₂ + c₃ = 3     Equation (2)
    y″(0) = 3 = c₁ + c₂ − 2c₃ − 1  ⇒  c₁ + c₂ − 2c₃ = 4    Equation (3)

Add Equation (2) to (1) and (3):

    2c₁ + c₃ = 6
    2c₁ − c₃ = 7

⇒ c₁ = 13/4, c₂ = −1/4, c₃ = −1/2.

    y(t) = (13/4)e^t − (1/4)e^(−t) − (1/2)te^(−t) + cos t − sin t − 3.

Suggested Journal Entry

49. Student Project


2.2  Solving the First-Order Linear Differential Equation

General Solutions

The solutions for Problems 1–15 can be found using either the Euler-Lagrange method or the integrating

factor method. For problems where we find a particular solution by inspection (Problems 2, 6, 7) we use

the Euler-Lagrange method. For the other problems we find it more convenient to use the integrating

factor method, which gives both the homogeneous solutions and a particular solution in one swoop. You

can use the Euler-Lagrange method to get the same results.

1. y′ + 2y = 0

By inspection we have y(t) = ce^(−2t).

2. y′ + 2y = 3e^t

We find the homogeneous solution by inspection as y_h = ce^(−2t). A particular solution of the
nonhomogeneous equation can also be found by inspection, and we see y_p = e^t. Hence the
general solution is y(t) = ce^(−2t) + e^t.

3. y′ − y = 3e^t

We multiply each side of the equation by the integrating factor

    μ(t) = e^(∫p(t)dt) = e^(∫(−1)dt) = e^(−t)

giving e^(−t)(y′ − y) = 3, or simply (d/dt)(ye^(−t)) = 3.

Integrating, we find ye^(−t) = 3t + c, or y(t) = ce^t + 3te^t.

4. y′ + y = sin t

We multiply each side of the equation by the integrating factor μ(t) = e^t, giving

    e^t(y′ + y) = e^t sin t,  or,  (d/dt)(ye^t) = e^t sin t.

Integrating by parts, we get ye^t = (1/2)e^t(sin t − cos t) + c.

Solving for y, we find y(t) = ce^(−t) + (1/2)sin t − (1/2)cos t.
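Integrating-factor answers like this one are easy to confirm by substitution. The Python check below uses an arbitrary value for the constant c (the derivative is taken by hand):

```python
import math

c = 3.0                                             # arbitrary constant of integration
y  = lambda t: c * math.exp(-t) + 0.5 * (math.sin(t) - math.cos(t))
yp = lambda t: -c * math.exp(-t) + 0.5 * (math.cos(t) + math.sin(t))

for t in [0.0, 1.0, 2.5]:
    assert abs(yp(t) + y(t) - math.sin(t)) < 1e-12  # y' + y = sin t
```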


5. y′ + y = 1/(1 + e^t)

We multiply each side of the equation by the integrating factor μ(t) = e^t, giving

    e^t(y′ + y) = e^t/(1 + e^t),  or,  (d/dt)(ye^t) = e^t/(1 + e^t).

Integrating, we get ye^t = ln(1 + e^t) + c.

Hence, y(t) = ce^(−t) + e^(−t) ln(1 + e^t).

6. y′ + 2ty = t

In this problem we see that y_p(t) = 1/2 is a solution of the nonhomogeneous equation (there are
other single solutions, but this is the easiest to find). Hence, to find the general solution we solve
the corresponding homogeneous equation, y′ + 2ty = 0, by separation of variables, getting

    dy/y = −2t dt,

which has the general solution y = ce^(−t²), where c is any constant.

Adding the solutions of the homogeneous equation to the particular solution y_p = 1/2 we get the
general solution of the nonhomogeneous equation:

    y(t) = ce^(−t²) + 1/2.

7. y′ + 3t²y = t²

In this problem we see that y_p(t) = 1/3 is a solution of the nonhomogeneous equation (there are
other single solutions, but this is the easiest to find). Hence, to find the general solution, we solve
the corresponding homogeneous equation, y′ + 3t²y = 0, by separation of variables, getting

    dy/y = −3t² dt,

which has the general solution y(t) = ce^(−t³), where c is any constant. Adding the solutions of the
homogeneous equation to the particular solution y_p = 1/3, we get the general solution of the
nonhomogeneous equation

    y(t) = ce^(−t³) + 1/3.


8. y′ + (1/t)y = 1/t²,  (t ≠ 0)

We multiply each side of the equation by the integrating factor μ(t) = e^(∫dt/t) = e^(ln t) = t,
giving

    t(y′ + (1/t)y) = 1/t,  or,  (d/dt)(ty) = 1/t.

Integrating, we find ty = ln t + c.

Solving for y, we get y(t) = (1/t)(ln t + c).

9. ty′ + y = 2t

We rewrite the equation as y′ + (1/t)y = 2, and multiply each side of the equation by the
integrating factor μ(t) = e^(∫dt/t) = e^(ln t) = t, giving

    t(y′ + (1/t)y) = 2t,  or,  (d/dt)(ty) = 2t.

Integrating, we find ty = t² + c.

Solving for y, we get y(t) = t + c/t.

10. (cos t)y′ + (sin t)y = 1

We rewrite the equation as y′ + (tan t)y = sec t, and multiply each side of the equation by the
integrating factor

    μ(t) = e^(∫tan t dt) = e^(−ln(cos t)) = (cos t)^(−1) = sec t,

giving

    (sec t)(y′ + (tan t)y) = sec²t,  or,  (d/dt)((sec t)y) = sec²t.

Integrating, we find (sec t)y = tan t + c. Solving for y, we get y(t) = c cos t + sin t.


11. y′ − (2/t)y = t² cos t,  (t ≠ 0)

We multiply each side of the equation by the integrating factor

    μ(t) = e^(∫(−2/t)dt) = e^(−2 ln t) = e^(ln t^(−2)) = t^(−2),

giving

    t^(−2)(y′ − (2/t)y) = cos t,  or,  (d/dt)(t^(−2)y) = cos t.

Integrating, we find t^(−2)y = sin t + c. Solving for y, we get y(t) = ct² + t² sin t.

12. y′ + (3/t)y = (sin t)/t³,  (t ≠ 0)

We multiply each side of the equation by the integrating factor

    μ(t) = e^(∫(3/t)dt) = e^(3 ln t) = e^(ln t³) = t³,

giving

    t³(y′ + (3/t)y) = sin t,  or,  (d/dt)(t³y) = sin t.

Integrating, we find t³y = −cos t + c.

Solving for y, we get y(t) = −(1/t³)cos t + c/t³.

13. (e^t + 1)y′ + e^t y = 0

We rewrite the equation as y′ + (e^t/(e^t + 1))y = 0, and then multiply each side of the equation
by the integrating factor

    μ(t) = e^(∫e^t dt/(e^t + 1)) = e^(ln(e^t + 1)) = e^t + 1,

giving

    (d/dt)((e^t + 1)y) = 0.

Integrating, we find (e^t + 1)y = c. Solving for y, we have y(t) = c/(e^t + 1).


14. (t² + 9)y′ + ty = 0

We rewrite the equation as y′ + (t/(t² + 9))y = 0, and then multiply each side of the equation by
the integrating factor

    μ(t) = e^(∫t dt/(t² + 9)) = e^((1/2)ln(t² + 9)) = √(t² + 9),

giving

    (d/dt)(√(t² + 9) y) = 0.

Integrating, we find √(t² + 9) y = c.

Solving for y, we find y(t) = c/√(t² + 9).

15. y′ + ((2t + 1)/t)y = 2t,  (t ≠ 0)

We multiply each side of the equation by the integrating factor

    μ(t) = e^(∫((2t+1)/t)dt) = e^(∫(2 + 1/t)dt) = te^(2t),

giving

    (d/dt)(te^(2t) y) = 2t²e^(2t).

Integrating, we find

    te^(2t) y = t²e^(2t) − te^(2t) + (1/2)e^(2t) + c.

Solving for y, we have

    y(t) = c e^(−2t)/t + t − 1 + 1/(2t).

Initial-Value Problems

16. y′ − y = 1, y(0) = 1

By inspection, the homogeneous solutions are y_h = ce^t. A particular solution of the
nonhomogeneous equation can also be found by inspection to be y_p = −1. Hence, the general
solution is

    y = y_h + y_p = ce^t − 1.

Substituting y(0) = 1 gives c − 1 = 1 or c = 2. Hence, the solution of the IVP is

    y(t) = 2e^t − 1.


17. y′ + 2ty = t³, y(1) = 1

We can solve the differential equation using either the Euler-Lagrange method or the integrating
factor method to get

    y(t) = (1/2)t² − 1/2 + ce^(−t²).

Substituting y(1) = 1 we find ce^(−1) = 1 or c = e. Hence, the solution of the IVP is

    y(t) = (1/2)t² − 1/2 + e^(1−t²).

18. y′ − (3/t)y = t³, y(1) = 4

We find the integrating factor to be

    μ(t) = e^(∫(−3/t)dt) = e^(−3 ln t) = e^(ln t^(−3)) = t^(−3).

Multiplying the DE by this, we get

    (d/dt)(t^(−3)y) = 1.

Hence, t^(−3)y = t + c, or y(t) = ct³ + t⁴.

Substituting y(1) = 4 gives 1 + c = 4 or c = 3. Hence, the solution of the IVP is

    y(t) = 3t³ + t⁴.

19. y′ + 2ty = t, y(0) = 1

We solved this differential equation in Problem 6 and found

    y(t) = ce^(−t²) + 1/2.

Substituting y(0) = 1 gives c + 1/2 = 1 or c = 1/2. Hence, the solution of the IVP is

    y(t) = (1/2)e^(−t²) + 1/2.

20. (e^t + 1)y′ + e^t y = 0, y(0) = 1

We solved this DE in Problem 13 and found y(t) = c/(e^t + 1).

Substituting y(0) = 1 gives c/2 = 1 or c = 2. Hence, the solution of the IVP is

    y(t) = 2/(e^t + 1).


Synthesizing Facts

21. (a) y(t) = (t² + 2t)/(2(t + 1)),  t > −1

(b) y(t) = t + 1,  t > −1

(c) The algebraic solution given in Example 1 for k = 1 is

    y(t) = (t² + 2t + 1)/(t + 1) = (t + 1)²/(t + 1).

Hence, when t ≠ −1 we have y = t + 1.

[Figure: solution curves in the ty-plane.]

(d) The solution passing through the origin (0, 0) asymptotically approaches the line
y = t + 1 as t → ∞, which is the solution passing through y(0) = 1. The entire line
y = t + 1 is not a solution of the DE, as the slope is not defined when t = −1. The
segment of the line y = t + 1 for t > −1 is the solution passing through y(0) = 1.

On the other hand, if the initial condition were y(−5) = −4, then the solution
would be the segment of the line y = t + 1 for t less than −1. Notice in the direction field
the slope element is not defined at (−1, 0).

Using Integrating Factors

In each of the following equations, we first write the equation in the form y′ + p(t)y = f(t) and then
identify p(t).

22. y′ + 2y = 0

Here p(t) = 2, therefore the integrating factor is μ(t) = e^(∫p(t)dt) = e^(∫2dt) = e^(2t).

Multiplying each side of the equation y′ + 2y = 0 by e^(2t) yields (d/dt)(ye^(2t)) = 0.

Integrating gives ye^(2t) = c. Solving for y gives y(t) = ce^(−2t).

23. y′ + 2y = 3e^t

Here p(t) = 2, therefore the integrating factor is μ(t) = e^(∫p(t)dt) = e^(∫2dt) = e^(2t).

Multiplying each side of the equation y′ + 2y = 3e^t by e^(2t) yields (d/dt)(ye^(2t)) = 3e^(3t).

Integrating gives ye^(2t) = e^(3t) + c. Solving for y gives y(t) = ce^(−2t) + e^t.


24. y′ − y = e^(3t)

Here p(t) = −1, therefore the integrating factor is μ(t) = e^(∫p(t)dt) = e^(∫(−1)dt) = e^(−t).

Multiplying each side of the equation y′ − y = e^(3t) by e^(−t) yields (d/dt)(ye^(−t)) = e^(2t).

Integrating gives ye^(−t) = (1/2)e^(2t) + c. Solving for y gives y(t) = ce^t + (1/2)e^(3t).

25. y′ + y = sin t

Here p(t) = 1, therefore the integrating factor is μ(t) = e^(∫p(t)dt) = e^(∫dt) = e^t.

Multiplying each side of the equation y′ + y = sin t by e^t gives (d/dt)(ye^t) = e^t sin t.

Integrating gives ye^t = (1/2)e^t(sin t − cos t) + c. Solving for y gives

    y(t) = (1/2)(sin t − cos t) + ce^(−t).

26. y′ + y = 1/(1 + e^t)

Here p(t) = 1, therefore the integrating factor is μ(t) = e^(∫p(t)dt) = e^(∫dt) = e^t.

Multiplying each side of the equation y′ + y = 1/(1 + e^t) by e^t yields

    (d/dt)(ye^t) = e^t/(1 + e^t).

Integrating gives ye^t = ln(1 + e^t) + c. Solving for y gives y(t) = e^(−t) ln(1 + e^t) + ce^(−t).

27. y′ + 2ty = t

Here p(t) = 2t, therefore the integrating factor is μ(t) = e^(∫p(t)dt) = e^(∫2t dt) = e^(t²).

Multiplying each side of the equation y′ + 2ty = t by e^(t²) yields (d/dt)(ye^(t²)) = te^(t²).

Integrating gives ye^(t²) = (1/2)e^(t²) + c. Solving for y gives y(t) = ce^(−t²) + 1/2.


28. y′ + 3t²y = t²

Here p(t) = 3t², therefore the integrating factor is μ(t) = e^(∫p(t)dt) = e^(∫3t² dt) = e^(t³).

Multiplying each side of the equation y′ + 3t²y = t² by e^(t³) yields (d/dt)(ye^(t³)) = t²e^(t³).

Integrating gives ye^(t³) = (1/3)e^(t³) + c. Solving for y gives y(t) = ce^(−t³) + 1/3.

29. y′ + (1/t)y = 1/t²

Here p(t) = 1/t, therefore the integrating factor is μ(t) = e^(∫p(t)dt) = e^(∫(1/t)dt) = e^(ln t) = t.

Multiplying each side of the equation y′ + (1/t)y = 1/t² by t yields (d/dt)(ty) = 1/t.

Integrating gives ty = ln t + c. Solving for y gives y(t) = (1/t)ln t + c(1/t).

30. ty′ + y = 2t

Here p(t) = 1/t, therefore the integrating factor is μ(t) = e^(∫p(t)dt) = e^(∫(1/t)dt) = e^(ln t) = t.

Multiplying each side of the equation y′ + y/t = 2 by t yields (d/dt)(ty) = 2t.

Integrating gives ty = t² + c. Solving for y gives y(t) = t + c(1/t).

Switch for Linearity

31. dy/dt = 1/(t + y),  y(−1) = 0

Flipping both sides of the equation yields the equivalent linear form

    dt/dy = t + y,  or  dt/dy − t = y.

Solving this equation we get t(y) = ce^y − y − 1.

Using the condition y(−1) = 0, we find −1 = ce⁰ − 1, and so c = 0. Thus, we have t = −1 − y and
solving for y gives

    y(t) = −1 − t.

SECTION 2.2 Solving the First-Order Linear Differential Equation 111

The Tough Made Easy

$$\frac{dy}{dt} = \frac{y^2}{e^{y} - 2ty}$$

32. We flip both sides of the equation, getting
$$\frac{dt}{dy} = \frac{e^{y} - 2ty}{y^2}, \quad\text{or}\quad \frac{dt}{dy} + \frac{2}{y}t = \frac{e^{y}}{y^2}.$$
We solve this linear DE for $t(y)$, getting $t(y) = \dfrac{e^{y} + c}{y^2}$.

A Useful Transformation

33. (a) Letting $z = \ln y$, we have $y = e^{z}$ and $\frac{dy}{dt} = e^{z}\frac{dz}{dt}$. Now the equation $\frac{dy}{dt} + ay = by\ln y$ can be rewritten as
$$e^{z}\frac{dz}{dt} + ae^{z} = bze^{z}.$$
Dividing by $e^{z}$ gives the simple linear equation
$$\frac{dz}{dt} - bz = -a.$$
Solving yields $z = ce^{bt} + \frac{a}{b}$, and using $z = \ln y$, the solution becomes $y(t) = e^{(a/b) + ce^{bt}}$.

(b) If $a = b = 1$, we have $y(t) = e^{1 + ce^{t}}$. Note that when $c = 0$ we have the constant solution $y = e$.

Bernoulli Equation $\;y' + p(t)y = q(t)y^{\alpha}$, $\;\alpha \neq 0$, $\alpha \neq 1$

34. (a) We divide by $y^{\alpha}$ to obtain
$$y^{-\alpha}y' + p(t)y^{1-\alpha} = q(t).$$
Let $v = y^{1-\alpha}$, so that $v' = (1-\alpha)y^{-\alpha}y'$ and $y^{-\alpha}y' = \dfrac{v'}{1-\alpha}$. Substituting into the first equation for $y^{1-\alpha}$ and $y^{-\alpha}y'$, we have
$$\frac{v'}{1-\alpha} + p(t)v = q(t),$$
a linear DE in $v$, which we can now rewrite into standard form as
$$v' + (1-\alpha)p(t)v = (1-\alpha)q(t).$$


(b) $\alpha = 3$, $p(t) = -1$, and $q(t) = 1$; hence $\frac{dv}{dt} + 2v = -2$, which has the general solution
$$v(t) = -1 + ce^{-2t}.$$
Because $v = y^{-2} = \frac{1}{y^2}$, this yields $y(t) = \pm\left(-1 + ce^{-2t}\right)^{-1/2}$. Note, too, that $y = 0$ satisfies the given equation.

(c) When $\alpha = 0$ the Bernoulli equation is
$$\frac{dy}{dt} + p(t)y = q(t),$$
which is the general first-order linear equation we solved by the integrating factor method and the Euler-Lagrange method. When $\alpha = 1$ the Bernoulli equation is
$$\frac{dy}{dt} + p(t)y = q(t)y, \quad\text{or}\quad \frac{dy}{dt} + \left(p(t) - q(t)\right)y = 0,$$
which can be solved by separation of variables.
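The substitution in part (b) is easy to spot-check numerically. The sketch below (plain Python; $c = 5$ is an arbitrary choice keeping $ce^{-2t} - 1 > 0$ on the interval used) confirms that $y(t) = \left(ce^{-2t} - 1\right)^{-1/2}$ satisfies the Bernoulli equation $y' - y = y^3$ corresponding to $\alpha = 3$, $p(t) = -1$, $q(t) = 1$.

```python
import math

def y(t, c=5.0):
    # candidate solution produced by the substitution v = y^(-2)
    return (c * math.exp(-2 * t) - 1.0) ** -0.5

def residual(t, c=5.0, h=1e-6):
    # y' - y - y^3 should vanish; y' via a centered difference
    yprime = (y(t + h, c) - y(t - h, c)) / (2 * h)
    return yprime - y(t, c) - y(t, c) ** 3

# check on [0, 0.49], inside the domain c*e^(-2t) > 1
max_residual = max(abs(residual(t / 100)) for t in range(0, 50))
print(max_residual)
```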

Bernoulli Practice

35. $y' + ty = ty^3$, or $y^{-3}y' + ty^{-2} = t$

Let $v = y^{-2}$, so $\frac{dv}{dt} = -2y^{-3}\frac{dy}{dt}$. Substituting in the DE gives $-\frac{1}{2}\frac{dv}{dt} + tv = t$, so that
$$\frac{dv}{dt} - 2tv = -2t,$$
which is linear in $v$, with integrating factor $\mu = e^{-\int 2t\,dt} = e^{-t^2}$. Thus,
$$e^{-t^2}\frac{dv}{dt} - 2te^{-t^2}v = -2te^{-t^2},$$
and
$$e^{-t^2}v = \int -2te^{-t^2}\,dt = e^{-t^2} + c,$$
so $v = 1 + ce^{t^2}$. Substituting back for $v$ gives
$$y^2 = \frac{1}{1 + ce^{t^2}}, \quad\text{hence}\quad y(t) = \pm\frac{1}{\sqrt{1 + ce^{t^2}}}.$$


36. $y' - y = e^{t}y^2$, so that $y^{-2}y' - y^{-1} = e^{t}$

Let $v = y^{-1}$, so $\frac{dv}{dt} = -y^{-2}\frac{dy}{dt}$. Substituting in the DE gives $\frac{dv}{dt} + v = -e^{t}$, which is linear in $v$ with integrating factor $\mu = e^{\int dt} = e^{t}$. Thus
$$e^{t}\frac{dv}{dt} + e^{t}v = -e^{2t},$$
and
$$e^{t}v = -\int e^{2t}\,dt = -\frac{e^{2t}}{2} + c,$$
so $v = -\frac{e^{t}}{2} + ce^{-t}$. Substituting back for $v$ gives
$$y^{-1} = -\frac{e^{t}}{2} + ce^{-t}, \quad\text{or}\quad y(t) = \frac{2}{2ce^{-t} - e^{t}}.$$

37. $t^2 y' - 2ty = 3y^4$, or $y^{-4}y' - 2t^{-1}y^{-3} = 3t^{-2}$, $\;t \neq 0$

Let $v = y^{-3}$, so $\frac{dv}{dt} = -3y^{-4}\frac{dy}{dt}$. Substituting in the DE gives $-\frac{1}{3}\frac{dv}{dt} - \frac{2}{t}v = \frac{3}{t^2}$, or
$$\frac{dv}{dt} + \frac{6}{t}v = -\frac{9}{t^2},$$
which is linear in $v$, with integrating factor $\mu = e^{\int 6\,dt/t} = e^{6\ln t} = t^6$. Thus
$$t^6\frac{dv}{dt} + 6t^5 v = -9t^4,$$
and
$$t^6 v = \int -9t^4\,dt = -\frac{9}{5}t^5 + c,$$
so $v = -\dfrac{9}{5t} + ct^{-6}$. Substituting back for $v$ gives
$$y^{-3} = -\frac{9}{5t} + \frac{c}{t^6}.$$
Hence
$$y^3 = \frac{1}{\dfrac{c}{t^6} - \dfrac{9}{5t}} = \frac{5t^6}{5c - 9t^5}, \quad\text{so}\quad y(t) = t^2\left(\frac{5}{5c - 9t^5}\right)^{1/3}.$$

38. $(1 - t^2)y' - ty - ty^2 = 0$ (Assume $|t| < 1$), so that $(1 - t^2)y^{-2}y' - ty^{-1} = t$

Let $v = y^{-1}$, so $\frac{dv}{dt} = -y^{-2}\frac{dy}{dt}$. Substituting in the DE gives $-(1 - t^2)\frac{dv}{dt} - tv = t$, so that
$$\frac{dv}{dt} + \frac{t}{1 - t^2}v = -\frac{t}{1 - t^2},$$
which is linear in $v$, with integrating factor
$$\mu = e^{\int \frac{t}{1 - t^2}\,dt} = e^{-\frac{1}{2}\ln\left(1 - t^2\right)} = \left(1 - t^2\right)^{-1/2}.$$


Thus,
$$\left(1 - t^2\right)^{-1/2}\frac{dv}{dt} + t\left(1 - t^2\right)^{-3/2}v = -t\left(1 - t^2\right)^{-3/2},$$
and, substituting $w = 1 - t^2$, $dw = -2t\,dt$ (so that $-t\,dt = \frac{1}{2}dw$),
$$\left(1 - t^2\right)^{-1/2}v = \int -t\left(1 - t^2\right)^{-3/2}\,dt = \int \frac{1}{2}w^{-3/2}\,dw = -w^{-1/2} + c = -\left(1 - t^2\right)^{-1/2} + c.$$
Hence $v = -1 + c\left(1 - t^2\right)^{1/2}$, and substituting back for $v$ gives
$$y(t) = \frac{1}{c\left(1 - t^2\right)^{1/2} - 1}.$$
(Note that the constant $v = -1$, that is $y = -1$, satisfies the original equation, which confirms the sign of the particular solution.)

39. $y' + \dfrac{1}{t}y = \dfrac{y^{-2}}{t}$, $\;y(1) = 2$, so that $y^2 y' + \dfrac{1}{t}y^3 = \dfrac{1}{t}$

Let $v = y^3$, so $\frac{dv}{dt} = 3y^2\frac{dy}{dt}$. Substituting in the DE gives $\frac{1}{3}\frac{dv}{dt} + \frac{1}{t}v = \frac{1}{t}$, or
$$\frac{dv}{dt} + \frac{3}{t}v = \frac{3}{t},$$
which is linear in $v$, with integrating factor $\mu = e^{\int 3\,dt/t} = e^{3\ln t} = t^3$. Thus,
$$t^3\frac{dv}{dt} + 3t^2 v = 3t^2,$$
and $t^3 v = \int 3t^2\,dt = t^3 + c$, so $v = 1 + ct^{-3}$. Substituting back for $v$ gives $y^3 = 1 + ct^{-3}$, or $y(t) = \left(1 + ct^{-3}\right)^{1/3}$.

For the IVP we substitute the initial condition $y(1) = 2$, which gives $2^3 = 1 + c$, so $c = 7$. Thus $y^3 = 1 + 7t^{-3}$, and $y(t) = \left(1 + 7t^{-3}\right)^{1/3}$.

40. $3y^2 y' - 2y^3 - t - 1 = 0$

Let $v = y^3$, so $\frac{dv}{dt} = 3y^2\frac{dy}{dt}$, and
$$\frac{dv}{dt} - 2v = t + 1,$$
which is linear in $v$ with integrating factor $\mu = e^{-\int 2\,dt} = e^{-2t}$.


Thus,
$$e^{-2t}\frac{dv}{dt} - 2e^{-2t}v = (t + 1)e^{-2t},$$
and
$$e^{-2t}v = \int (t + 1)e^{-2t}\,dt = -\frac{(t + 1)e^{-2t}}{2} - \frac{e^{-2t}}{4} + c. \quad\text{(Integration by parts)}$$
Hence $v = -\frac{t + 1}{2} - \frac{1}{4} + ce^{2t} = -\frac{t}{2} - \frac{3}{4} + ce^{2t}$. Substituting back for $v$ gives
$$y^3 = -\frac{t}{2} - \frac{3}{4} + ce^{2t}.$$
For the IVP, substituting the initial condition $y(0) = 2$ gives $8 = -\frac{3}{4} + c$, so $c = \frac{35}{4}$. Hence
$$y^3 = -\frac{t}{2} - \frac{3}{4} + \frac{35}{4}e^{2t}, \quad\text{and}\quad y(t) = \left(-\frac{t}{2} - \frac{3}{4} + \frac{35}{4}e^{2t}\right)^{1/3}.$$

Riccati Equation $\;y' = p(t) + q(t)y + r(t)y^2$

41. (a) Suppose $y_1$ satisfies the DE, so that
$$\frac{dy_1}{dt} = p(t) + q(t)y_1 + r(t)y_1^2.$$
If we define a new variable $y = y_1 + \frac{1}{v}$, then
$$\frac{dy}{dt} = \frac{dy_1}{dt} - \frac{1}{v^2}\frac{dv}{dt}.$$
Substituting for $\frac{dy_1}{dt}$ yields
$$\frac{dy}{dt} = p(t) + q(t)y_1 + r(t)y_1^2 - \frac{1}{v^2}\frac{dv}{dt}.$$
Now, if we require, as suggested, that $v$ satisfy the linear equation
$$\frac{dv}{dt} = -\left(q(t) + 2r(t)y_1\right)v - r(t),$$
then substituting in the previous equation gives
$$\frac{dy}{dt} = p(t) + q(t)y_1 + r(t)y_1^2 + \frac{q(t)}{v} + \frac{2r(t)y_1}{v} + \frac{r(t)}{v^2},$$
which simplifies to
$$\frac{dy}{dt} = p(t) + q(t)\left(y_1 + \frac{1}{v}\right) + r(t)\left(y_1 + \frac{1}{v}\right)^2.$$
Hence $y = y_1 + \frac{1}{v}$ satisfies the Riccati equation as well, as long as $v$ satisfies its given equation.


(b) $y' = -1 + 2y - y^2$

Let $y_1 = 1$, so $y_1' = 0$, and substitution in the DE gives
$$0 = -1 + 2y_1 - y_1^2 = -1 + 2 - 1 = 0.$$
Hence $y_1$ satisfies the given equation. To find $v$ and then $y$, note that $p(t) = -1$, $q(t) = 2$, $r(t) = -1$. Now find $v$ from the assumed requirement that
$$\frac{dv}{dt} = -\left(2 + 2(-1)(1)\right)v - (-1),$$
which reduces to $\frac{dv}{dt} = 1$. This gives $v(t) = t + c$, hence
$$y(t) = y_1 + \frac{1}{v} = 1 + \frac{1}{t + c}.$$
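The Riccati solution of part (b) admits a quick numerical spot-check. The sketch below (plain Python; $c = 2$ is an arbitrary choice avoiding the singularity at $t = -c$) verifies that $y(t) = 1 + \frac{1}{t+c}$ satisfies $y' = -1 + 2y - y^2$.

```python
def y(t, c=2.0):
    # Riccati solution built from the known solution y1 = 1
    return 1.0 + 1.0 / (t + c)

def residual(t, c=2.0, h=1e-6):
    # y' - (-1 + 2y - y^2); note that -1 + 2y - y^2 = -(y - 1)^2
    yprime = (y(t + h, c) - y(t - h, c)) / (2 * h)
    return yprime - (-1.0 + 2.0 * y(t, c) - y(t, c) ** 2)

max_residual = max(abs(residual(t / 10)) for t in range(0, 31))
print(max_residual)
```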

Computer Visuals

42. (a) $y' + 2y = t$

[Figure: direction field on $-5 \le t \le 5$, $-5 \le y \le 5$, with solution curves attracted to the line $y = 0.5t - 0.25$.]

(b) $y_h(t) = ce^{-2t}$, $\;y_p = \frac{1}{2}t - \frac{1}{4}$. The general solution is
$$y(t) = y_h + y_p = ce^{-2t} + \frac{1}{2}t - \frac{1}{4}.$$
The curves in the figure in part (a) are labeled for different values of $c$.

(c) The homogeneous solution $y_h$ is transient because $y_h \to 0$ as $t \to \infty$. However, although all solutions are attracted to $y_p$, we would not call $y_p$ a steady-state solution because it is neither constant nor periodic; $y_p \to \infty$ as $t \to \infty$.


43. (a) $y' - y = e^{3t}$

[Figure: direction field on $-3 \le t \le 3$, $-3 \le y \le 3$, with solution curves; the curve for $c = -2$ is marked.]

(b) $y_h(t) = ce^{t}$, $\;y_p = \frac{1}{2}e^{3t}$. The general solution is
$$y(t) = y_h + y_p = ce^{t} + \frac{1}{2}e^{3t}.$$

(c) There is no steady-state solution because all solutions (including both $y_h$ and $y_p$) go to $\infty$ as $t \to \infty$. The $c$ values are approximate: $\{0.5,\ -0.8,\ -1.5,\ -2,\ -2.5,\ -3.1\}$, as counted from the top-most curve to the bottom-most one.

44. (a) $y' + y = \sin t$

[Figure: direction field on $-6 \le t \le 6$, $-2 \le y \le 2$, with solution curves; one curve is labeled $c = -0.002$.]

(b) $y_h(t) = ce^{-t}$, $\;y_p = \frac{1}{2}\sin t - \frac{1}{2}\cos t$. The general solution is
$$y(t) = y_h + y_p = ce^{-t} + \frac{1}{2}\sin t - \frac{1}{2}\cos t.$$
The curves in the figure in part (a) are labeled for different values of $c$.

(c) The sinusoidal steady-state solution
$$y_p = \frac{1}{2}\sin t - \frac{1}{2}\cos t$$
occurs when $c = 0$. Note that the other solutions approach this solution as $t \to \infty$.


45. (a) $y' + y = \sin 2t$

[Figure: direction field on $-6 \le t \le 6$, $-2 \le y \le 2$, with solution curves.]

(b) $y_h(t) = ce^{-t}$, $\;y_p = \frac{1}{5}\left(\sin 2t - 2\cos 2t\right)$. The general solution is
$$y(t) = ce^{-t} + \frac{\sin 2t - 2\cos 2t}{5}.$$

(c) The steady-state solution is $y_p$, which attracts all other solutions. The transient solution is $y_h$.

46. (a) $y' + 2ty = 0$

[Figure: direction field on $-2 \le t \le 2$, $-2 \le y \le 2$, with solution curves labeled $c = -2, -1, 0, 1, 2$.]

(b) This equation is homogeneous. The general solution is $y(t) = ce^{-t^2}$.

(c) The equation has steady-state solution $y = 0$. All solutions tend toward zero as $t \to \infty$.


47. (a) $y' + 2ty = 1$

[Figure: direction field on $-4 \le t \le 4$, $-4 \le y \le 4$, with solution curves.] The approximate $c$ values corresponding to the curves in the center, counted from top to bottom, are $\{1,\ -1,\ 2,\ -2\}$, and approximately $50{,}000$ (left curve) and $-50{,}000$ (right curve) for the side curves.

(b) $y_h(t) = ce^{-t^2}$, $\;y_p = e^{-t^2}\int e^{t^2}\,dt$. The general solution is
$$y(t) = ce^{-t^2} + e^{-t^2}\int e^{t^2}\,dt.$$

(c) The steady-state solution is $y(t) = 0$, which is not equal to $y_p$. Both $y_h$ and $y_p$ are transient, but as $t \to \infty$, all solutions approach 0.

Computer Numerics

48. $y' + 2y = t$, $\;y(0) = 1$

(a) Using step sizes $h = 0.1$ and $h = 0.01$ and Euler's method, we compute the following values. (In the latter case we print only selected values.)

Euler's Method

  t      y (h = 0.1)    y (h = 0.01)
  0      1              1.0000
  0.1    0.8            0.8213
  0.2    0.65           0.6845
  0.3    0.540          0.5819
  0.4    0.4620         0.5071
  0.5    0.4096         0.4552
  0.6    0.3777         0.4219
  0.7    0.3621         0.4039
  0.8    0.3597         0.3983
  0.9    0.3678         0.4029
  1      0.3842         0.4158

By Runge-Kutta (RK4) we obtain $y(1) \approx 0.4192$ for step size $h = 0.1$.


(b) From Problem 42, we found the general solution of the DE to be $y(t) = ce^{-2t} + \frac{1}{2}t - \frac{1}{4}$. Using the IC $y(0) = 1$ yields $c = \frac{5}{4}$. The solution of the IVP is
$$y(t) = \frac{5}{4}e^{-2t} + \frac{1}{2}t - \frac{1}{4},$$
and to 4 places we have
$$y(1) = \frac{5}{4}e^{-2} + \frac{1}{2} - \frac{1}{4} \approx 0.4192.$$

(c) The error for $y(1)$ using step size $h = 0.1$ in Euler's approximation is
$$\text{ERROR} = 0.4192 - 0.3842 = 0.035.$$
Using step size $h = 0.01$, Euler's method gives
$$\text{ERROR} = 0.4192 - 0.4158 = 0.0034,$$
which is much smaller. For step size $h = 0.1$, Runge-Kutta gives $y(1) \approx 0.4192$, with zero error to four decimal places.

(d) The accuracy of Euler's method can be greatly improved by using a smaller step size. The Runge-Kutta method is more accurate for a given step size in most cases.
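The Euler values in the table, and the Runge-Kutta value quoted above, are straightforward to reproduce. Below is a minimal sketch (plain Python) of both methods applied to $y' = t - 2y$, $y(0) = 1$, on $[0, 1]$ with $h = 0.1$.

```python
def f(t, y):
    # right-hand side of y' + 2y = t, rewritten as y' = t - 2y
    return t - 2 * y

def euler(f, t, y, h, n):
    # n forward-Euler steps of size h
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, t, y, h, n):
    # n steps of the classic fourth-order Runge-Kutta method
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

y_euler = euler(f, 0.0, 1.0, 0.1, 10)   # about 0.3842, as in the table
y_rk4 = rk4(f, 0.0, 1.0, 0.1, 10)       # about 0.4192, matching the exact value
print(y_euler, y_rk4)
```

The same two routines reproduce the tables of Problems 49 and 50 after swapping in the appropriate right-hand side.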

49. Sample analysis: $y' - y = e^{3t}$, $\;y(0) = 1$; estimate $y(1)$.

The exact solution is $y = 0.5e^{t} + 0.5e^{3t}$, so $y(1) = 11.4019090461656$ to thirteen decimal places.

(a) $y(1) \approx 9.5944$ by Euler's method for step size $h = 0.1$; $y(1) \approx 11.401909375$ by Runge-Kutta for step size 0.1 (correct to six decimal places).

(b) From Problem 24, we found the general solution of the DE to be $y = ce^{t} + \frac{1}{2}e^{3t}$.

(c) The accuracy of Euler's method can be greatly improved by using a smaller step size, but it still is not correct to even one decimal place for step size 0.01: $y(1) \approx 11.20206$ for $h = 0.01$.

(d) MORAL: Euler's method converges ever so slowly to the exact answer; clearly a far smaller step would be necessary to approach the accuracy of the Runge-Kutta method.


50. $y' + 2ty = 1$, $\;y(0) = 1$

(a) Using step sizes $h = 0.1$ and $h = 0.01$ and Euler's method, we compute the following values.

Euler's Method

  t      y (h = 0.1)    y (h = 0.01)
  0      1              1.0000
  0.1    1.1            1.0905
  0.2    1.178          1.1578
  0.3    1.2309         1.1999
  0.4    1.2570         1.2165
  0.5    1.2564         1.2084
  0.6    1.2308         1.1780
  0.7    1.1831         1.1288
  0.8    1.1175         1.0648
  0.9    1.0387         0.9905
  1      0.9517         0.9102

$y(1) \approx 0.905958$ by the Runge-Kutta method using step size $h = 0.1$ (correct to six decimal places).

(b) From Problem 47, we found the general solution of the DE to be $y(t) = ce^{-t^2} + e^{-t^2}\int e^{t^2}\,dt$. Using the IC $y(0) = 1$ yields $c = 1$. The solution of the IVP is
$$y(t) = e^{-t^2}\left(1 + \int_0^t e^{u^2}\,du\right),$$
and so to 10 places we have $y(1) = 0.9059589485$.

(c) The error for $y(1)$ using step size $h = 0.1$ in Euler's approximation is
$$\text{ERROR} = 0.9517 - 0.9059 = 0.0458.$$
Using step size $h = 0.01$, Euler's method gives
$$\text{ERROR} = 0.9102 - 0.9060 = 0.0042,$$
which is much smaller. Using step size $h = 0.1$ in the Runge-Kutta method gives an ERROR of less than 0.000001.

(d) The accuracy of Euler's method can be greatly improved by using a smaller step size, but the Runge-Kutta method has much better performance because of its higher degree of accuracy.


Direction Field Detective

51. (a) (A) is linear homogeneous, (B) is linear nonhomogeneous, (C) is nonlinear.

(b) If $y_1$ and $y_2$ are solutions of a linear homogeneous equation
$$y' + p(t)y = 0,$$
then $y_1' + p(t)y_1 = 0$ and $y_2' + p(t)y_2 = 0$. We can add these equations to get
$$\left(y_1' + p(t)y_1\right) + \left(y_2' + p(t)y_2\right) = 0.$$
Because this equation can be written in the equivalent form
$$\left(y_1 + y_2\right)' + p(t)\left(y_1 + y_2\right) = 0,$$
$y_1 + y_2$ is also a solution of the given equation.

(c) The sum of any two solutions follows the direction field only in (A). For the linear homogeneous equation (A), if you plot any two solutions $y_1$ and $y_2$ by simply following curves in the direction field and then add these curves, you will see that the sum $y_1 + y_2$ also follows the direction field.

However, in equation (B) you can observe a straight-line solution, which is $y_1 = \frac{1}{2}t - \frac{1}{4}$. If you add this to itself you get $y_1 + y_1 = 2y_1 = t - \frac{1}{2}$, which clearly does not follow the direction field and hence is not a solution. In equation (C), $y_1 = 1$ is a solution, but if you add it to itself you can see from the direction field that $y_1 + y_1 = 2$ is not a solution.

Recognizing Linear Homogeneous DEs from Direction Fields

52. For (A) and (D): The direction fields appear to represent linear homogeneous DEs because the sum of any two solutions is a solution and a constant times a solution is also a solution. (Just follow the direction elements.)

For (B), (C), and (E): These direction fields cannot represent linear homogeneous DEs because the zero function is a solution of linear homogeneous equations, and these direction fields do not indicate that the zero function is a solution. (B) seems to represent a nonlinear DE with more than one equilibrium, while (C) and (E) represent linear but nonhomogeneous DEs.

Note: It may be helpful to look at textbook Figures 2.1.1 and 2.2.2.

Suggested Journal Entry

53. Student Project


2.3

Growth and Decay Phenomena

Half-Life

1. (a) The half-life $t_h$ is the time required for the solution to reach $\frac{1}{2}y_0$. Hence,
$$y_0 e^{kt_h} = \frac{1}{2}y_0.$$
Solving for $t_h$ yields $kt_h = -\ln 2$, or $t_h = -\frac{1}{k}\ln 2$.

(b) The solution to $y' = ky$ is $y(t) = y_0 e^{kt}$, so at time $t = t_1$ we have
$$y(t_1) = y_0 e^{kt_1} = B.$$
Then at $t = t_1 + t_h$ we have
$$y(t_1 + t_h) = y_0 e^{k(t_1 + t_h)} = y_0 e^{kt_1}e^{kt_h} = Be^{-\ln 2} = \frac{1}{2}B.$$

Doubling Time

2. For doubling time $t_d$, we solve $y_0 e^{kt_d} = 2y_0$, which yields $t_d = \frac{1}{k}\ln 2$.

Interpretation of $\frac{1}{k}$

3. If we examine the value of the decay curve $y(t) = y_0 e^{kt}$, we find
$$y\!\left(-\frac{1}{k}\right) = y_0 e^{k(-1/k)} = y_0 e^{-1} \approx 0.3678794\ldots\,y_0 \approx \frac{1}{3}y_0.$$

[Figure: the decay curve $y = y_0 e^{kt}$ with $k = -1$; the curve falls from $y_0$ to $1/e \approx y_0/3$ at $t = 1$.]

Hence $-\frac{1}{k}$ is a crude approximation of the third-life of a decay curve. In other words, if a substance decays with decay constant $k = -0.02$, and time is measured in years, then the third-life of the substance is roughly $\frac{1}{0.02} = 50$ years. That is, every 50 years the substance decays by $\frac{2}{3}$. Note that the curve in the figure falls to $\frac{1}{3}$ of its value in approximately $t = 1$ unit of time.

Radioactive Decay

4. $\frac{dQ}{dt} = kQ$ has the general solution $Q(t) = ce^{kt}$. The initial condition $Q(0) = 100$ gives $Q(t) = 100e^{kt}$, where $Q$ is measured in grams. We also have the condition $Q(50) = 75$, from which we find
$$k = \frac{1}{50}\ln\frac{3}{4} \approx -0.0058.$$
The solution is
$$Q(t) \approx 100e^{-0.0058t},$$
where $t$ is measured in years. The half-life is $t_h = \frac{\ln 2}{0.0058} \approx 120$ years.
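The arithmetic here can be sketched in a few lines of plain Python; it recovers the decay constant from the datum $Q(50) = 75$ and then the half-life quoted above.

```python
import math

k = math.log(3 / 4) / 50        # from 100 * e^(50k) = 75
half_life = -math.log(2) / k    # t_h = -ln 2 / k
print(k, half_life)             # k near -0.0058; half-life near 120 years
```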

Determining Decay from Half-Life

5. $\frac{dQ}{dt} = kQ$ has the general solution $Q(t) = ce^{kt}$. With half-life $t_h = 5$ hours, the decay constant has the value $k = -\frac{1}{5}\ln 2 \approx -0.14$. Hence,
$$Q(t) = Q_0 e^{-0.14t}.$$
Calling $t_t$ the time it takes to decay to $\frac{1}{10}$ the original amount, we have
$$\frac{1}{10}Q_0 = Q_0 e^{-0.14 t_t},$$
which we can solve for $t_t$, getting $t_t = \frac{5\ln 10}{\ln 2} \approx 16.6$ hours.


Thorium-234

6. (a) The general decay curve is $Q(t) = ce^{kt}$. With the initial condition $Q(0) = 1$, we have $Q(t) = e^{kt}$. We are also given $Q(1) = 0.8$, so $e^{k} = 0.8$, or $k = \ln 0.8 \approx -0.22$. Hence we have
$$Q(t) = e^{-0.22t},$$
where $Q$ is measured in grams and $t$ is measured in weeks.

(b) $t_h = -\frac{\ln 2}{k} = \frac{\ln 2}{0.22} \approx 3.1$ weeks.

(c) $Q(10) = e^{10\ln 0.8} = (0.8)^{10} \approx 0.107$ grams.

Dating Sneferu's Tomb

7. The half-life for Carbon-14 is $t_h = 5600$ years, so
$$k = -\frac{1}{t_h}\ln 2 = -\frac{\ln 2}{5600} \approx -0.000124.$$
Let $t_c$ be the time the wood has been aging, and $y_0$ the original amount of carbon. Fifty-five percent of the original amount is $0.55y_0$. The length of time the wood has aged satisfies the equation
$$0.55y_0 = y_0 e^{-0.000124 t_c}.$$
Solving for $t_c$ gives $t_c = -\frac{5600\ln 0.55}{\ln 2} \approx 4830$ years.
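The dating computation above amounts to two lines of arithmetic; a minimal sketch in plain Python:

```python
import math

t_h = 5600                      # Carbon-14 half-life, in years
k = -math.log(2) / t_h          # decay constant
t_c = math.log(0.55) / k        # solve 0.55 = e^(k * t_c) for the age
print(t_c)                      # about 4830 years
```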

Newspaper Announcement

8. For Carbon-14, $k = -\frac{\ln 2}{t_h} = -\frac{\ln 2}{5600} \approx -0.000124$. If $y_0$ is the initial amount, then the amount of Carbon-14 present after 5000 years will be
$$y_0 e^{-0.000124(5000)} \approx 0.54y_0.$$
In other words, 54% of the original carbon is still present.

Radium Decay

9. 6400 years is 4 half-lives, so $\left(\frac{1}{2}\right)^4 \approx 6.25\%$ will be present.


General Half-Life Equation

10. We are given the two equations
$$Q_1 = Q_0 e^{kt_1}, \qquad Q_2 = Q_0 e^{kt_2}.$$
If we divide, we get
$$\frac{Q_1}{Q_2} = e^{k(t_1 - t_2)}, \quad\text{or}\quad k(t_1 - t_2) = \ln\frac{Q_1}{Q_2}, \quad\text{or}\quad k = \frac{\ln\left(Q_1/Q_2\right)}{t_1 - t_2}.$$
Substituting into $t_h = -\frac{1}{k}\ln 2$ yields the general half-life
$$t_h = -\frac{(t_1 - t_2)\ln 2}{\ln\left(Q_1/Q_2\right)} = \frac{(t_2 - t_1)\ln 2}{\ln\left(Q_1/Q_2\right)}.$$

Nuclear Waste

11. We have $k = -\frac{\ln 2}{t_h} = -\frac{\ln 2}{258} \approx -0.00268$ and solve for $t$ in $0.05y_0 = y_0 e^{-0.00268t}$. Thus
$$t = \frac{258\ln 20}{\ln 2} \approx 1{,}115 \text{ years}.$$

Bombarding Plutonium

12. We are given $k = -\frac{\ln 2}{0.15} \approx -4.6209812$. The differential equation for the amount present is
$$\frac{dA}{dt} = kA + 0.00002, \qquad A(0) = 0.$$
Solving this initial-value problem, we get the particular solution
$$A(t) = ce^{kt} - \frac{0.00002}{k},$$
where $c = \frac{0.00002}{k} \approx -0.000004$. Plugging in these values gives the total amount
$$A(t) \approx 0.000004\left(1 - e^{-4.6t}\right),$$
measured in micrograms.

Blood Alcohol Levels

13. (a) First, because the initial blood-alcohol level is 0.2%, we have $P(0) = 0.2$. After one hour the level reduces by 10%; therefore we have
$$P(1) = 0.9P(0) = 0.9(0.2).$$
From the decay equation we have $P(1) = 0.2e^{k}$, hence we have the equation
$$0.2e^{k} = 0.9(0.2),$$
from which we find $k = \ln 0.9 \approx -0.105$. Thus our decay equation is
$$P(t) = 0.2e^{t\ln 0.9} \approx 0.2e^{-0.105t}.$$

(b) The person can legally drive as soon as $P(t) < 0.1$. Setting
$$P(t) = 0.2e^{-0.105t} = 0.1$$
and solving for $t$ yields $t = \frac{\ln 2}{0.105} \approx 6.6$ hours.
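Both steps of this solution, fitting the constant from the one-hour datum and then solving for the legal-driving time, can be sketched directly:

```python
import math

k = math.log(0.9)               # per-hour decay constant, from P(1) = 0.9 * P(0)
t = math.log(0.1 / 0.2) / k     # solve 0.2 * e^(k t) = 0.1 for t
print(t)                        # about 6.6 hours
```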

Exxon Valdez Problem

14. The measured blood-alcohol level was 0.06%, which had been dropping at a rate of 0.015 percentage points per hour for nine hours. This being the case, the captain's initial blood-alcohol level was
$$0.06 + 9(0.015) = 0.195\%.$$
The captain definitely could be liable.

Sodium Pentathol Elimination

15. The half-life is 10 hours, so the decay constant is $k = -\frac{\ln 2}{10} \approx -0.069$. Ed needs
$$(50 \text{ mg/kg})(100 \text{ kg}) = 5000 \text{ mg}$$
of pentathol to be anesthetized. This is the minimal amount that can be present in his bloodstream after three hours. Hence,
$$A(3) = A_0 e^{-0.069(3)} \approx 0.813A_0 = 5000.$$
Solving for $A_0$ yields $A_0 \approx 6{,}155.7$ milligrams, or an initial dose of about 6.16 grams.

Moonlight at High Noon

16. Let the initial brightness be $I_0$. At a depth of $d = 25$ feet, 15% of the light is lost, and so we have $I(25) = 0.85I_0$. Assuming exponential decay, $I(d) = I_0 e^{kd}$, we have the equation
$$I(25) = I_0 e^{25k} = 0.85I_0,$$
from which we can find $k = \frac{\ln 0.85}{25} \approx -0.0065$. To find $d$, we use the equation
$$I_0 e^{-0.0065d} = \frac{1}{300{,}000}I_0,$$
from which we determine the depth to be
$$d = \frac{1}{0.0065}\ln\left(300{,}000\right) \approx 1940 \text{ feet}.$$

Tripling Time

17. Here $k = \frac{\ln 2}{10}$. We can find the tripling time by solving for $t$ in the equation
$$y_0 e^{(\ln 2/10)t} = 3y_0,$$
giving $\left(\frac{\ln 2}{10}\right)t = \ln 3$, or $t = \frac{10\ln 3}{\ln 2} \approx 15.85$ hours.

Extrapolating the Past

18. If $P_0$ is the initial number of bacteria present (in millions), then we are given $P_0 e^{6k} = 5$ and $P_0 e^{9k} = 8$. Dividing one equation by the other, we obtain $e^{3k} = \frac{8}{5}$, from which we find
$$k = \frac{\ln(8/5)}{3}.$$
Substituting this value into the first equation gives $P_0 e^{2\ln(8/5)} = 5$, which we can solve for
$$P_0 = 5e^{-2\ln(8/5)} \approx 1.95 \text{ million bacteria}.$$

Unrestricted Yeast Growth

19. From Problem 2, we are given $k = \frac{\ln 2}{1} = \ln 2$, with an initial population of $P_0 = 5$ million. The population at time $t$ will be $5e^{t\ln 2}$ million, so at $t = 4$ hours the population will be $5e^{4\ln 2} = 5 \cdot 16 = 80$ million.

Unrestricted Bacterial Growth

20. From Problem 2, we are given $k = \frac{\ln 2}{12}$, so the population equation is $P(t) = P_0 e^{(\ln 2/12)t}$. In order to have five times the starting value, we require $P_0 e^{(\ln 2/12)t} = 5P_0$, from which we can find
$$t = \frac{12\ln 5}{\ln 2} \approx 27.9 \text{ hours}.$$

Growth of Tuberculosis Bacteria

21. We are given that the initial number of cells present is $P_0 = 100$, and that $P(1) = 150$ (1.5 times larger), so $100e^{k} = 150$, which yields $k = \ln\frac{3}{2}$. Therefore, the population $P(t)$ at any time $t$ is
$$P(t) = 100e^{t\ln(3/2)} \approx 100e^{0.405t} \text{ cells}.$$

Cat and Mouse Problem

22. (a) For the first 10 years, the mouse population simply had exponential growth $M(t) = M_0 e^{kt}$. Because the mouse population doubled to 50,000 in 10 years, the initial population must have been 25,000; hence $k = \frac{\ln 2}{10}$. For the first 10 years, the mouse population (in thousands) was
$$M(t) = 25e^{(\ln 2/10)t}.$$
Over the next 10 years, the differential equation was $\frac{dM}{dt} = kM - 6$, where $M(0) = 50$; $t$ now measures the number of years after the arrival of the cats. Solving this differential equation yields
$$M(t) = ce^{kt} + \frac{6}{k}.$$
Using the initial condition $M(0) = 50$, we find $c = 50 - \frac{6}{k}$. The number of mice (in thousands) $t$ years after the arrival of the cats is
$$M(t) = \left(50 - \frac{6}{k}\right)e^{kt} + \frac{6}{k},$$
where the constant $k$ is given by $k = \frac{\ln 2}{10} \approx 0.069$. We obtain
$$M(t) \approx -37e^{0.069t} + 87.$$

(b) $M(10) \approx 87 - 37e^{0.069(10)} \approx 13.2$ thousand mice.

(c) From part (a), we have the growth constant without harvest, $k = \frac{\ln 2}{10} \approx 0.0693$. Harvesting 10% of the population per year gives the new rate of change of the mouse population:
$$M' = kM - 0.10M \approx -0.0307M, \qquad M(0) = 25{,}000,$$
so $M(t) = 25{,}000e^{-0.0307t}$. After 10 years the mouse population will be 18,393 (give or take a mouse or two).
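The model of parts (a) and (b) is easy to recompute (plain Python). One detail worth noting: carrying the exact $k = \ln 2/10$ through part (b) gives $M(10) \approx 13.4$ thousand, while the value 13.2 quoted above comes from first rounding the coefficients to 87 and 37 and $k$ to 0.069; the sketch computes both for comparison.

```python
import math

k = math.log(2) / 10                       # growth constant from the doubling data
c = 50 - 6 / k                             # from M(0) = 50 with M(t) = c e^(kt) + 6/k
M10_exact = c * math.exp(10 * k) + 6 / k   # exact coefficients (thousands of mice)
M10_rounded = -37 * math.exp(0.69) + 87    # the text's rounded version
print(M10_exact, M10_rounded)
```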

Banker's View of e

23. The amount of money in a bank account that collects compound interest with continuous compounding is given by $A(t) = A_0 e^{rt}$, where $A_0$ is the initial amount and $r$ is the annual interest rate. If $A_0 = \$1$ is initially deposited, and if the annual interest rate is $r = 0.10$, then after 10 years the account value will be
$$A(10) = \$1 \cdot e^{0.10(10)} \approx \$2.72.$$


Rule of 70

24. The doubling time is given in Problem 2 by
$$t_d = \frac{\ln 2}{r} \approx \frac{0.70}{r} = \frac{70}{100r},$$
where $100r$ is the annual interest rate (expressed as a percentage). The rule of 70 makes sense.

Power of Continuous Compounding

25. The future value of the account will be $A(t) = A_0 e^{rt}$. If $A_0 = \$0.50$, $r = 0.06$, and $t = 160$, then the value of the account after 160 years will be
$$A(160) = 0.5e^{0.06(160)} \approx \$7{,}382.39.$$

Credit Card Debt

26. If Meena borrows $A_0 = \$5000$ at an annual interest rate of $r = 0.1995$ (i.e., 19.95%), compounded continuously, then the total amount she owes (initial principal plus interest) after $t$ years is
$$A(t) = A_0 e^{rt} = \$5{,}000\,e^{0.1995t}.$$
After $t = 4$ years, she owes
$$A(4) = \$5{,}000\,e^{0.1995(4)} \approx \$11{,}105.47.$$
Hence, she pays $\$11{,}105.47 - \$5{,}000 = \$6{,}105.47$ interest for borrowing this money.

Compound Interest Thwarts Hollywood Stunt

27. The growth equation is $A(t) = A_0 e^{rt}$. In this case $A_0 = 3$, $r = 0.08$, and $t = 320$. Thus, the total number of bottles of whiskey will be
$$A(320) = 3e^{0.08(320)} \approx 393{,}600{,}000{,}000.$$
That's 393.6 billion bottles of whiskey!

It Ain't Like It Used to Be

28. The growth equation is $A(t) = A_0 e^{rt}$, where $A_0 = 1$, $t = 50$, and $A(50) = 18$ (using thousands of dollars). Hence we have $e^{50r} = 18$, from which we can find
$$r = \frac{\ln 18}{50} \approx 0.0578, \text{ or } 5.78\%.$$


How to Become a Millionaire

29. (a) From Equation (11) we see that the equation for the amount of money is
$$A(t) = A_0 e^{rt} + \frac{d}{r}\left(e^{rt} - 1\right).$$
In this case, $A_0 = 0$, $r = 0.08$, and $d = \$1{,}000$. The solution becomes
$$A(t) = \frac{1000}{0.08}\left(e^{0.08t} - 1\right).$$

(b) $A(40) = \$1{,}000{,}000 = \frac{d}{0.08}\left(e^{0.08(40)} - 1\right)$. Solving for $d$, we get the required annual deposit $d = \$3{,}399.55$.

(c) $A(40) = \$1{,}000{,}000 = \frac{2500}{r}\left(e^{40r} - 1\right)$. To solve this equation for $r$ we require a computer. Using Maple, we find the interest rate $r = 0.090374$ ($\approx 9.04\%$). You can confirm this result using direct substitution.
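Part (c) needs a numerical root-finder, and any will do. The sketch below (plain Python; simple bisection rather than Maple) solves $\frac{2500}{r}\left(e^{40r} - 1\right) = 1{,}000{,}000$ for $r$.

```python
import math

def gap(r):
    # account value after 40 years of $2500 annual deposits, minus the target
    return 2500 / r * (math.exp(40 * r) - 1) - 1_000_000

lo, hi = 0.05, 0.12             # bracket: gap(0.05) < 0 < gap(0.12)
for _ in range(60):             # bisection
    mid = (lo + hi) / 2
    if gap(mid) < 0:
        lo = mid
    else:
        hi = mid
r = (lo + hi) / 2
print(r)                        # about 0.0904, i.e. 9.04%
```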

Living Off Your Money

30. $A(t) = A_0 e^{rt} - \frac{d}{r}\left(e^{rt} - 1\right)$. Setting $A(t) = 0$ and solving for $t$ gives
$$t = \frac{1}{r}\ln\left(\frac{d}{d - rA_0}\right).$$
Notice that when $d = rA_0$ this equation is undefined, as we have division by 0; if $d < rA_0$, the equation is undefined because the argument of the logarithm is negative. For the physical translation of these facts, you must return to the equation for $A(t)$. If $d = rA_0$, you are only withdrawing the interest, and the amount of money in the bank remains constant. If $d < rA_0$, then you aren't even withdrawing the interest, so the amount in the bank increases and $A(t)$ never equals zero.

How Sweet It Is

31. From Equation (11), we have
$$A(t) = 1{,}000{,}000e^{0.08t} - \frac{100{,}000}{0.08}\left(e^{0.08t} - 1\right).$$
Setting $A(t) = 0$ and solving for $t$, we have
$$t = \frac{\ln 5}{0.08} \approx 20.1 \text{ years},$$
the time that the money will last.


The Real Value of the Lottery

32. Following the hint, we let
$$A' = 0.10A - 50{,}000.$$
Solving this equation with initial condition $A(0) = A_0$ yields
$$A(t) = \left(A_0 - 500{,}000\right)e^{0.10t} + 500{,}000.$$
Setting $A(20) = 0$ and solving for $A_0$, we get
$$A_0 = \frac{500{,}000\left(e^2 - 1\right)}{e^2} \approx \$432{,}332.$$

Continuous Compounding

33. (a) After one year compounded continuously, the value of the account will be $S(1) = S_0 e^{r}$. With an $r = 0.08$ (8%) interest rate, we have the value
$$S(1) = S_0 e^{0.08} \approx \$1.083287\,S_0.$$
This is equivalent to a single annual compounding at a rate $r_{\text{eff}} = 8.329\%$.

(b) If we set the annual yield from a single compounding with interest $r_{\text{eff}}$, namely $S_0\left(1 + r_{\text{eff}}\right)$, equal to the annual yield from continuous compounding with interest $r$, namely $S_0 e^{r}$, we have
$$S_0\left(1 + r_{\text{eff}}\right) = S_0 e^{r}.$$
Solving for $r_{\text{eff}}$ yields $r_{\text{eff}} = e^{r} - 1$.

(c) $r_{\text{daily}} = \left(1 + \frac{0.08}{365}\right)^{365} - 1 = 0.0832775$ (i.e., 8.328%) effective annual interest rate, which is extremely close to that achieved by continuous compounding as shown in part (a).

Good Test Equation for Computer or Calculator

34. Student Project.

Your Financial Future

35. We can write the savings equation (10) as
$$A' = 0.08A + 5000, \qquad A(0) = 0.$$
The exact solution by (11) is
$$A(t) = \frac{5000}{0.08}\left(e^{0.08t} - 1\right).$$
We list the amounts, rounded to the nearest dollar, for each of the first 20 years.

  Year   Amount      Year   Amount
   1      5,205       11     88,181
   2     10,844       12    100,731
   3     16,953       13    114,326
   4     23,570       14    129,053
   5     30,739       15    145,007
   6     38,505       16    162,290
   7     46,917       17    181,012
   8     56,030       18    201,293
   9     65,902       19    223,264
  10     76,596       20    247,065

After 20 years at 8%, the contributions deposited total $20 \times \$5{,}000 = \$100{,}000$, while about \$147,065 has accumulated in interest, for a total account value of about \$247,065.

Experiment will show that the interest rate is the more important parameter over 20 years. This answer can be seen in the solution of the annuity equation
$$A(t) = \frac{5000}{0.08}\left(e^{0.08t} - 1\right):$$
the interest rate occurs in the exponent, while the annual deposit simply occurs as a multiplier.
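The table above can be regenerated directly from the closed-form solution; a minimal sketch in plain Python:

```python
import math

def A(t, r=0.08, d=5000):
    # annuity balance: continuous compounding with continuous deposits
    return d / r * (math.exp(r * t) - 1)

A1, A20 = A(1), A(20)
for year in (1, 10, 20):
    print(year, round(A(year)))   # 5205, 76596, and roughly 247065
```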

Mortgaging a House

36. (a) Since the bank earns 1% monthly interest on the outstanding principle of the loan, and

Kelly’s group make monthly payments of $2500 to the bank, the amount of money A(t)

still owed the bank at time t, where t is measured in months starting from when the loan

was made, is given by the savings equation (10) with a = −2500.

Thus, we have

0.01 2500, (0) $200, 000.

dA

A A

dt

= − =

SECTION 2.3 Growth and Decay Phenomena 135

(b) The solution of the savings equation in (a) was seen in (11) to be

A(t) = A(0)e^(rt) + (a/r)(e^(rt) − 1)
     = 200,000e^(0.01t) − (2500/0.01)(e^(0.01t) − 1)
     = −50,000e^(0.01t) + $250,000.

(c) To find the length of time for the loan to be paid off, we set A(t) = 0 and solve for t. Doing this, we have

−50,000e^(0.01t) = −$250,000,

or

0.01t = ln 5, so t = 100 ln 5 ≈ 100(1.609) ≈ 161 months (13 years and 5 months).
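A quick check of the payoff computation, assuming the balance formula above (the helper name `balance` is ours):

```python
import math

def balance(t):
    """Outstanding balance A(t) = -50,000 e^{0.01 t} + 250,000, t in months."""
    return -50_000 * math.exp(0.01 * t) + 250_000

payoff = 100 * math.log(5)  # exact payoff time in months, about 161
```

The balance starts at $200,000 and is (numerically) zero at the payoff time.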

Suggested Journal Entry

37. Student Project


2.4

Linear Models: Mixing and Cooling

Mixing Details

1. Separating variables, we find

dx/x = −2 dt/(100 − t),

from which we get

ln x = 2 ln(100 − t) + c.

We can solve for x(t) using properties of the logarithm, getting

x = e^(2 ln(100 − t) + c) = Ce^(2 ln(100 − t)) = C(100 − t)²,

where C = e^c > 0 is an arbitrary positive constant. Hence, the final solution is

x(t) = ±C(100 − t)² = c₁(100 − t)²,

where c₁ is an arbitrary constant.

English Brine

2. (a) Salt inflow is

(2 lbs/gal)(3 gal/min) = 6 lbs/min.

Salt outflow is

(Q/300 lbs/gal)(3 gal/min) = (Q/100) lbs/min.

The differential equation for Q(t), the amount of salt in the tank, is

dQ/dt = 6 − 0.01Q.

Solving this equation with initial condition Q(0) = 50 yields

Q(t) = 600 − 550e^(−0.01t).

(b) The concentration conc(t) of salt is simply the amount Q(t) divided by the volume (which is constant at 300). Hence the concentration at time t is given by the expression

conc(t) = Q(t)/300 = 2 − (11/6)e^(−0.01t).

(c) As t → ∞, e^(−0.01t) → 0. Hence Q(t) → 600 lbs of salt in the tank.

(d) Either take the limiting amount and divide by 300, or take the limit as t → ∞ of conc(t). The answer is 2 lbs/gal in either case.

(e) Note that the graphs of Q(t) and of conc(t) differ only in the scales on the vertical axis, because the volume is constant. [Figures: Q(t), the number of lbs of salt in the tank, rising from 50 toward 600; conc(t), the concentration of salt in the tank, rising toward 2 lbs/gal.]
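The formulas in parts (a)–(d) are easy to verify numerically; this sketch (function names are ours) checks the initial condition and the limiting values:

```python
import math

def Q(t):
    """Lbs of salt in the 300-gal tank at time t (minutes)."""
    return 600 - 550 * math.exp(-0.01 * t)

def conc(t):
    """Concentration in lbs/gal; the volume is constant at 300 gal."""
    return Q(t) / 300

# Q(0) = 50 lbs; for large t, Q -> 600 lbs and conc -> 2 lbs/gal.
```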

Metric Brine

3. (a) The salt inflow is

(0.1 kg/liter)(4 liters/min) = 0.4 kg/min.

The outflow is (4Q/100) kg/min. Thus, the differential equation for the amount of salt is

dQ/dt = 0.4 − 0.04Q.

Solving this equation with the given initial condition Q(0) = 50 gives

Q(t) = 10 + 40e^(−0.04t).

(b) The concentration conc(t) of salt is simply the amount Q(t) divided by the volume (which is constant at 100). Hence the concentration at time t is given by

conc(t) = Q(t)/100 = 0.1 + 0.4e^(−0.04t).

(c) As t → ∞, e^(−0.04t) → 0. Hence Q(t) → 10 kg of salt in the tank.

(d) Either take the limiting amount and divide by 100, or take the limit as t → ∞ of conc(t). The answer is 0.1 kg/liter in either case.

Salty Goal

4. The salt inflow is given by

(2 lb/gal)(3 gal/min) = 6 lbs/min.

The outflow is (3/20)Q. Thus,

dQ/dt = 6 − (3/20)Q.

Solving this equation with the given initial condition Q(0) = 5 yields the amount

Q(t) = 40 − 35e^(−3t/20).

To determine how long this process should continue in order to raise the amount of salt in the tank to 25 lbs, we set Q(t) = 25 and solve for t to get

t = (20/3) ln(7/3) ≈ 5.6 minutes.
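A one-line check of the time found above (names are ours):

```python
import math

def Q(t):
    """Lbs of salt in the tank at time t (minutes)."""
    return 40 - 35 * math.exp(-3 * t / 20)

t_star = (20 / 3) * math.log(7 / 3)  # time at which Q should equal 25 lbs
```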

Mysterious Brine

5. Input in lbs/min is 2x (where x is the unknown concentration of the brine). Output is (2Q/100) lbs/min. The differential equation is given by

dQ/dt = 2x − 0.01Q,

which has the general solution

Q(t) = 200x + ce^(−0.01t).

Because the tank had no salt initially, Q(0) = 0, which yields c = −200x. Hence, the amount of salt in the tank at time t is

Q(t) = 200x(1 − e^(−0.01t)).

We are given that

Q(120) = (1.4)(200) = 280,

which we solve for x to get x ≈ 2.0 lb/gal.

Salty Overflow

6. Let x = amount of salt in the tank at time t. We have

dx/dt = (1 lb/gal)(3 gal/min) − [x lb / (300 + (3 − 1)t) gal](1 gal/min),

with initial volume = 300 gal and capacity = 600 gal.

IVP: dx/dt = 3 − x/(300 + 2t), x(0) = 0.

The DE is linear,

dx/dt + x/(300 + 2t) = 3,

with integrating factor

μ = e^(∫ dt/(300 + 2t)) = e^((1/2) ln(300 + 2t)) = (300 + 2t)^(1/2).

Thus,

(300 + 2t)^(1/2) dx/dt + (300 + 2t)^(−1/2) x = 3(300 + 2t)^(1/2),

(300 + 2t)^(1/2) x = ∫ 3(300 + 2t)^(1/2) dt = (300 + 2t)^(3/2) + c,

so

x(t) = (300 + 2t) + c(300 + 2t)^(−1/2).

The initial condition x(0) = 0 implies 0 = 300 + c/√300, so c = −3000√3.

The solution to the IVP is

x(t) = 300 + 2t − 3000√3 (300 + 2t)^(−1/2).

The tank will be full when 300 + 2t = 600, so t = 150 min. At that time,

x(150) = 300 + 2(150) − 3000√3 (300 + 2(150))^(−1/2) ≈ 388 lbs.
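Because the closed-form solution here is easy to get wrong, a crude Euler integration of the IVP provides an independent check (the step size and names are our arbitrary choices):

```python
import math

def x_exact(t):
    """Closed-form solution x(t) = 300 + 2t - 3000*sqrt(3)*(300 + 2t)^(-1/2)."""
    return 300 + 2 * t - 3000 * math.sqrt(3) * (300 + 2 * t) ** -0.5

# Euler's method on dx/dt = 3 - x/(300 + 2t), x(0) = 0, up to t = 150 min.
x, t, dt = 0.0, 0.0, 0.001
while t < 150:
    x += dt * (3 - x / (300 + 2 * t))
    t += dt
```

The two answers agree near 388 lbs at the moment the tank fills.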

Cleaning Up Lake Erie

7. (a) The inflow of pollutant is

(40 mi³/yr)(0.01%) = 0.004 mi³/yr,

and the outflow is

(40 mi³/yr)(V(t) mi³ / 100 mi³) = 0.4V(t) mi³/yr.

Thus, the DE is

dV/dt = 0.004 − 0.4V

with the initial condition

V(0) = (0.05%)(100 mi³) = 0.05 mi³.

(b) Solving the IVP in part (a), we get the expression

V(t) = 0.01 + 0.04e^(−0.4t),

where V is the volume of the pollutant in cubic miles.

Correcting a Goof

8. Input in lbs/min is 0 (she's not adding any salt). Output is 0.03Q lbs/min. The differential equation is

dQ/dt = −0.03Q,

which has the general solution

Q(t) = ce^(−0.03t).

Using the initial condition Q(0) = 20, we get the particular solution

Q(t) = 20e^(−0.03t).

Because she wants to reduce the amount of salt in the tank to 10 lbs, we set

Q(t) = 10 = 20e^(−0.03t).

Solving for t, we get

t = (100/3) ln 2 ≈ 23 minutes.

(c) (This part continues Problem 7.) A pollutant concentration of 0.02% corresponds to

(0.02%)(100 mi³) = 0.02 mi³

of pollutant. Finally, setting V(t) = 0.02 gives the equation

0.02 = 0.01 + 0.04e^(−0.4t),

which yields

t = 2.5 ln 4 ≈ 3.5 years.
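A numeric check of the Lake Erie solution V(t) and the cleanup time (function name is ours):

```python
import math

def V(t):
    """Pollutant volume in cubic miles, t in years."""
    return 0.01 + 0.04 * math.exp(-0.4 * t)

t_clean = 2.5 * math.log(4)  # time for V to fall to 0.02 cubic miles
```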


Changing Midstream

9. Let x = amount of salt in the tank at time t.

(a) IVP:

dx/dt = (1 lb/gal)(4 gal/sec) − (x/200 lb/gal)(4 gal/sec), x(0) = 0.

(b) x_eq = (1 lb/gal)(200 gal) = 200 lb.

(c) Now let x = amount of salt in the tank at time t, but reset t = 0 to be the moment the second faucet is turned on. This setup gives

dx/dt = (1 lb/gal)(4 gal/sec) + (2 lb/gal)(2 gal/sec) − [x/(200 + 2t) lb/gal](4 gal/sec),

which gives a new IVP:

dx/dt = 8 − 4x/(200 + 2t), x(0) = x_eq = 200.

(d) To find t_f: 200 + 2t_f = 1000, so t_f = 400 sec.

(e) The DE in the new IVP is

dx/dt + 2x/(100 + t) = 8,

which is linear with integrating factor

μ = e^(∫ 2 dt/(100 + t)) = e^(2 ln(100 + t)) = (100 + t)².

Thus,

(100 + t)² dx/dt + 2(100 + t)x = 8(100 + t)²,

and

(100 + t)² x = ∫ 8(100 + t)² dt = (8/3)(100 + t)³ + c,

so

x(t) = (8/3)(100 + t) + c(100 + t)^(−2).

The initial condition x(0) = 200 implies 200 = (8/3)(100) + c/(100)², or c = −(2/3) × 10⁶. Thus the solution to the new IVP is

x = (8/3)(100 + t) − (1/3)(2 × 10⁶)(100 + t)^(−2).

When t_f = 400, x(400) = (8/3)(500) − (1/3)(2 × 10⁶)/(500)² ≈ 1330.7 lb.

(f) After the tank starts to overflow,

Inflow: (1 lb/gal)(4 gal/sec) [1st faucet] + (2 lb/gal)(2 gal/sec) [2nd faucet] = 8 lbs/sec.

Outflow: (4 gal/sec [drain] + 2 gal/sec [overflow])(x/1000 lb/gal) = (6x/1000) lbs/sec.

Hence for t > 400 sec, the IVP now becomes

dx/dt = 8 − 6x/1000, x(400) = 1330.7 lb.
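The part (e) solution can be checked numerically (function name is ours; t is seconds after the second faucet opens):

```python
def x(t):
    """Salt (lb) in the tank: x(t) = (8/3)(100+t) - (2e6/3)(100+t)^(-2)."""
    return (8 / 3) * (100 + t) - (2e6 / 3) * (100 + t) ** -2

x_full = x(400)  # salt when the 1000-gal tank becomes full; exactly 3992/3
```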

Cascading Tanks

10. (a) The inflow of salt into tank A is zero because fresh water is added. The outflow of salt is

(Q_A/100 lbs/gal)(2 gal/min) = (Q_A/50) lb/min.

Tank A initially has a total of 0.5(100) = 50 pounds of salt, so the initial-value problem is

dQ_A/dt = −Q_A/50, Q_A(0) = 50 lbs.

(b) Solving for Q_A gives

Q_A(t) = ce^(−t/50),

and the initial condition Q_A(0) = 50 gives

Q_A(t) = 50e^(−t/50).

(c) The input to the second tank is

(Q_A/100 lb/gal)(2 gal/min) = (Q_A/50) lb/min = e^(−t/50) lb/min.

The output from tank B is

(Q_B/100 lb/gal)(2 gal/min) = (Q_B/50) lbs/min.

Thus the differential equation for tank B is

dQ_B/dt = e^(−t/50) − Q_B/50,

with initial condition Q_B(0) = 0.

(d) Solving the initial-value problem in (c), we get

Q_B(t) = te^(−t/50) pounds.
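One way to confirm the tank-B solution without redoing the integration is to check the DE numerically with a finite-difference derivative (the names and the sample point t = 30 are our arbitrary choices):

```python
import math

def QB(t):
    """Claimed solution for tank B: Q_B(t) = t e^{-t/50}."""
    return t * math.exp(-t / 50)

# Compare a central-difference derivative of QB against the right-hand
# side of dQ_B/dt = e^{-t/50} - Q_B/50 at a sample time.
t, h = 30.0, 1e-5
lhs = (QB(t + h) - QB(t - h)) / (2 * h)
rhs = math.exp(-t / 50) - QB(t) / 50
```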

More Cascading Tanks

11. (a) Input to the first tank is

(0 gal alch/gal)(1 gal/min) = 0 gal alch/min.

Output is (x₀/2) gal alch/min. The tank initially contains 1 gallon of alcohol, so x₀(0) = 1. Thus, the differential equation is given by

dx₀/dt = −(1/2)x₀.

Solving, we get x₀(t) = ce^(−t/2). Substituting x₀(0) = 1 gives c = 1, so the first tank's alcohol content is

x₀(t) = e^(−t/2).

(b) The first step of a proof by induction is to check the initial case, here n = 0. For n = 0 we have t⁰ = 1, 0! = 1, and 2⁰ = 1, so the given equation yields x₀(t) = e^(−t/2), the result found in part (a). The second part of an induction proof is to assume that the statement holds for case n, and then prove that it holds for case n + 1. Hence, we assume

x_n(t) = tⁿ e^(−t/2) / (n! 2ⁿ),

which means the concentration flowing into the next tank will be x_n/2 (because the volume is 2 gallons). The input of the next tank is (1/2)x_n and the output is (1/2)x_{n+1}(t). The differential equation for the (n + 1)st tank will be

dx_{n+1}/dt + (1/2)x_{n+1} = tⁿ e^(−t/2) / (n! 2^(n+1)), x_{n+1}(0) = 0.

Solving this IVP, we find

x_{n+1}(t) = t^(n+1) e^(−t/2) / ((n + 1)! 2^(n+1)),

which is what we needed to verify. The induction step is complete.

(c) To find the maximum of x_n(t), we take its derivative, getting

x_n′ = n t^(n−1) e^(−t/2) / (n! 2ⁿ) − tⁿ e^(−t/2) / (n! 2^(n+1)).

Setting this to zero, the equation reduces to 2n t^(n−1) − tⁿ = 0, which has roots t = 0, 2n. When t = 0 the function is a minimum, but when t = 2n we apply the first derivative test and conclude it is a local maximum point. Substituting this into x_n(t) yields the maximum value

M_n ≡ x_n(2n) = (2n)ⁿ e^(−n) / (n! 2ⁿ) = nⁿ e^(−n) / n!.

We can also see that x_n(t) approaches 0 as t → ∞, so we can be sure this point is a global maximum of x_n(t).

(d) Direct substitution of Stirling's approximation for n! into the formula for M_n in part (c) gives M_n ≈ (2πn)^(−1/2).
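The Stirling estimate in part (d) can be compared against the exact M_n = nⁿe^(−n)/n! for moderate n (function names are ours):

```python
import math

def M(n):
    """Exact maximum concentration in tank n: n^n e^{-n} / n!."""
    return n ** n * math.exp(-n) / math.factorial(n)

def stirling_approx(n):
    """The part (d) approximation (2*pi*n)^{-1/2}."""
    return (2 * math.pi * n) ** -0.5
```

The agreement improves as n grows, as Stirling's approximation predicts.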

Three Tank Setup

12. Let x, y, and z be the amounts of salt in Tanks 1, 2, and 3, respectively.

(a) For Tank 1:

dx/dt = (0 lbs/gal)(5 gal/sec) − (x/200 lbs/gal)(5 gal/sec),

so the IVP for x(t) is

dx/dt = −x/40, x(0) = 20.

The IVP for the identical Tank 2 is

dy/dt = −y/40, y(0) = 20.

(b) For Tank 1, dx/dt = −x/40, so x = 20e^(−t/40). For Tank 2, dy/dt = −y/40, so y = 20e^(−t/40).

(c) dz/dt = x/40 + y/40 − (z/500 lbs/gal)(10 gal/sec) = (1/2)e^(−t/40) + (1/2)e^(−t/40) − z/50.

Again we have a linear equation,

dz/dt + z/50 = e^(−t/40),

with integrating factor

μ = e^(∫ dt/50) = e^(t/50).

Thus

d/dt [e^(t/50) z] = e^(t/50) e^(−t/40) = e^(−t/200),

e^(t/50) z = ∫ e^(−t/200) dt = −200e^(−t/200) + c,

so

z(t) = −200e^(−t/40) + ce^(−t/50).

Another Solution Method

13. Separating variables, we get

dT/(T − M) = −k dt.

Solving this equation yields

ln|T − M| = −kt + c, or |T − M| = e^c e^(−kt).

Eliminating the absolute value, we can write

T − M = ±e^c e^(−kt) = Ce^(−kt),

where C is an arbitrary constant. Hence, we have T(t) = M + Ce^(−kt). Finally, using the condition T(0) = T₀ gives

T(t) = M + (T₀ − M)e^(−kt).

Still Another Approach

14. If y(t) = T(t) − M, then dy/dt = dT/dt and T(t) = y(t) + M. Hence the equation becomes

dy/dt = −k(y + M − M), or dy/dt = −ky,

a decay equation.

Using the Time Constant

15. (a) T(t) = T₀e^(−kt) + M(1 − e^(−kt)), from Equation (8). In this case, M = 95, T₀ = 75, and k = 1/4, yielding the expression

T(t) = 75e^(−t/4) + 95(1 − e^(−t/4)),

where t is time measured in hours. Substituting t = 2 (2 hours after noon) yields T(2) ≈ 82.9°F.

(b) Setting T(t) = 80 and solving for t yields

t = −4 ln(3/4) ≈ 1.15 hours,

which translates to 1:09 P.M.

A Chilling Thought

16. (a) T(t) = T₀e^(−kt) + M(1 − e^(−kt)), from Equation (8). In this problem, T₀ = 70, M = 10, and T(1/2) = 50 (taking time to be in hours). Thus, we have the equation

50 = 10 + 60e^(−k/2),

from which we can find the rate constant

k = −2 ln(2/3) ≈ 0.81.

After one hour, the temperature will have fallen to

T(1) = 10 + 60e^(−2 ln(3/2)) = 10 + 60(4/9) ≈ 36.7°F.

(b) Setting T(t) = 15 gives the equation

15 = 10 + 60e^(−2 ln(3/2) t).

Solving for t gives

t = ln 12 / (2 ln(3/2)) ≈ 3.06 hours (3 hrs, 3.6 min).

Drug Metabolism

17. The drug concentration C(t) satisfies

dC/dt = a − bC,

where a and b are constants, with C(0) = 0. Solving this IVP gives

C(t) = (a/b)(1 − e^(−bt)).

As t → ∞, we have e^(−bt) → 0 (as long as b is positive), so the limiting concentration is a/b. Notice that b must be positive, or for large t we would have C(t) < 0, which makes no sense because C(t) is the amount in the body. To reach one-half of the limiting amount a/b, we set

(a/b)(1 − e^(−bt)) = a/(2b)

and solve for t, getting

t = ln 2 / b.

Warm or Cold Beer?

18. Again, we use

T(t) = M + (T₀ − M)e^(−kt).

In this case, M = 70 and T₀ = 35. If we measure t in minutes, we have T(10) = 40, giving

40 = 70 − 35e^(−10k).

Solving for the decay constant k, we find

k = −ln(6/7)/10 ≈ 0.0154.

Thus, the equation for the temperature after t minutes is

T(t) ≈ 70 − 35e^(−0.0154t).

Substituting t = 20 gives T(20) ≈ 44.3°F.
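A quick numeric confirmation of the beer-warming curve (names are ours; k is kept in exact form rather than rounded to 0.0154):

```python
import math

k = -math.log(6 / 7) / 10  # decay constant fitted from T(10) = 40

def T(t):
    """Temperature after t minutes: T(t) = 70 - 35 e^{-kt}."""
    return 70 - 35 * math.exp(-k * t)
```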


The Coffee and Cream Problem

19. The basic law of heat transfer states that if two substances at different temperatures are mixed together, then the heat (calories) lost by the hotter substance is equal to the heat gained by the cooler substance. The equation expressing this law is

M₁S₁Δt₁ = M₂S₂Δt₂,

where M₁ and M₂ are the masses of the substances, S₁ and S₂ are the specific heats, and Δt₁ and Δt₂ are the changes in temperatures of the two substances, respectively.

In this problem we assume the specific heat of coffee (the ability of the substance to hold heat) is the same as the specific heat of cream. Defining

C(0) = initial temperature of the coffee,
R = room temperature (= temperature of the cream),
T = temperature of the coffee after the cream is added,

we have

M₁(C(0) − T) = M₂(T − R).

If we assume the mass M₂ of the cream is 1/10 the mass M₁ of the coffee (the exact fraction does not affect the answer), we have

10(C(0) − T) = T − R.

The temperature of the coffee after John initially adds the cream is

T = (10C(0) + R)/11.

After that, John's and Maria's coffees cool according to the basic law of cooling:

John: [(10C(0) + R)/11] e^(kt), Maria: C(0)e^(kt),

where we measure t in minutes. At time t = 10 the two coffees will have temperatures

John: [(10C(0) + R)/11] e^(10k), Maria: C(0)e^(10k).

Maria then adds the same amount of cream to her coffee, which means John's and Maria's coffees now have temperatures

John: [(10C(0) + R)/11] e^(10k), Maria: [10C(0)e^(10k) + R]/11.

Multiplying each of these temperatures by 11, subtracting 10C(0)e^(10k), and comparing Re^(10k) with R, we conclude that John drinks the hotter coffee.

Professor Farlow's Coffee

20. T(t) = T₀e^(−kt) + M(1 − e^(−kt)). For this problem, M = 70 and T₀ = 200°F. The equation for the coffee temperature is

T(t) = 70 + 130e^(−kt).

Measuring t in hours, we are given

T(1/4) = 120 = 70 + 130e^(−k/4),

so the rate constant is

k = −4 ln(5/13) ≈ 3.8.

Hence

T(t) = 70 + 130e^(−3.8t).

Finally, setting T(t) = 90 yields

90 = 70 + 130e^(−3.8t),

from which we find t ≈ 0.49 hours, or 29 minutes and 24 seconds.

Case of the Cooling Corpse

21. (a) T(t) = T₀e^(−kt) + M(1 − e^(−kt)). We know that M = 50 and T₀ = 98.6°F. The first measurement takes place at unknown time t₁, so

T(t₁) = 70 = 50 + 48.6e^(−kt₁),

or 48.6e^(−kt₁) = 20. The second measurement is taken two hours later, at t₁ + 2, yielding

60 = 50 + 48.6e^(−k(t₁ + 2)),

or 48.6e^(−k(t₁ + 2)) = 10. Dividing the second equation by the first gives the relationship e^(−2k) = 1/2, from which k = (ln 2)/2. Using this value for k, the equation for T(t₁) gives

70 = 50 + 48.6e^(−(ln 2/2) t₁),

from which we find t₁ ≈ 2.6 hours. Thus, the person was killed approximately 2 hours and 36 minutes before 8 P.M., or at 5:24 P.M.

(b) Following exactly the same steps as in part (a) but with T₀ = 98.2°F, the sequence of equations is

T(t₁) = 70 = 50 + 48.2e^(−kt₁) ⇒ 48.2e^(−kt₁) = 20,
T(t₁ + 2) = 60 = 50 + 48.2e^(−k(t₁ + 2)) ⇒ 48.2e^(−k(t₁ + 2)) = 10.

Dividing the second equation by the first still gives the relationship e^(−2k) = 1/2, so k = (ln 2)/2. Now we have

T(t₁) = 70 = 50 + 48.2e^(−(ln 2/2) t₁),

which gives t₁ ≈ 2.54 hours, or 2 hours and 32 minutes. This estimates the time of the murder at 5:28 P.M., only 4 minutes later than calculated in part (a).
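Both parts follow the same pattern, so a single helper (ours) checks the two elapsed times at once:

```python
import math

k = math.log(2) / 2  # per hour, fixed by the two-hour drop from 70 to 60 degrees

def t1(T0):
    """Hours before the first (70 degree) reading, from T0 e^{-k t} decay above 50."""
    return math.log((T0 - 50) / 20) / k

t_a = t1(98.6)  # part (a)
t_b = t1(98.2)  # part (b)
```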

A Real Mystery

22. T(t) = T₀e^(−kt) + M(1 − e^(−kt)).

While the can is in the refrigerator, T₀ = 70 and M = 40, yielding the equation

T(t) = 40 + 30e^(−kt).

Measuring time in minutes, we have

T(15) = 40 + 30e^(−15k) = 60,

which gives k = −(1/15) ln(2/3) ≈ 0.027. Letting t₁ denote the time the can was removed from the refrigerator, we know that the temperature at that time is given by

T(t₁) = 40 + 30e^(−kt₁),

which would be the W₀ for the warming equation W(t), the temperature after the can is removed from the refrigerator:

W(t) = 70 + (W₀ − 70)e^(−kt)

(the k of the can doesn't change). Substituting W₀ from t = t₁ and simplifying, we have

W(t) = 70 + 30(e^(−kt₁) − 1)e^(−kt).

The initial time for this equation is t₁ (the time the can was taken out of the refrigerator), so the elapsed time at 2 P.M. will be 60 − t₁ minutes, yielding the equation in t₁:

W(60 − t₁) = 60 = 70 + 30(e^(−kt₁) − 1)e^(−k(60 − t₁)).

This simplifies to

1/3 = e^(−k(60 − t₁)) − e^(−60k),

which is relatively easy to solve for t₁ (knowing that k ≈ 0.027). The solution is t₁ ≈ 37; hence the can was removed from the refrigerator at 1:37 P.M.

Computer Mixing

23. y′ + 2y/(1 − t) = 1, y(0) = 0

When the inflow is less than the outflow, we note that the amount of salt y(t) in the tank becomes zero when t = 1, which is also when the tank is emptied. [Figure: y(t) rising from 0 and returning to 0 at t = 1.]

24. y′ + 2y/(1 + t) = 1, y(0) = 0

When the inflow is greater than the outflow, the amount of dissolved substance keeps growing without end. [Figure: y(t) increasing without bound.]

Suggested Journal Entry

25. Student Project

2.5

Nonlinear Models: Logistic Equation

Equilibria

Note: Problems 1–6 are all autonomous equations, so lines of constant slope (isoclines) are horizontal lines.

1. y′ = ay + by², (a > 0, b > 0)

We find the equilibrium points by solving

y′ = ay + by² = 0,

getting y = 0, −a/b. By inspecting

y′ = y(a + by),

we see that solutions have positive slope (y′ > 0) when y > 0 or y < −a/b, and negative slope (y′ < 0) for −a/b < y < 0. Hence, the equilibrium solution y(t) ≡ 0 is unstable, and the equilibrium solution y(t) ≡ −a/b is stable. [Figure: y = 0, unstable equilibrium; y = −a/b, stable equilibrium.]

2. y′ = ay − by², (a > 0, b > 0)

We find the equilibrium points by solving

y′ = ay − by² = 0,

getting y = 0, a/b. By inspecting

y′ = y(a − by),

we see that solutions have negative slope (y′ < 0) when y < 0 or y > a/b, and positive slope (y′ > 0) for 0 < y < a/b. Hence, the equilibrium solution y(t) ≡ 0 is unstable, and the equilibrium solution y(t) ≡ a/b is stable. [Figure: y = 0, unstable equilibrium; y = a/b, stable equilibrium.]

3. y′ = −ay + by², (a > 0, b > 0)

We find the equilibrium points by solving

y′ = −ay + by² = 0,

getting y = 0, a/b. By inspecting

y′ = y(−a + by),

we see that solutions have positive slope when y < 0 or y > a/b, and negative slope for 0 < y < a/b. Hence, the equilibrium solution y(t) ≡ 0 is stable, and the equilibrium solution y(t) ≡ a/b is unstable. [Figure: y = 0, stable equilibrium; y = a/b, unstable equilibrium.]

4. y′ = −ay − by², (a > 0, b > 0)

We find the equilibrium points by solving

y′ = −ay − by² = 0,

getting y = 0, −a/b. By inspecting

y′ = −y(a + by),

we see that solutions have negative slope when y > 0 or y < −a/b, and positive slope for −a/b < y < 0. Hence, the equilibrium solution y(t) ≡ 0 is stable, and the equilibrium solution y(t) ≡ −a/b is unstable. [Figure: y = 0, stable equilibrium; y = −a/b, unstable equilibrium.]

5. y′ = e^y − 1

Solving e^y − 1 = 0 for y, we get y = 0; hence we have one equilibrium (constant) solution, y(t) ≡ 0. Also, y′ > 0 for y positive and y′ < 0 for y negative. This says that y(t) ≡ 0 is an unstable equilibrium point. [Figure: y = 0, unstable equilibrium.]

6. y′ = y − √y

Setting y′ = 0, we find equilibrium points at y = 0 and 1. The equilibrium at y = 0 is stable; that at y = 1 is unstable. Note also that the DE is only defined when y ≥ 0. [Figure: y = 0, stable equilibrium; y = 1, unstable equilibrium; DE not defined for y < 0.]

Nonautonomous Sketching

For nonautonomous equations, the lines of constant slope are not horizontal lines as they were in the

autonomous equations in Problems 1–6.

7. y′ = y(y − t)

In this equation we observe that y′ = 0 when y = 0 and when y = t; y ≡ 0 is an equilibrium, but y = t is just an isocline of horizontal slopes. We can draw these lines in the ty-plane with horizontal elements passing through them. We then observe from the DE that when

y > 0 and y > t, the slope is positive;
y > 0 and y < t, the slope is negative;
y < 0 and y > t, the slope is negative;
y < 0 and y < t, the slope is positive.

From the preceding facts, we surmise that the solutions behave according to our simple analysis of the sign of y′. As can be seen from the figure, the equilibrium y ≡ 0 is stable for t > 0 and unstable for t < 0.

8. y′ = (y − t)²

In this equation we observe that y′ = 0 when y = t. We can draw isoclines y − t = c and elements with slope y′ = c² passing through them. Note that the isoclines for c = ±1 are also solutions of the DE. Note also that for this DE the slopes are all positive.

9. y′ = sin(yt)

Isoclines of horizontal slope (dashed) are the hyperbolas yt = ±nπ for n = 0, 1, 2, …. On the computer-drawn graph you can sketch the hyperbolas for isoclines and verify the alternating occurrence of positive and negative slopes between them, as specified by the DE. Only y ≡ 0 is an equilibrium (unstable for t < 0, stable for t > 0).

Inflection Points

10. y′ = ry(1 − y/L)

We differentiate with respect to t (using the chain rule), and then substitute for dy/dt from the DE. This gives

d²y/dt² = r(1 − y/L)(dy/dt) + ry(−(1/L)(dy/dt)) = r(1 − 2y/L)(dy/dt) = r²y(1 − y/L)(1 − 2y/L).

Setting d²y/dt² = 0 and solving for y yields y = 0, L, L/2. The values y = 0 and y = L are equilibrium points; y = L/2 is an inflection point. See text Figure 2.5.8.

11. y′ = −ry(1 − y/T)

We differentiate with respect to t (using the chain rule), and then substitute for dy/dt from the DE. This gives

d²y/dt² = −r(1 − 2y/T)(dy/dt) = r²y(1 − y/T)(1 − 2y/T).

Setting d²y/dt² = 0 and solving for y yields y = 0, T, T/2. The values y = 0 and y = T are equilibrium points; y = T/2 is an inflection point. See text Figure 2.5.9.

12. y′ = cos(y − t)

We differentiate y′ with respect to t (using the chain rule), and then substitute for dy/dt from the DE. This gives

d²y/dt² = −sin(y − t)(dy/dt − 1) = −sin(y − t)(cos(y − t) − 1).

Setting d²y/dt² = 0 and solving yields y − t = nπ for n = 0, ±1, ±2, …. Note that the inflection points change with t in this nonautonomous case. See text Figure 2.5.3, graph for (2), to see that the inflection points occur only where y′ = −1, so they lie along the lines y = t + mπ, where m is an odd integer.

Logistic Equation: y′ = ry(1 − y/L)

13. (a) We rewrite the logistic DE by separation of variables and partial fractions to obtain

[1/y + (1/L)/(1 − y/L)] dy = r dt.

Integrating gives

ln y − ln|1 − y/L| = rt + c.

If y₀ > L, we know by qualitative analysis (see text Figure 2.5.8) that y > L for all future time. Thus ln|1 − y/L| = ln(y/L − 1) in this case, and the implicit solution (8) becomes

y / (y/L − 1) = Ce^(rt), with C = y₀ / (y₀/L − 1).

Substitution of this new value of C and solving for y in the case y₀ > L gives

y(t) = L / (1 + (L/y₀ − 1)e^(−rt)),

which turns out (surprise!) to match (10) for y₀ < L. You must show the algebraic steps to confirm this fact.

(b) The derivation of formula (10) required ln|1 − y/L|, which is undefined if y = L. Thus, although formula (10) happens to evaluate to y ≡ L when y₀ = L, our derivation of the formula is not legal in that case, so it is not legitimate to use (10). However, the original logistic DE y′ = ry(1 − y/L) is fine if y ≡ L and reduces in that case to dy/dt = 0, so y = constant (which must be L if y(0) = L).

(c) The solution formula (10) states

y(t) = L / (1 + (L/y₀ − 1)e^(−rt)).

If 0 < y₀ < L, the denominator is greater than 1, and as t increases, y(t) approaches L from below. If y₀ > L, the denominator is less than 1, and as t increases, y(t) approaches L from above. If y₀ = L, then y(t) = L. These implications of the formula are confirmed by the graph of Figure 2.5.8.

(d) By straightforward though lengthy computations, taking the second derivative of

y(t) = L / (1 + (L/y₀ − 1)e^(−rt))    (10)

gives

y″ = L r² (L/y₀ − 1) e^(−rt) [(L/y₀ − 1)e^(−rt) − 1] / [1 + (L/y₀ − 1)e^(−rt)]³.

Setting y″ = 0, we get

(L/y₀ − 1)e^(−rt) = 1.

Solving for t, we get t* = (1/r) ln(L/y₀ − 1). Substituting this value into the analytical solution for the logistic equation, we get y(t*) = L/2. At t* the rate y′ is

r(L/2)(1 − (L/2)/L) = r(L/2)(1/2) = rL/4.

Fitting the Logistic Law

14. The logistic equation is

y′ = ry(1 − y/L).

If initially the population doubles every hour, the doubling time satisfies

t_d = (ln 2)/r = 1,

which gives the growth rate r = ln 2 ≈ 0.69. We are also given L = 5 × 10⁹ and y₀ = 10⁹. The logistic population after 4 hours is calculated from the analytic solution formula,

y(4) = L / (1 + (L/y₀ − 1)e^(−rt)) = 5 × 10⁹ / (1 + 4e^(−4 ln 2)) = 5 × 10⁹ / (1 + 4/16) = 4 × 10⁹.

Culture Growth

15. Let y = population at time t, so y(0) = 1000 and L = 100,000. The DE solution, from equation (10), is

y = 100,000 / (1 + (100,000/1000 − 1)e^(−rt)) = 100,000 / (1 + 99e^(−rt)).

To evaluate r, substitute the given fact that when t = 1, the population has doubled:

y(1) = 2(1000) = 100,000 / (1 + 99e^(−r)),
2(1 + 99e^(−r)) = 100,
198e^(−r) = 98,
e^(−r) = 98/198,
r = ln(198/98) ≈ 0.703.

Thus y(t) = 100,000 / (1 + 99e^(−0.703t)).

(a) After 5 days: y(5) = 100,000 / (1 + 99e^(−0.703(5))) ≈ 25,348 cells.

(b) When y = 50,000, find t:

50,000 = 100,000 / (1 + 99e^(−0.703t)),
1 + 99e^(−0.703t) = 2,
t ≈ 6.536 days.
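A numeric check of the fitted logistic curve (names are ours; r is kept exact rather than rounded to 0.703):

```python
import math

r = math.log(198 / 98)  # growth rate fitted from the first-day doubling

def y(t):
    """Logistic population: 100,000 / (1 + 99 e^{-rt}), y(0) = 1000."""
    return 100_000 / (1 + 99 * math.exp(-r * t))

t_half = math.log(99) / r  # time to reach half the carrying capacity
```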

SECTION 2.5 Nonlinear Models: Logistic Equation 161

Logistic Model with Harvesting

16. ( ) 1

y

y ry h t

L

⎛ ⎞

′ = − −

⎜ ⎟

⎝ ⎠

(a) Graphs of y′ versus y for different val-

ues of harvesting h are shown. Feasible

harvests are those values of h that keep

the slope y′ positive for some

0 y L < < .

Because the curve y′ versus y is always

a maximum at

2

L

y = ,

y

0.8

–0.2

0.2

0.2 0.4 0.6 1

y'

r

L

=

=

1

1

h = 0

′ = − ( ) − y y y 1 h

h = 0.1

h = 0.2

h = 0.25

we find the value of h that gives 0

2

L

y

⎛ ⎞

′ =

⎜ ⎟

⎝ ⎠

; this will be the maximum sustainable

harvesting value

max

h . By setting 0

2

L

y

⎛ ⎞

′ =

⎜ ⎟

⎝ ⎠

, we find

max

.

4

rL

h =

(b) As a preliminary to graphing, we find the equilibrium solutions under harvesting by

setting 0 y′ = in the equation

1

y

y r y h

L

⎛ ⎞

′ = − −

⎜ ⎟

⎝ ⎠

getting

2

0

h

y Ly L

R

⎛ ⎞

− + =

⎜ ⎟

⎝ ⎠

.

Solving for y, we get

( )

2

4

1 2

,

2

hL

r

L L

y y

± −

=

where both roots are positive. The smaller root represents the smaller equilibrium

(constant) solution, which is unstable, and the larger root is the larger stable equilibrium

solution. As we say in part (a), harvesting (measured in fish/day or some similar unit)

must satisfy

4

rL

h < .

162 CHAPTER 2 Linearity and Nonlinearity

Straight logistic, y′ = y(1 − y): stable equilibrium at y = 1, unstable equilibrium at y = 0.

Logistic with maximum sustainable harvesting, y′ = y(1 − y) − 0.25: a single semistable equilibrium at y = 1/2.

Note that the equilibrium value with harvesting h = 0.25 is lower than the equilibrium value without harvesting. Note further that maximum harvesting has changed the phase line and the direction of solutions below equilibrium. The harvesting graph implies that fishing is fine when the population is above equilibrium, but wipes out the population when it is below equilibrium.
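The harvested equilibria and the bound h_max = rL/4 can be checked numerically; a sketch (illustrative values r = L = 1, h = 0.1, not from the text):

```python
import math

def equilibria(r, L, h):
    """Roots of r*y*(1 - y/L) - h = 0, i.e. y**2 - L*y + h*L/r = 0."""
    disc = L * L - 4 * h * L / r
    if disc < 0:
        return []                       # harvesting exceeds h_max = r*L/4
    s = math.sqrt(disc)
    return [(L - s) / 2, (L + s) / 2]

r, L = 1.0, 1.0
assert equilibria(r, L, r * L / 4 + 0.01) == []   # above h_max: no equilibria
y_lo, y_hi = equilibria(r, L, 0.1)                # unstable, then stable root
for y in (y_lo, y_hi):                            # both make y' vanish
    assert abs(r * y * (1 - y / L) - 0.1) < 1e-12
print(round(y_lo, 4), round(y_hi, 4))  # about 0.1127 0.8873
```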

Campus Rumor

17. Let x = number, in thousands, of people who have heard the rumor.

dx/dt = kx(80 − x),  x(0) = 1,  x(1) = 10

Rearranging the DE to standard logistic form (6) gives

dx/dt = 80k x(1 − x/80).

With r = 80k, the solution, by equation (10), is

x(t) = 80/(1 + 79e^(−rt)).

To evaluate r, substitute the given fact that when t = 1, ten thousand people have heard the rumor:

10 = 80/(1 + 79e^(−r)),

so 1 + 79e^(−r) = 8, e^(−r) = 7/79, and r ≈ 2.4235. Thus

x(t) = 80/(1 + 79e^(−2.4235t)).
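The constant r can be recomputed from e^(−r) = 7/79; a quick numeric check (a sketch, not part of the text):

```python
import math

r = -math.log(7 / 79)                            # from e**(-r) = 7/79
x = lambda t: 80 / (1 + 79 * math.exp(-r * t))   # thousands who have heard
assert abs(x(0) - 1) < 1e-12                     # initial condition
assert abs(x(1) - 10) < 1e-9                     # ten thousand at t = 1
print(round(r, 4))  # about 2.4235
```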

SECTION 2.5 Nonlinear Models: Logistic Equation 163

Water Rumor

18. Let N be the number of people who have heard the rumor at time t.

(a) dN/dt = kN(200,000 − N) = 200,000k N(1 − N/200,000)

(b) Yes, this is a logistic model.

(c) Set dN/dt = 0. Equilibrium solutions: N = 0, N = 200,000.

(d) Let r = 200,000k. Assume N(0) = 1. Then

N(t) = 200,000/(1 + 199,999e^(−rt)).

At t = 1 week,

1000 = 200,000/(1 + 199,999e^(−r)),

so 1 + 199,999e^(−r) = 200, e^(−r) = 199/199,999, and r ≈ 6.913. Thus

N(t) = 200,000/(1 + 199,999e^(−6.913t)).

To find t when N = 100,000:

100,000 = 200,000/(1 + 199,999e^(−6.913t)),

so 1 + 199,999e^(−6.913t) = 2, e^(−6.913t) = 1/199,999, and t = 1.77 weeks = 12.4 days.

(e) We assume the same population. Let t_N > 0 be the time the article is published, let P = number of people who are aware of the counterrumor, and let P_0 be the number of people who became aware of the counterrumor at time t_N. Then

dP/dt = aP(200,000 − P),  P(t_N) = P_0,

where a is a constant of proportionality.
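The two numbers found in part (d) can be verified directly; a numeric sketch (not part of the text):

```python
import math

r = math.log(199_999 / 199)       # from e**(-r) = 199/199,999
N = lambda t: 200_000 / (1 + 199_999 * math.exp(-r * t))
assert abs(N(1) - 1000) < 1e-6    # one thousand have heard after 1 week
t_half = math.log(199_999) / r    # time at which N reaches 100,000
print(round(r, 3), round(t_half, 2), round(7 * t_half, 1))
# about 6.913, 1.77 weeks, 12.4 days
```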

164 CHAPTER 2 Linearity and Nonlinearity

Semistable Equilibrium

19. y′ = (y − 1)²

We draw upward arrows on the y-axis for y ≠ 1 to indicate the solution is increasing. When y = 1 we have a constant solution.

Because the slope lines have positive slope both above and below the constant solution y(t) ≡ 1, we say that the solution y(t) ≡ 1, or the point 1, is semistable (stable from below, unstable from above). In other words, if a solution is perturbed from the value of 1 to a value below 1, the solution will move back towards 1, but if the constant solution y(t) ≡ 1 is perturbed to a value larger than 1, it will not move back towards 1. Semistable equilibria are customarily noted with half-filled circles.

Gompertz Equation dy/dt = y(a − b ln y)

20. (a) Letting z = ln y and using the chain rule we get

dz/dt = (dz/dy)(dy/dt) = (1/y)(dy/dt).

Hence, the Gompertz equation becomes

dz/dt = a − bz.

(b) Solving this DE for z we find

z(t) = a/b + Ce^(−bt).

Substituting back z = ln y gives

y(t) = e^(a/b) e^(Ce^(−bt)).

Using the initial condition y(0) = y_0, we finally get

C = ln y_0 − a/b.

(c) From the solution in part (b), lim(t→∞) y(t) = e^(a/b) when b > 0; y(t) → ∞ when b < 0.
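That the closed form from part (b) really satisfies the Gompertz equation can be spot-checked with a difference quotient; a sketch with arbitrary illustrative parameter values (a, b, C below are not from the text):

```python
import math

a, b, C = 0.41, 0.18, -2.3      # illustrative values only
y = lambda t: math.exp(a / b) * math.exp(C * math.exp(-b * t))

# Check y' = y*(a - b*ln y) at one point via a centered difference:
t, h = 1.5, 1e-6
lhs = (y(t + h) - y(t - h)) / (2 * h)
rhs = y(t) * (a - b * math.log(y(t)))
assert abs(lhs - rhs) < 1e-5
```

The check works because ln y = a/b + Ce^(−bt), so both sides reduce to −bC e^(−bt) y.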


Fitting the Gompertz Law

21. (a) From Problem 20,

y(t) = e^(a/b) e^(ce^(−bt)),

where c = ln y_0 − a/b. In this case y(0) = 1 and y(2) = 2. We note that y(24) ≈ y(28) ≈ 10, which means the limiting value e^(a/b) has been reached. Thus e^(a/b) = 10, so

a/b = ln 10 ≈ 2.3.

The constant c = ln 1 − 2.3 = −2.3. Hence,

y(t) = 10e^(−2.3e^(−bt))

and

y(2) = 10e^(−2.3e^(−2b)) = 2.

Solving for b:

−2.3e^(−2b) = ln(2/10) ≈ −1.609
e^(−2b) ≈ 1.609/2.3 ≈ 0.6998
−2b = ln 0.6998 ≈ −0.357
b ≈ 0.1785,

and a = 2.3b gives a ≈ 0.4105.

(b) The logistic equation

y′ = ry(1 − y/L)

has solution

y(t) = L/(1 + (L/y_0 − 1)e^(−rt)).

Logistic and Gompertz solutions through y(0) = 1 and y(2) = 2, both leveling off at y = 10 by t = 24.

We have L = 10 and y_0 = 1, so

y(t) = 10/(1 + 9e^(−rt))

and

y(2) = 10/(1 + 9e^(−2r)) = 2.

Solving for r:

9e^(−2r) = 10/2 − 1 = 4
e^(−2r) = 4/9
−2r = ln(4/9) ≈ −0.8109
r ≈ 0.405.
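The three fitted constants can be recomputed in one pass; a quick numeric check (a sketch, not part of the text):

```python
import math

# Gompertz fit: y(2) = 10*exp(-2.3*exp(-2b)) = 2  =>  solve for b
b = -0.5 * math.log(math.log(2 / 10) / -2.3)
a = 2.3 * b
# Logistic fit: y(2) = 10/(1 + 9*exp(-2r)) = 2  =>  exp(-2r) = 4/9
r = 0.5 * math.log(9 / 4)
print(round(b, 4), round(a, 4), round(r, 4))  # about 0.1785 0.4106 0.4055
```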

Autonomous Analysis

22. (a) y′ = y²

(b) There is one semistable equilibrium, at y(t) ≡ 0; it is stable from below and unstable from above.

23. (a) y′ = −y(1 − y)

(b) The equilibrium solutions are y(t) ≡ 0 and y(t) ≡ 1. The solution y(t) ≡ 0 is stable; the solution y(t) ≡ 1 is unstable.

24. (a) y′ = −y(1 − y/L)(1 − y/M); for example, y′ = −y(1 − y)(1 − y/0.5)

(b) The equilibrium points are y = 0, L, and M. y = 0 is stable. y = M is stable if M > L and unstable if M < L. y = L is stable if M < L and unstable if M > L.

25. (a) y′ = y − √y

Note that the DE is only defined for y ≥ 0.

(b) The constant solution y(t) ≡ 0 is stable; the solution y(t) ≡ 1 is unstable.

26. (a) y′ = k(y − 1)², k > 0

(b) The constant solution y(t) ≡ 1 is semistable (unstable above, stable below).

27. (a) y′ = y²(4 − y²)

(b) The equilibrium solution y(t) ≡ 2 is stable, the solution y(t) ≡ −2 is unstable, and the solution y(t) ≡ 0 is semistable.


Stefan’s Law Again

28. T′ = k(M⁴ − T⁴)

The equation tells us that when 0 < T < M, the solution T = T(t) increases because T′ > 0, and when M < T the solution decreases because T′ < 0. Hence, the equilibrium point T(t) ≡ M is stable. We have drawn the direction field of Stefan's equation for M = 3, k = 0.05:

dT/dt = 0.05(3⁴ − T⁴).

T_0 > M gives solutions falling to M; T_0 < M gives solutions rising to M. These observations match intuition and experiment.

Hubbert Peak

29. (a) From even a hand-sketched logistic curve you can graph its slope y′ and find a roughly bell-shaped curve for y′(t). Depending on the scales used, it may be steeper or flatter than the bell curve shown in Fig. 1.3.5.

(b) For a pure logistic curve, the inflection point always occurs when y = L/2. However, if we consider models different from the logistic model that still show similar solutions between 0 and the nonzero equilibrium, it is possible for the inflection point to be closer to 0. When this happens, oil recovery reaches the maximum production rate much earlier.

Of course the logistic model is a crude model of oil production. For example, it doesn't take into consideration the fact that when oil prices are high, many oil wells are placed into production.

If the inflection point is lower than halfway on an approximately logistic curve, the peak on the y′ curve occurs sooner and lower, creating an asymmetric curve for y′.

(c) These differences may or may not be significant to people studying oil production; it depends on what they are looking for. The long-term behavior, however, is the same; the peak just occurs sooner. After the peak occurs, if the model holds, it is downhill insofar as oil production is concerned. Typical skew of peak position is presented in the figures above.

Useful Transformation

30. y′ = ky(1 − y)

Letting

z = y/(1 − y)

yields

dz/dt = (dz/dy)(dy/dt) = [1/(1 − y)²](dy/dt).

Substituting for dy/dt from the original DE yields a new equation,

(1 − y)²(dz/dt) = ky(1 − y),

which gives the result

dz/dt = ky/(1 − y) = kz.

Solving this first-order equation for z = z(t) yields z(t) = ce^(kt), and substituting this into the transformation z = y/(1 − y), we get

y/(1 − y) = ce^(kt).

Finally, solving this for y gives

y(t) = ce^(kt)/(1 + ce^(kt)) = 1/(1 + c_1e^(−kt)),

where c_1 = 1/c = 1/y_0 − 1.

Chemical Reactions

31. x′ = k(100 − x)(50 − x)

The solutions for the given initial conditions are shown on the graph (direction field with stable equilibrium at x = 50 and unstable equilibrium at x = 100). Note that all behaviors are at equilibrium or have flown off scale before t = 0.1! The solution curve for x(0) = 150 is almost vertical.

(a) A solution starting at x(0) = 0 increases and approaches 50.

(b) A solution starting at x(0) = 75 decreases and approaches 50.

(c) A solution starting at x(0) = 150 increases without bound.

Noting the location of the equilibria and the direction field leads to the following conclusions: any x(0) > 100 causes x(t) to increase without bound and fly off scale very quickly. On the other hand, for any x(0) ∈ (0, 100) the solution will approach an equilibrium value of 50, which implies the tiniest amount is sufficient to start the reaction.

If you are looking for a different scenario, you might consider some other modeling options that appear in Problem 32.


General Chemical Reaction of Two Substances dx/dt = k(a − x)^m (b − x)^n, a < b

32. (a), (b) We consider the four cases in which the exponents are even positive integers and/or odd positive integers. In each case, we analyze the sign of the derivative for different values of x. For convenience we pick a = 1, b = 2, k = 1.

• Both exponents even: dx/dt = (1 − x)^even (2 − x)^even.

We have drawn a graph of dx/dt versus x. By drawing arrows to the right when dx/dt is positive and arrows to the left when dx/dt is negative, we have a horizontal phase line for x(t). We also see that both equilibrium solutions x(t) ≡ 1 and x(t) ≡ 2 are unstable, although both are semistable: stable from below and unstable from above.

• Even and odd exponents: dx/dt = (1 − x)^even (2 − x)^odd.

Here x(t) ≡ 1 is unstable, although it is stable from below. The solution x(t) ≡ 2 is stable.

• Odd and even exponents: dx/dt = (1 − x)^odd (2 − x)^even.

Here x(t) ≡ 2 is semistable: stable from above and unstable from below. The solution x(t) ≡ 1 is stable.

• Both exponents odd: dx/dt = (1 − x)^odd (2 − x)^odd.

Here the smaller of the two solutions, x(t) ≡ 1, is stable; the larger solution, x(t) ≡ 2, is unstable.

Solving the Threshold Equation

33. y′ = −ry(1 − y/T)

Introducing backwards time τ = −t yields

dy/dt = (dy/dτ)(dτ/dt) = −dy/dτ.

Hence, if we run the threshold equation

dy/dt = −ry(1 − y/T)

backwards, we get

−dy/dτ = −ry(1 − y/T).

Equivalently, this yields the first-order equation

dy/dτ = ry(1 − y/T),

which is the logistic equation with L = T and t = τ. We know the solution of this logistic equation to be

y(τ) = T/(1 + (T/y_0 − 1)e^(−rτ)).

We can now find the solution of the threshold equation by replacing τ by −t, yielding

y(t) = T/(1 + (T/y_0 − 1)e^(rt)).

Limiting Values for the Threshold Equation

34. y′ = −ry(1 − y/T)

(a) For 0 < y_0 < T, as t → ∞ the denominator of

y(t) = T/(1 + (T/y_0 − 1)e^(rt))

goes to plus infinity, and so y(t) goes to zero.

(b) For y_0 > T the denominator of

y(t) = T/(1 + (T/y_0 − 1)e^(rt))

will reach zero (causing the solution to "blow up") when

1 + (T/y_0 − 1)e^(rt) = 0.

Solving for t gives the location of a vertical asymptote on the ty graph:

t* = (1/r) ln(y_0/(y_0 − T)).
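The blow-up time t* can be checked by confirming that it annihilates the denominator; a sketch with illustrative values r = T = 1, y_0 = 2 (not from the text):

```python
import math

r, T, y0 = 1.0, 1.0, 2.0               # sample values with y0 > T
t_star = (1 / r) * math.log(y0 / (y0 - T))
# The denominator 1 + (T/y0 - 1)*e**(r*t) vanishes exactly at t*:
assert abs(1 + (T / y0 - 1) * math.exp(r * t_star)) < 1e-12
print(round(t_star, 4))  # ln 2, about 0.6931
```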


Pitchfork Bifurcation

35. y′ = αy − y³ = y(α − y²)

(a) For α ≤ 0 the only real root of y(α − y²) = 0 is y = 0. Because

y′ = y(α − y²) < 0 for y > 0

and

y′ = y(α − y²) > 0 for y < 0,

the equilibrium solution y = 0 is stable.

(b) When α > 0 the equation y′ = y(α − y²) = 0 has roots y = 0, ±√α. The points y = ±√α are stable, but y = 0 is unstable, as illustrated by graphing a phase line or direction field of y′ = y(1 − y²).

(c) The bifurcation diagram plots the equilibria against α: a single stable branch y = 0 for α ≤ 0, and for α > 0 the stable branches y = ±√α together with the now-unstable branch y = 0. Pitchfork bifurcation at (0, 0).

Another Bifurcation

36. y′ = y² + by + 1

(a) We find the equilibrium points of the equation by setting y′ = 0 and solving for y. Doing this we get

y = (−b ± √(b² − 4))/2.

We see that for −2 < b < 2 there are no (real) solutions, and thus no equilibrium solutions. For b = −2 we have the equilibrium solution +1, and for b = +2 we have the equilibrium solution −1. For each |b| > 2 we have two equilibrium solutions.

(b) The bifurcation points are at b = −2 and b = +2. As b passes through −2 (increasing), the number of equilibrium solutions changes from 2 to 1 to 0, and when b passes through +2, the number of equilibrium solutions changes from 0 to 1 to 2.

(c) We have drawn some solutions for each of the values b = −3, −2, −1, 0, 1, 2, and 3: for b = −3 there are one unstable and one stable equilibrium; for b = −2 a single semistable equilibrium; for b = −1, 0, and 1 there are no equilibrium points; for b = 2 a single semistable equilibrium; and for b = 3 one stable and one unstable equilibrium.

(d) For b = 2 and b = −2 the single equilibrium is semistable. (Solutions above are repelled; those below are attracted.) For |b| > 2 there are two equilibria; the larger one is unstable and the smaller one is stable. For |b| < 2 there are no equilibria.

(e) The bifurcation diagram shows the location of the equilibrium points for y versus the parameter value b. Solid circles represent stable equilibria; open circles represent unstable equilibria. (Equilibria of y′ = y² + by + 1 versus b.)
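The equilibrium count in parts (a) and (d) can be tabulated directly from the quadratic formula; a small sketch (not part of the text):

```python
import math

def equilibria(b):
    """Real roots of y**2 + b*y + 1 = 0, the equilibria of y' = y**2 + b*y + 1."""
    disc = b * b - 4
    if disc < 0:
        return []
    if disc == 0:
        return [-b / 2]
    s = math.sqrt(disc)
    return [(-b - s) / 2, (-b + s) / 2]

assert len(equilibria(0)) == 0       # |b| < 2: no equilibria
assert equilibria(-2) == [1.0]       # b = -2: single semistable equilibrium
assert equilibria(2) == [-1.0]       # b = +2: single semistable equilibrium
lo, hi = equilibria(3)               # |b| > 2: smaller stable, larger unstable
assert lo < hi < 0
```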

Computer Lab: Bifurcation

37. y′ = ky² + y + 1

(a) Setting y′ = 0 yields two equilibria,

y_e = (−1 ± √(1 − 4k))/(2k),

for k < 1/4; none for k > 1/4; one for k = 1/4.

(b) The phase-plane graphs illustrate the bifurcation.

38. y′ = y² + y + k

(a) Setting y′ = 0 yields two equilibria,

y_e = (−1 ± √(1 − 4k))/2,

for k < 1/4; none for k > 1/4; one for k = 1/4.

(b) The phase-plane graphs illustrate the bifurcation: for k = 1 there are no equilibrium points; for k = 1/4 there is a single semistable equilibrium; for k = −1 there are one stable and one unstable equilibrium.


Computer Lab: Growth Equations

39. y′ = ry(1 − y/L)

We graph the direction field of this equation for L = 1 and r = 0.5, 1, 2, and 3. We keep L fixed because all it does is raise or lower the steady-state solution y = L. We see that the larger the parameter r, the faster the solution approaches the steady state L. In each case y = L is a stable equilibrium and y = 0 is an unstable equilibrium.

40. y′ = −ry(1 − y/T)

See text Figure 2.5.9. The parameter r governs the steepness of the solution curves; the higher r, the more steeply y leaves the threshold level T.

41. y′ = ry(1 − (ln y)/L)

Equilibrium at y = e^L; higher r values give steeper slopes.

42. y′ = re^(−βt) y

For larger β or for larger r, solution curves rise (or fall) more steeply. There is an unstable equilibrium at y = 0; the figure shows the case r = 1, β = 1.

Suggested Journal Entry

43. Student Project


2.6 Systems of DEs: A First Look

Predicting System Behavior

1. (a) x′ = y
y′ = x − 3y

This (linear) system has one equilibrium point at the origin, (x, y) = (0, 0), as do all linear systems. The υ- and h-nullclines are, respectively, y = 0 and y = x/3, as shown in part (b).

(b) The υ- and h-nullclines are drawn on the direction field.

(c) A few solutions along with the vertical and horizontal nullclines are drawn.

(d) The equilibrium point (0, 0) is unstable. All solutions tend quickly to the line y = x/3, then move gradually towards +∞ or −∞ asymptotically along that line. Whether the motion is left or right depends on the initial conditions.

2. (a) x′ = 1 − x − y
y′ = x − y²

Setting x′ = 0 and y′ = 0 gives

υ-nullcline: 1 − x − y = 0
h-nullcline: x − y² = 0.

From the intersection of the two nullclines we find two equilibrium points, shown in the following figures. We can locate them graphically far more easily than algebraically!

(b) The nullclines are drawn on the direction field.

(c) A few solutions are superimposed on the direction field.

(d) The lower equilibrium point, at

(¼(1 + √5)², −½(1 + √5)),

is unstable, and the upper equilibrium, at

(¼(1 − √5)², ½(√5 − 1)),

is stable. Most trajectories spiral counterclockwise toward the first-quadrant equilibrium point. However, if the initial condition is somewhat left of or below the fourth-quadrant equilibrium, they shoot down towards −∞. We suspect a dividing line between these behaviors, and we will find it in Chapter 6.
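The two equilibria can be recovered numerically from the nullclines: substituting x = y² into x + y = 1 gives y² + y − 1 = 0. A quick check (a sketch, not part of the text):

```python
import math

# Roots of y**2 + y - 1 = 0, then x = y**2 on the h-nullcline:
roots = [(-1 + math.sqrt(5)) / 2, (-1 - math.sqrt(5)) / 2]
points = [(y * y, y) for y in roots]
for x, y in points:                 # each point lies on both nullclines
    assert abs(1 - x - y) < 1e-12 and abs(x - y * y) < 1e-12
print([(round(x, 3), round(y, 3)) for x, y in points])
# about [(0.382, 0.618), (2.618, -1.618)]
```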

3. (a) x′ = 1 − x − y
y′ = 1 − x² − y²

Setting x′ = 0 and y′ = 0 gives

υ-nullcline: x + y = 1
h-nullcline: x² + y² = 1.

From the intersection of the two nullclines we find two equilibrium points, (0, 1) and (1, 0).

(b) The nullclines are drawn on the direction field.

(c) A few solutions are superimposed on the direction field.

(d) The equilibrium at (1, 0) is unstable; the equilibrium at (0, 1) is stable. Most trajectories seem to be attracted to the stable equilibrium, but those that approach the lower unstable equilibrium from below or from the right will turn down toward the lower right.

4. x′ = x + y
y′ = 2x + 2y

(a) This (singular) system has an entire line of equilibrium points, the line x + y = 0.

(b) The direction field and the line of unstable equilibrium points are shown at the right.

(c) We superimpose on the direction field a few solutions.

(d) From part (c) we see the equilibrium points on the line x + y = 0 are all unstable. All nonequilibrium trajectories shoot away from the equilibria along straight lines (of slope 2), towards +∞ if the IC is above the line x + y = 0 and towards −∞ if the IC is below x + y = 0.

5. (a) x′ = 4 − x − y
y′ = 3 − x² − y²

Setting x′ = 0 and y′ = 0 gives the nullclines:

υ-nullcline: x + y = 4
h-nullcline: x² + y² = 3.

We find no equilibria because the nullclines do not intersect.

(b) The nullclines are drawn on the direction field.

(c) A few solutions are superimposed on the direction field.

(d) There are no equilibria; all solutions head down to the lower right.

6. (a) x′ = y
y′ = 5x + 3y

This (linear) system has one equilibrium point, at (x, y) = (0, 0), as do all linear systems. The 64-dollar question is: Is it stable? The υ- and h-nullclines, y = 0 and 5x + 3y = 0, are shown in part (b) and indicate that the origin (0, 0) is unstable. Hence, points starting near the origin will leave the origin. We will see later other ways of showing that (0, 0) is unstable.

(b) The nullclines are drawn on the direction field.

(c) The direction field and a few solutions are drawn. Note how the solutions cross the vertical and horizontal nullclines.

(d) We see from the preceding figure that solutions come from infinity along a line (that is, not a nullcline) and then, if they are not exactly on the line, head off along another line, either upwards to the left or downwards to the left. Whether they go up or down depends on whether they initially start above or below the line. It appears that points that start exactly on the line will go to (0, 0). We will see later in Chapter 6, when we study linear systems using eigenvalues and eigenvectors, that the solutions come from infinity on one eigenvector and go to infinity on another eigenvector.

7. (a) x′ = 1 − x − y
y′ = x − y

Setting x′ = y′ = 0 and finding the intersection of the nullclines

h-nullcline: y = x
υ-nullcline: y = 1 − x,

we find one equilibrium point, (1/2, 1/2). The arrows indicate that it is a stable equilibrium.

(b) The nullclines are drawn on the direction field.

(c) A few solutions are superimposed on the direction field.

(d) The equilibrium is stable; all other solutions spiral into it.


8. (a) x′ = x + 2y
y′ = x

This (linear) system has one equilibrium point at the origin (0, 0), as do all linear systems. The υ- and h-nullclines, x + 2y = 0 and x = 0, are shown in part (b) and indicate that the origin (0, 0) is unstable. We will see later other ways to show that the system is unstable.

(b) The nullclines are drawn on the direction field.

(c) A few solutions are superimposed on the direction field.

(d) The equilibrium point (0, 0) is unstable. Other solutions come from the upper left or the lower right, heading toward the origin, but veer off towards ±∞ in the upper right or lower left.

Creating a Predator-Prey Model

9. (a) dR/dt = 0.15R − 0.00015RF
dF/dt = −0.25F + 0.00003125RF

The rabbits reproduce at a natural rate of 15%; their population is diminished by meetings with foxes. The fox population is diminishing at a rate of 25%; this decline is mitigated only slightly by meeting rabbits as prey. Comparing the predator-prey rates in the two populations shows a much larger effect on the rabbit population, which is consistent with the fact that each fox needs several rabbits to survive.

(b) dR/dt = 0.15R − 0.00015RF − 0.1R = 0.05R − 0.00015RF
dF/dt = −0.25F + 0.00003125RF − 0.1F = −0.35F + 0.00003125RF

Both populations are diminished by the harvesting. The equilibrium populations move from (8000, 1000) in part (a) to (11200, 333) in part (b), i.e., more rabbits and fewer foxes if both populations are harvested at the same rate.

In the figures, x and y are measured in thousands; note that the vertical axes have different scales from the horizontal axes. 9(a): Equilibrium at (8, 1). 9(b): Equilibrium at (11.2, 0.3).
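The nontrivial equilibrium of a system R′ = aR − bRF, F′ = −cF + dRF is F = a/b, R = c/d; the two equilibria quoted above follow directly. A quick check (a sketch, not part of the text):

```python
def equilibrium(a, b, c, d):
    # Set aR - bRF = 0 and -cF + dRF = 0 with R, F > 0.
    return c / d, a / b

R, F = equilibrium(0.15, 0.00015, 0.25, 0.00003125)     # part (a)
assert abs(R - 8000) < 1e-6 and abs(F - 1000) < 1e-6
Rh, Fh = equilibrium(0.05, 0.00015, 0.35, 0.00003125)   # part (b), harvested
assert abs(Rh - 11200) < 1e-6 and abs(Fh - 1000 / 3) < 1e-6
```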

Sharks and Sardines with Fishing

10. (a) With fishing, the equilibrium point of the system

x′ = x(a − by − f)
y′ = y(−c + dx − f)

is

x_e = (c + f)/d = c/d + f/d
y_e = (a − f)/b = a/b − f/b.

With fishing we increase the equilibrium of the prey, x_e, by f/d and decrease the equilibrium of the predator, y_e, by f/b.

Using the parameters from Example 3, we set a = 2, b = 1, c = 3, d = 0.5; the new equilibrium point of the fished model is

x_e = (c + f)/d = c/d + f/d = 6 + 2f
y_e = (a − f)/b = a/b − f/b = 2 − f.

The trajectories (shark y versus sardine x) are closed curves representing periodic motion of both sharks and sardines. The trajectories look like the trajectories of the unfished case in Example 3, except the equilibrium point has moved to the right (more prey) and down (fewer predators).

(b) With the parameters in part (a) and f = 0.5, the equilibrium point is (7, 1.5). This compares with the equilibrium point (6, 2) in the unfished case. As the fishing rate f increases from 0 to 2, the equilibrium point moves along the line from the unfished equilibrium at (6, 2) to (10, 0). Hence, fishing each population at the same rate benefits the sardines (x) and drives the sharks (y) to extinction. This is illustrated in the figure.

(c) You should fish for sardines when the sardine population is increasing and sharks when the shark population is increasing. In both cases, more fishing tends to move the populations closer to equilibrium while maintaining higher populations in the low parts of the cycle.

(d) If we look at the insecticide model and assume both the good guys (predators) and bad guys (prey) are harvested at the same rate, the good guys will also be diminished and the bad guys peak again. As f → 1 (try f = 0.8) the predators get decimated first; then the prey can peak again. If you look at part (a), you see that the predator/prey model does not allow either population to go below zero, as the x- and y-axes are solutions and solutions move along the axes; thus it is impossible for other solutions to cross either of these axes. You might continue this exploration with the IDE tool, Lotka-Volterra with Harvest, as in Problem 24.
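The straight-line motion of the fished equilibrium from (6, 2) to (10, 0) can be tabulated; a quick check with the Example 3 parameters (a sketch, not part of the text):

```python
def fished_equilibrium(f, a=2.0, b=1.0, c=3.0, d=0.5):
    # x_e = (c + f)/d = 6 + 2f,  y_e = (a - f)/b = 2 - f
    return (c + f) / d, (a - f) / b

assert fished_equilibrium(0.0) == (6.0, 2.0)    # unfished case
assert fished_equilibrium(0.5) == (7.0, 1.5)    # part (b)
assert fished_equilibrium(2.0) == (10.0, 0.0)   # sharks driven to extinction
```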


Analyzing Competition Models

11. dR/dt = R(1200 − 2R − 3S), dS/dt = S(500 − R − S)

Rabbits are reproducing at the astonishing rate of 1200 per rabbit per unit time, in the absence of competition. However, crowding of rabbits decreases the population at a rate double the population. Furthermore, competition by sheep for the same resources diminishes the rabbit population by three times the number of sheep!

Sheep, on the other hand, reproduce at a far slower (but still astonishing) rate of 500 per sheep per unit time. Competition among themselves and with rabbits diminishes merely one to one with the number of rabbits and sheep.

Equilibria occur at (0, 0), (0, 500), (600, 0), and (300, 200).

The equilibria on the axes that are not the origin are the points toward which the populations head. Which species dies out depends on where they start. See Figure, where x and y are measured in hundreds.

12. dR/dt = R(1200 − 3R − 2S), dS/dt = S(500 − R − S)

The explanations of the equations are the same as those in Problem 11, except that the rabbit population is affected more by the crowding of its own population and less by the number of sheep.

Equilibria occur at (0, 0), (0, 500), (400, 0), and (200, 300).

In this system the equilibria on the axes are all unstable, so the populations always head toward a coexistence equilibrium at (200, 300). See Figure, where x and y are measured in hundreds.
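The coexistence equilibrium in Problem 12 solves the linear system 3R + 2S = 1200, R + S = 500; a quick check by elimination (a sketch, not part of the text):

```python
# Cramer's rule on [[3, 2], [1, 1]] [R, S] = [1200, 500]; determinant = 1.
det = 3 * 1 - 2 * 1
R = (1 * 1200 - 2 * 500) / det
S = (3 * 500 - 1 * 1200) / det
assert (R, S) == (200.0, 300.0)
# Both per-capita growth rates vanish there:
assert 1200 - 3 * R - 2 * S == 0 and 500 - R - S == 0
```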

Finding the Model

Examples of appropriate models are as follows, with real positive coefficients.

13. x′ = ax − bx² − dxy − fx
y′ = −cy + dxy

14. x′ = ax + bxy
y′ = −cy + dxy − eyz
z′ = fz − gx² − hyz

15. x′ = ax − bx² − cxy − dxz
y′ = ey − fy² + gxy
z′ = −hz + kxz

Host-Parasite Models

16. (a) A suggested model is

H′ = aH − cH/(P + 1)
P′ = −bP + dHP,

where a, b, c, and d are positive parameters. Here a species of beetle (parasite) depends on a certain species of tree (host) for survival. Note that if the beetle were so effective as to wipe out the entire population of trees, then it would die out itself, which is reflected in our model (note the differential equation in P). On the other hand, in the absence of the beetle, the host tree may or may not die out, depending on the sizes of the parameters a and c. We would probably pick a > c, so the host population would increase in the absence of the parasite. Note too that the model says that when the parasite population P gets large, it will not destroy the host entirely, as H/(P + 1) becomes small. The modeler might want to estimate the values of the parameters a, b, c, d so the solution fits observed data. The modeler would also like to know the qualitative behavior of (P, H) in the PH plane.

Professor Larry Turyn of Wright State University argues for a different model,

H′ = CH − HP,

to better account for the case of very small P.

(b) Many bacteria are parasitic on external and internal body surfaces, some invading inner tissue and causing diseases such as typhoid fever, tuberculosis, and pneumonia. It is important to construct models of the dynamics of these complex organisms.

Competition

17. (a) x′ = x(4 − 2x − y)
y′ = y(4 − x − 2y)

Setting x′ = 0 and y′ = 0, we find

υ-nullclines: 2x + y = 4, x = 0
h-nullclines: x + 2y = 4, y = 0.

Equilibrium points: (0, 0), (0, 2), (2, 0), (4/3, 4/3). The directions of the solution curves are shown in the figure.

(b) It can be seen from the figure that the equilibrium points (0, 0), (0, 2), and (2, 0) are unstable. Only the point (4/3, 4/3) is stable, because all solution curves nearby point toward it.

(c) Some solution curves are shown in the figure.

(d) Because all the solution curves eventually reach the stable equilibrium at (4/3, 4/3), the two species described by this model can coexist.


18. (a) x′ = x(1 − x − y)
y′ = y(2 − x − y)

Setting x′ = 0 and y′ = 0, we find

υ-nullclines: x + y = 1, x = 0
h-nullclines: x + y = 2, y = 0.

Equilibrium points: (0, 0), (0, 2), (1, 0). The directions of the solution curves are shown in the figure.

(b) It can be seen from the figure that the equilibrium points (0, 0) and (1, 0) are unstable; the point (0, 2) is stable, because all solution curves nearby point toward it.

(c) Some solution curves are shown in the figure.

(d) Because all the solution curves eventually reach the stable equilibrium at (0, 2), the x species always dies out, and the two species described by this model cannot coexist.

19. (a) x′ = x(4 − x − 2y)
y′ = y(1 − 2x − y)

Setting x′ = 0 and y′ = 0, we find

υ-nullclines: x + 2y = 4, x = 0
h-nullclines: 2x + y = 1, y = 0.

Equilibrium points: (0, 0), (0, 1), (4, 0). The directions of the solution curves are shown in the figure.

(b) It can be seen from the figure that the equilibrium points (0, 0) and (0, 1) are unstable; the point (4, 0) is stable, because all solution curves nearby point toward it.

(c) Some solution curves are shown in the figure.

(d) Because all the solution curves eventually reach the stable equilibrium at (4, 0), the y species always dies out, and the two species described by this model cannot coexist.


20. (a) x′ = x(2 − x − 2y)
y′ = y(2 − 2x − y)

Setting x′ = 0 and y′ = 0, we find

υ-nullclines: x + 2y = 2, x = 0
h-nullclines: 2x + y = 2, y = 0.

Equilibrium points: (0, 0), (0, 2), (2, 0), (2/3, 2/3). The directions of the solution curves are shown in the figure.

(b) It can be seen from the figure that the equilibrium points (0, 0) and (2/3, 2/3) are unstable; the points (0, 2) and (2, 0) are stable, because all nearby arrows point toward them.

(c) Some solution curves are shown in the figure.

(d) Because all the solution curves eventually reach one of the stable equilibria at (0, 2) or (2, 0), the two species described by this model cannot coexist, unless they are exactly at the unstable equilibrium point (2/3, 2/3). Which species dies out is determined by the initial conditions.

Simpler Competition

21. x′ = x(a − by)
    y′ = y(c − dx)
Setting x′ = 0, we find the υ-nullclines are the vertical line x = 0 and the horizontal line y = a/b. Setting y′ = 0, we find the h-nullclines are the horizontal line y = 0 and the vertical line x = c/d. The equilibrium points are (0, 0) and (c/d, a/b). By observing the signs of x′ and y′ we find
    x′ > 0, y′ > 0 when x < c/d, y < a/b
    x′ < 0, y′ < 0 when x > c/d, y > a/b
    x′ < 0, y′ > 0 when x < c/d, y > a/b
    x′ > 0, y′ < 0 when x > c/d, y < a/b.
Hence, both equilibrium points are unstable. We can see from the direction field (drawn with a = b = c = d = 1) that one of the two species, depending on the initial conditions, goes to infinity and the other toward extinction. One can read the initial values for these curves directly from the graph.
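The runaway behavior just described can be seen numerically: with a = b = c = d = 1, d/dt ln(y/x) = y − x, so whichever species starts ahead pulls away. A sketch under arbitrary choices of starting point and horizon:

```python
# Euler simulation of x' = x(1 - y), y' = y(1 - x)  (a = b = c = d = 1).
# Starting with y > x, the y species runs away and x collapses toward 0.
x, y = 0.5, 0.6
dt = 0.0005
for _ in range(20000):   # integrate to t = 10
    x, y = x + dt * x * (1 - y), y + dt * y * (1 - x)
```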

Nullcline Patterns

22. (a–e) When the υ-nullcline lies above the h-nullcline, there are three equilibrium points in the first quadrant: (0, 0), (0, d/f), and (a/b, 0). The points (0, 0) and (0, d/f) are unstable and (a/b, 0) is stable. Hence, only population x survives.

(a)–(b) [Figure: nullclines and equilibria; the υ-nullcline has intercepts a/c on the y-axis and a/b on the x-axis, the h-nullcline has intercepts d/f and d/e.]
(c)–(d) [Figure: sample trajectories when the υ-nullcline is above the h-nullcline.]


23. (a–e) When the h-nullcline lies above the υ-nullcline, there are three equilibrium points in the first quadrant: (0, 0), (a/b, 0), and (0, d/f). The points (0, 0) and (a/b, 0) are unstable and (0, d/f) is stable. Hence, only population y survives.

(a)–(b) [Figure: nullclines; intercepts a/b, d/e on the x-axis and d/f, a/c on the y-axis.]
(c)–(d) [Figure: sample trajectories when the h-nullcline is above the υ-nullcline.]

24. (a–e) When the two nullclines intersect as they do in the figure, there are four equilibrium points in the first quadrant: (0, 0), (a/b, 0), (0, d/f), and (x_e, y_e), where (x_e, y_e) is the intersection of the lines bx + cy = a and ex + fy = d. Analyzing the sign of the derivatives in the four regions of the first quadrant, we find that (x_e, y_e) is stable and the others unstable. Hence, the two populations can coexist.

(a)–(b) [Figure: nullclines and equilibria.]
(c)–(d) [Figure: typical trajectories when the nullclines intersect and the slope of the vertical (υ) nullcline is more negative.]


25. (a–e) When the two nullclines intersect as they do in the figure, there are four equilibrium points in the first quadrant: (0, 0), (a/b, 0), (0, d/f), and (x_e, y_e), where (x_e, y_e) is the intersection of the lines bx + cy = a and ex + fy = d. Analyzing the sign of the derivatives in the four regions of the first quadrant, we find that (a/b, 0) and (0, d/f) are stable and the other two unstable. Hence, only one of the two populations survives, and which one survives depends on the initial conditions. See the figures: for initial conditions in the upper region y survives; for initial conditions in the lower region, x survives.

(a)–(b) [Figure: nullclines and equilibria.]
(c)–(d) [Figure: typical trajectories when the nullclines intersect and the slope of the h-nullcline is more negative.]

Unfair Competition

26. x′ = ax(1 − bx) − cxy
    y′ = dy − exy
Setting x′ = y′ = 0, we find three equilibrium points:
    (0, 0),  (1/b, 0),  and  (d/e, a(e − bd)/(ce)).
The point (0, 0) corresponds to both populations becoming extinct, the point (1/b, 0) corresponds to the second population becoming extinct, and the point (d/e, a(e − bd)/(ce)) corresponds to either a stable coexistence point or an unstable point. If we take the special case where 1/b > d/e, e.g.,
    x′ = x(1 − x) − xy
    y′ = 0.5y − xy,
where a = b = c = e = 1 and d = 0.5, we have the equilibrium points (0, 0), (1, 0), and (0.5, 0.5). If we draw the two nullclines (υ-nullcline: y = 1 − x; h-nullcline: x = 0.5), we see that the equilibrium point (0.5, 0.5) is unstable. Hence, the two species cannot coexist.

[Figures: nullclines and equilibria, and sample trajectories, for 1/b > d/e; axes 0 ≤ x ≤ 1.5, 0 ≤ y ≤ 1.5, with x-intercepts at d/e and 1/b.]

The reader should check separately the cases 1/b = d/e and 1/b < d/e.
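The three equilibria of the special case above can be confirmed by direct substitution. A minimal check (the right-hand sides below are the special-case system with a = b = c = e = 1, d = 0.5):

```python
# Right-hand sides of x' = x(1 - x) - xy, y' = 0.5y - xy.
def f(x, y):
    return x * (1 - x) - x * y, 0.5 * y - x * y

equilibria = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5)]
residuals = [f(x, y) for (x, y) in equilibria]   # all should be (0, 0)
```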

Basins of Attraction

27. Adding shading to the graph obtained in Problem 2 shows the basin of the stable equilibrium at
    ((1/4)(1 − √5)², (1/2)(√5 − 1)) ≈ (0.38, 0.60).

28. Adding shading to the graph obtained in Problem 3 shows the basin of the stable equilibrium at (0, 1).

29. Adding shading to the graph obtained in Problem 18 shows that the entire first and second quadrants are the basin of attraction for the stable equilibrium at (0, 2).

30. The graph obtained in Problem 21 has no stable equilibrium, but we can say that there are three basins:
For y > x and y > 0, trajectories are attracted to (0, ∞).
For x > y and x > 0, trajectories are attracted to (∞, 0).
For x < 0 and y < 0, trajectories are attracted to (−∞, −∞).


Computer Lab: Parameter Investigation

31. Hold three of the parameters constant and observe how the fourth parameter affects the behavior of the two species. See if the behavior makes sense in your understanding of the model. Keep in mind that the parameter a_R is a measure of how well the prey grows in the absence of the predator (large a_R for rabbits); a_F is a measure of how fast the predator population declines when the prey is absent (large a_F if the given prey is the only source of food for the predator); c_R is a measure of how fast the prey population declines per number of prey and predators; and c_F is a measure of how fast the predator population increases per number of prey and predators. Even if you are not a biology major, you may still ponder the relative sizes of the four parameters in the two predator–prey systems: foxes and rabbits, and ladybugs and aphids. You can use these explanations to reach the same conclusions as in Problem 9.

Computer Lab: Competition Outcomes

32. (a) Using the IDE software, hold all parameters fixed except one and observe how that last parameter affects the solution. See if the behavior of the two species makes sense in your understanding of the model. Play a mind game and predict whether there will be coexistence between the species, whether one becomes extinct, and so on, before you make the parameter change.

Note that in the IDE tool Competitive Exclusion there are six parameters: K1, B1, r1, K2, B2, and r2. The parameters in our text, called a_R, b_R, c_R, a_F, b_F, and c_F, enter the equations slightly differently. The reason for this discrepancy is the way the parameters in the IDE software affect the two isoclines, called the N1 and N2 isoclines in the IDE software.

By changing the parameters K1 and K2 in the IDE software, you simply move the respective isoclines in a parallel direction. The parameters B1, B2 change the slopes of the nullclines. And finally, the parameters r1, r2 do not affect the nullclines, but affect the direction field, or the transient part of the solution.

Your hand-sketched phase planes for the four cases should qualitatively look like the following four pictures, with the basins of attraction colored for each equilibrium.

Case 1: Population x dies out
    x′ = x(1 − x − y)
    y′ = y(2 − x − y)
Case 2: Population y dies out
    x′ = x(4 − x − 2y)
    y′ = y(1 − 2x − y)
Case 3: Populations coexist
    x′ = x(2 − 2x − y)
    y′ = y(2 − x − 2y)
Case 4: One of the populations dies out
    x′ = x(2 − x − 2y)
    y′ = y(2 − 2x − y)

Of the four different scenarios for the competitive model, in only one (Case 3) can both species coexist. In the other three cases one of the two dies out. In Case 4 the species that dies out depends on the initial conditions, and in Cases 1 and 2 one species dies out regardless of the initial conditions. Note too that in all four cases, if one population initially starts at zero, it remains at zero.

(b) The basins of attraction for each stable equilibrium are shown for each of the four cases (Cases 1–4 figures). Compare with the figures in part (a).
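The qualitative outcomes of the four cases can be reproduced numerically. A sketch under arbitrary choices of starting point, step size, and horizon (forward Euler; the systems are the four cases listed above):

```python
def run(fx, fy, x, y, dt=0.0005, steps=80000):
    # Forward-Euler integration of x' = fx(x, y), y' = fy(x, y) to t = 40.
    for _ in range(steps):
        x, y = x + dt * fx(x, y), y + dt * fy(x, y)
    return x, y

start = (0.5, 0.6)   # arbitrary interior start, with y slightly ahead
case1 = run(lambda x, y: x*(1 - x - y),   lambda x, y: y*(2 - x - y),   *start)
case2 = run(lambda x, y: x*(4 - x - 2*y), lambda x, y: y*(1 - 2*x - y), *start)
case3 = run(lambda x, y: x*(2 - 2*x - y), lambda x, y: y*(2 - x - 2*y), *start)
case4 = run(lambda x, y: x*(2 - x - 2*y), lambda x, y: y*(2 - 2*x - y), *start)
```

Case 1 should end near (0, 2), Case 2 near (4, 0), Case 3 near the coexistence point (2/3, 2/3), and Case 4 near (0, 2) because this start lies in the basin of (0, 2).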

Suggested Journal Entry

33. Student Project

CHAPTER 3 Linear Algebra

3.1 Matrices: Sums and Products

Do They Compute?

(Rows are separated by semicolons below; e.g., [−2 0 6; 4 2 4; −2 0 2] denotes the 3 × 3 matrix with rows (−2, 0, 6), (4, 2, 4), and (−2, 0, 2).)

1. 2A = [−2 0 6; 4 2 4; −2 0 2]
2. A + 2B = [1 6 3; 2 3 2; −1 0 3]
3. 2C − D: the matrices are not compatible
4. AB = [−1 −3 3; 2 7 2; −1 −3 1]
5. BA = [5 3 9; 2 1 2; −1 0 1]
6. CD = [3 −1 0; 8 −1 2; 9 2 6]
7. DC = [1 −1; 6 7]
8. (DC)ᵀ = [1 6; −1 7]
9. CᵀD: the matrices are not compatible
10. DᵀC: the matrices are not compatible
11. A² = [−2 0 0; −2 1 10; 0 0 −2]
12. AD: the matrices are not compatible
13. A − I₃ = [−2 0 3; 2 0 2; −1 0 0]
14. 4B − 3I₃ = [1 12 0; 0 1 0; 0 0 1]
15. C − I₃: the matrices are not compatible
16. AC = [2 9; 6 7; 0 3]
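The arithmetic in Problems 1–16 can be spot-checked with a bare-bones matrix multiply. The entries of A, B, C, D below are as recovered above from the garbled printing, so treat them as assumptions:

```python
def matmul(X, Y):
    # Standard row-by-column product of two nested-list matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[-1, 0, 3], [2, 1, 2], [-1, 0, 1]]
B = [[1, 3, 0], [0, 1, 0], [0, 0, 1]]
C = [[1, 0], [2, 1], [1, 3]]
D = [[3, -1, 0], [2, 1, 2]]

AB, BA, DC = matmul(A, B), matmul(B, A), matmul(D, C)
```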


More Multiplication Practice

17. [1 0 −2][a b; c d; e f] = [1·a + 0·c − 2·e, 1·b + 0·d − 2·f] = [a − 2e, b − 2f]

18. [a b; c d][d −b; −c a] = [ad − bc, −ab + ba; cd − dc, −cb + da] = [ad − bc, 0; 0, ad − bc]

19. [0 2; 1 1][−1/2 1; 1/2 0] = [0·(−1/2) + 2·(1/2), 0·1 + 2·0; 1·(−1/2) + 1·(1/2), 1·1 + 1·0] = [1 0; 0 1]

20. [0 1 0][a b c; d e f; g h k] = [d e f]

21. [0 1][a b c; d e f][1 1 0] = [d e f][1 1 0], which is not possible (a 1 × 3 matrix cannot multiply a 1 × 3 matrix)

22. [1 1 0][a b; c d; e f][1; 1] = [a + c, b + d][1; 1] = [a + b + c + d]

Rows and Columns in Products

23. (a) 5 columns (b) 4 rows (c) 6 × 4

Which Rules Work for Matrix Multiplication?

24. Counterexample: A = [1 1; 1 0], B = [2 −1; 0 1].
(A + B)(A − B) = [3 0; 1 1][−1 2; 1 −1] = [−3 6; 0 1]
A² − B² = [2 1; 1 1] − [4 −3; 0 1] = [−2 4; 1 0]
so (A + B)(A − B) ≠ A² − B².

25. Counterexample (again due to the fact that AB ≠ BA for most matrices): A = [1 1; 1 0], B = [2 −1; 0 1].
(A + B)² = [3 0; 1 1]² = [9 0; 4 1]
AB = [2 0; 2 −1]
A² + 2AB + B² = [2 1; 1 1] + 2[2 0; 2 −1] + [4 −3; 0 1] = [10 −2; 5 0]
so (A + B)² ≠ A² + 2AB + B².
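The two counterexamples above are easy to replay in code. A minimal sketch with the same A and B:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matadd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def matsub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[1, 1], [1, 0]]
B = [[2, -1], [0, 1]]

lhs24 = matmul(matadd(A, B), matsub(A, B))   # (A + B)(A - B)
rhs24 = matsub(matmul(A, A), matmul(B, B))   # A^2 - B^2
lhs25 = matmul(matadd(A, B), matadd(A, B))   # (A + B)^2
AB = matmul(A, B)
rhs25 = matadd(matadd(matmul(A, A), matadd(AB, AB)), matmul(B, B))  # A^2 + 2AB + B^2
```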


26. Proof: (I + A)² = (I + A)(I + A) = I(I + A) + A(I + A)   (distributive property)
= I² + IA + AI + A² = I + A + A + A²   (identity property)
= I + 2A + A²

27. Proof: (A + B)² = (A + B)(A + B) = A(A + B) + B(A + B)   (distributive property)
= A² + AB + BA + B²   (distributive property)

Find the Matrix

28. Set
[a b; c d][1 2; 3 4] = [a + 3b, 2a + 4b; c + 3d, 2c + 4d] = [0 0; 0 0].
Then a + 3b = 0 and 2a + 4b = 0 (i.e., a + 2b = 0), so b = 0 and a = 0; likewise c + 3d = 0 and 2c + 4d = 0 (i.e., c + 2d = 0), so d = 0 and c = 0. Therefore no nonzero matrix A will work.

29. B must be 3 × 2. Set
[1 2 3; 0 1 0][a b; c d; e f] = [1 0; 0 1].
Then a + 2c + 3e = 1, c = 0, b + 2d + 3f = 0, d = 1, so a = 1 − 3e and b = −2 − 3f. B is any matrix of the form
[1 − 3e, −2 − 3f; 0, 1; e, f]
for any real numbers e and f.

30. Set
[1 2; 4 1][a b; c d] = [2 0; 1 4],
so a + 2c = 2, 4a + c = 1, b + 2d = 0, 4b + d = 4. Thus c = 1, a = 0, b = 8/7, d = −4/7, and
[a b; c d] = [0, 8/7; 1, −4/7].
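The matrix just found can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

M = [[1, 2], [4, 1]]
X = [[F(0), F(8, 7)], [F(1), F(-4, 7)]]   # the matrix [a b; c d] found above
product = matmul(M, X)                     # should equal [2 0; 1 4]
```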

Commuters

31. [a 0; 0 a] = a[1 0; 0 1] = aI₂, so the matrix commutes with every 2 × 2 matrix.


32. [1 k; k 1][a b; c d] = [a + kc, b + kd; ka + c, kb + d]
[a b; c d][1 k; k 1] = [a + bk, ak + b; c + dk, ck + d]
Equating entries: a + kc = a + bk gives k(c − b) = 0, so c = b since k ≠ 0; b + kd = ak + b gives k(d − a) = 0, so d = a since k ≠ 0. The remaining entries, ka + c = c + dk and kb + d = ak + d, give the same results. Therefore any matrix of the form [a b; b a], a, b ∈ R, commutes with [1 k; k 1]. To check:
[1 k; k 1][a b; b a] = [a + kb, b + ka; ka + b, kb + a] = [a b; b a][1 k; k 1].

33. [a b; c d][0 1; 1 0] = [b a; d c]
[0 1; 1 0][a b; c d] = [c d; a b]
Therefore b = c and a = d, so any matrix of the form [a b; b a], a, b ∈ R, commutes with [0 1; 1 0].
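The conclusions of Problems 32–33 can be checked with sample values (a = 5, b = 7, k = 3 below are arbitrary choices):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

M = [[5, 7], [7, 5]]   # the form [a b; b a] with a = 5, b = 7
K = [[1, 3], [3, 1]]   # [1 k; k 1] with k = 3
S = [[0, 1], [1, 0]]

commutes_K = matmul(M, K) == matmul(K, M)
commutes_S = matmul(M, S) == matmul(S, M)
```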

Products with Transposes

34. With A = [1; −1] and B = [1; 4]:
(a) AᵀB = [1 −1][1; 4] = [−3]
(b) ABᵀ = [1; −1][1 4] = [1 4; −1 −4]
(c) BᵀA = [1 4][1; −1] = [−3]
(d) BAᵀ = [1; 4][1 −1] = [1 −1; 4 −4]

Reckoning

35. Let a_ij, b_ij be the ijth elements of matrices A and B, respectively, for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
A − B = [a_ij − b_ij] = [a_ij + (−1)b_ij] = A + (−1)B

36. Let a_ij, b_ij be the ijth elements of matrices A and B, respectively, for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
A + B = [a_ij + b_ij] = [b_ij + a_ij]   (from the commutative property of real numbers)
= B + A


37. Let a_ij be the ijth element of matrix A and let c and d be any real numbers.
(c + d)A = [(c + d)a_ij] = [ca_ij + da_ij]   (from the distributive property of real numbers)
= c[a_ij] + d[a_ij] = cA + dA

38. Let a_ij, b_ij be the ijth elements of A and B, respectively, for 1 ≤ i ≤ m, 1 ≤ j ≤ n, and let c be any real number. The result again follows from the distributive property of real numbers:
c(A + B) = [c(a_ij + b_ij)] = [ca_ij + cb_ij] = c[a_ij] + c[b_ij] = cA + cB

Properties of the Transpose

Rather than grinding out the proofs of Problems 39–42, we make the following observations:

39. (Aᵀ)ᵀ = A. Interchanging the rows and columns of a matrix two times reproduces the original matrix.

40. (A + B)ᵀ = Aᵀ + Bᵀ. Add two matrices and then interchange the rows and columns of the resulting matrix; you get the same result as first interchanging the rows and columns of the matrices and then adding.

41. (kA)ᵀ = kAᵀ. It makes no difference whether you multiply each element of matrix A by k before or after rearranging the elements to form the transpose.

42. (AB)ᵀ = BᵀAᵀ. This identity is not so obvious. Due to lack of space we verify it for 2 × 2 matrices; the verification for 3 × 3 and higher-order matrices follows along exactly the same lines. With A = [a11 a12; a21 a22] and B = [b11 b12; b21 b22],
AB = [a11b11 + a12b21, a11b12 + a12b22; a21b11 + a22b21, a21b12 + a22b22],
so
(AB)ᵀ = [a11b11 + a12b21, a21b11 + a22b21; a11b12 + a12b22, a21b12 + a22b22].
On the other hand,
BᵀAᵀ = [b11 b21; b12 b22][a11 a21; a12 a22] = [a11b11 + a12b21, a21b11 + a22b21; a11b12 + a12b22, a21b12 + a22b22].
Hence, (AB)ᵀ = BᵀAᵀ for 2 × 2 matrices.
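The identity in Problem 42 can also be exercised numerically; the matrices below are arbitrary:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]

left = transpose(matmul(A, B))               # (AB)^T
right = matmul(transpose(B), transpose(A))   # B^T A^T
```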


Transposes and Symmetry

43. If the matrix A = [a_ij] is symmetric, then a_ij = a_ji. Hence Aᵀ = [a_ji] is symmetric, since a_ji = a_ij.

Symmetry and Products

44. We pick at random the two symmetric matrices
A = [0 2; 2 1],  B = [3 1; 1 1],
which give
AB = [0 2; 2 1][3 1; 1 1] = [2 2; 7 3].
This is not symmetric. In fact, if A and B are symmetric matrices, we have
(AB)ᵀ = BᵀAᵀ = BA,
which says the product of symmetric matrices A and B is symmetric only when the matrices commute (i.e., AB = BA).

Constructing Symmetry

45. We verify the statement that A + Aᵀ is symmetric for any 2 × 2 matrix; the general proof follows along the same lines.
A + Aᵀ = [a11 a12; a21 a22] + [a11 a21; a12 a22] = [2a11, a12 + a21; a21 + a12, 2a22],
which is clearly symmetric.

More Symmetry

46. Let
A = [a11 a12; a21 a22; a31 a32].
Hence, we have
AᵀA = [a11 a21 a31; a12 a22 a32][a11 a12; a21 a22; a31 a32] = [A11 A12; A21 A22],
where
A11 = a11² + a21² + a31²
A12 = a11a12 + a21a22 + a31a32
A21 = a12a11 + a22a21 + a32a31
A22 = a12² + a22² + a32².
Note A12 = A21, which means AᵀA is symmetric. We could verify the same result for 3 × 3 matrices.


Trace of a Matrix

47. Tr(A + B) = (a11 + b11) + ⋯ + (ann + bnn) = (a11 + ⋯ + ann) + (b11 + ⋯ + bnn) = Tr(A) + Tr(B)

48. Tr(cA) = ca11 + ⋯ + cann = c(a11 + ⋯ + ann) = cTr(A)

49. Tr(Aᵀ) = Tr(A). Taking the transpose of a (square) matrix does not alter the diagonal elements, so Tr(Aᵀ) = Tr(A).

50. Tr(AB) = (a11b11 + ⋯ + a1nbn1) + ⋯ + (an1b1n + ⋯ + annbnn). Regrouping the same products by the diagonal entries of BA gives
(b11a11 + ⋯ + b1nan1) + ⋯ + (bn1a1n + ⋯ + bnnann) = Tr(BA).
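The identity of Problem 50 holds even when AB ≠ BA; a small check with arbitrary matrices:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]

tr_ab = trace(matmul(A, B))
tr_ba = trace(matmul(B, A))
noncommuting = matmul(A, B) != matmul(B, A)
```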

Matrices Can Be Complex

51. A + 2B = [3 + i, 0; 2 + 4i, 4 − i]
52. AB = [−3 + i, −1 + i; 8 + 4i, 5 − 3i]
53. BA = [1 − i, −3; 4i, 1 − i]
54. A² = [6i, 4 + 6i; 6 − 4i, −5 − 8i]
55. iA = [−1 + i, −2; 2i, 3 + 2i]
56. A − 2B = [−1 + i, 4i; 2 − 4i, −5i]
57. Bᵀ = [1, 2i; −i, 1 + i]
58. Tr(B) = 2 + i
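Python's built-in complex type makes Problems 51–58 easy to replay. The entries of A and B below are as recovered from the garbled printing, so treat them as assumptions; with these values the products come out exactly as listed above:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1 + 1j, 2j], [2, 2 - 3j]]
B = [[1, -1j], [2j, 1 + 1j]]

AB = matmul(A, B)
A2 = matmul(A, A)
trB = B[0][0] + B[1][1]
```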

Real and Imaginary Components

59. A = [1 + i, 2i; 2, 2 − 3i] = [1 0; 2 2] + i[1 2; 0 −3]
    B = [1, −i; 2i, 1 + i] = [1 0; 0 1] + i[0 −1; 2 1]

Square Roots of Zero

60. If we assume that
A = [a b; c d]
is a square root of [0 0; 0 0], then we must have
A² = [a b; c d][a b; c d] = [a² + bc, ab + bd; ac + cd, bc + d²] = [0 0; 0 0],
which implies the four equations
a² + bc = 0
ab + bd = 0
ac + cd = 0
bc + d² = 0.
From the first and last equations, we have a² = d². We now consider two cases. First assume a = d; the middle two equations then give b = 0, c = 0, and hence a = 0, d = 0. The other condition, a = −d, gives no condition on b and c, so we seek a matrix of the form (picking a = 1, d = −1 for simplicity)
[1 b; c −1][1 b; c −1] = [1 + bc, 0; 0, bc + 1].
Hence, in order for this to be the zero matrix, we must have b = −1/c, and so
[1, −1/c; c, −1][1, −1/c; c, −1] = [0 0; 0 0].
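The family of square roots of zero just found is easy to verify for any sample c; exact rational arithmetic avoids round-off:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

c = F(2)                           # any nonzero c works
N = [[F(1), -1 / c], [c, F(-1)]]   # the matrix [1, -1/c; c, -1]
N_squared = matmul(N, N)           # should be the zero matrix
```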

Zero Divisors

61. No, AB = 0 does not imply that A = 0 or B = 0. For example, the product
[1 0; 0 0][0 0; 0 1] = [0 0; 0 0]
is the zero matrix, but neither factor is itself the zero matrix.

Does Cancellation Work?

62. No. A counterexample:
[0 0; 0 1][1 2; 0 4] = [0 0; 0 4] = [0 0; 0 1][0 0; 0 4], but [1 2; 0 4] ≠ [0 0; 0 4].


Taking Matrices Apart

63. (a) A = [A₁ A₂ A₃] = [1 5 2; −1 0 3; 2 4 7], x = [2; 4; 3],
where A₁, A₂, A₃ are the three columns of the matrix A and x₁ = 2, x₂ = 4, x₃ = 3 are the elements of x. We can write
Ax = [1·2 + 5·4 + 2·3; −1·2 + 0·4 + 3·3; 2·2 + 4·4 + 7·3] = 2[1; −1; 2] + 4[5; 0; 4] + 3[2; 3; 7] = x₁A₁ + x₂A₂ + x₃A₃.

(b) We verify the fact for a 3 × 3 matrix; the general n × n case follows along the same lines.
Ax = [a11 a12 a13; a21 a22 a23; a31 a32 a33][x₁; x₂; x₃] = [a11x₁ + a12x₂ + a13x₃; a21x₁ + a22x₂ + a23x₃; a31x₁ + a32x₂ + a33x₃]
= x₁[a11; a21; a31] + x₂[a12; a22; a32] + x₃[a13; a23; a33] = x₁A₁ + x₂A₂ + x₃A₃
Diagonal Matrices

64. Let A = diag(a11, a22, …, ann) and B = diag(b11, b22, …, bnn). By multiplication we get
AB = diag(a11b11, a22b22, …, annbnn),
which is a diagonal matrix.

65. By multiplication of the general diagonal matrices, and commutativity of the resulting individual elements, we have
AB = diag(a11b11, …, annbnn) = diag(b11a11, …, bnnann) = BA.
However, it is not true that a diagonal matrix commutes with an arbitrary matrix.

Upper Triangular Matrices

66. (a) Examples are
[1 2; 0 3],  [1 3 0; 0 0 5; 0 0 2],  [2 7 9 0; 0 3 8 1; 0 0 4 2; 0 0 0 6].

(b) By direct computation, it is easy to see that all the entries below the diagonal in the matrix product
AB = [a11 a12 a13; 0 a22 a23; 0 0 a33][b11 b12 b13; 0 b22 b23; 0 0 b33]
are zero.

(c) In the general case, multiplying two upper-triangular n × n matrices yields an upper-triangular matrix
AB = [c11 c12 … c1n; 0 c22 … c2n; … ; 0 0 … cnn].
We won't bother to write the general expression for the elements c_ij; the important point is that the entries of the product matrix that lie below the main diagonal are clearly zero.

Hard Puzzle

67. If M = [a b; c d] is a square root of A = [0 1; 0 0], then
M² = [a² + bc, b(a + d); c(a + d), d² + bc] = [0 1; 0 0],
which leads to the condition a² = d². Each of the possible cases leads to a contradiction. However, for matrix B, because
[1 0; α −1][1 0; α −1] = [1 0; 0 1]
for any α, we conclude that
B = [1 0; α −1]
is a square root of the identity matrix for any number α.

Orthogonality

68. [1; 2; 3] · [1; k; 0] = 1·1 + 2·k + 3·0 = 0, so 2k = −1 and k = −1/2

69. [k; 2; k] · [1; 0; 4] = k·1 + 2·0 + k·4 = 0, so 5k = 0 and k = 0

70. [k; 0; k²] · [1; 2; 3] = k·1 + 0·2 + k²·3 = 0, so 3k² + k = k(3k + 1) = 0 and k = 0, −1/3

71. [1; 2; k²] · [−1; 1; −1] = 1·(−1) + 2·1 + k²·(−1) = 1 − k² = 0, so k = ±1

Orthogonality Subsets

72. Set [a; b; c] · [1; 0; 1] = a·1 + b·0 + c·1 = a + c = 0, so c = −a.
Orthogonal set = {[a; b; −a] : a, b ∈ R}

73. Set [a; b; c] · [1; 0; 1] = 0 to get c = −a. Set [a; b; c] · [2; 1; 0] = 2a + b·1 + c·0 = 2a + b = 0 to get b = −2a.
Orthogonal set = {[a; −2a; −a] : a ∈ R}

74. Set [a; b; c] · [1; 0; 1] = 0 to get c = −a. Set [a; b; c] · [2; 1; 0] = 0 to get b = −2a. Set [a; b; c] · [3; 4; 5] = a·3 + b·4 + c·5 = 3a − 8a − 5a = −10a = 0, so a = 0.
{[0; 0; 0]} is the orthogonal set.

75. Set [a; b; c] · [1; 0; 1] = 0 to get c = −a. Set [a; b; c] · [2; 1; 0] = 0 to get b = −2a. Set [a; b; c] · [0; −1; 2] = a·0 + b·(−1) + c·2 = −b + 2c = 2a − 2a = 0, which holds automatically.
{[a; −2a; −a] : a ∈ R} is the orthogonal set.

Dot Products

76. [2, −1] • [1, 2] = 0, orthogonal

77. [−3, 0] • [2, 1] = −6, not orthogonal. Because the dot product is negative, the angle between the vectors is greater than 90°.

78. [2, 1, 2] • [3, −1, 0] = 5. Because the dot product is positive, the angle between the vectors is less than 90°.

79. [1, 0, −1] • [1, 1, 1] = 0, orthogonal

80. [5, 7, 5, 1] • [−2, 4, −3, −3] = 0, orthogonal

81. [7, 5, 1, 5] • [4, 3, 2, −3] = 30, not orthogonal

Lengths

82. Introducing the two vectors u = [a, b] and v = [c, d], the distance d between the heads of the vectors is
d = √((a − c)² + (b − d)²).
But we also have
‖u − v‖² = (u − v) • (u − v) = (a − c)² + (b − d)²,
so d = ‖u − v‖. This proof extends easily to u and v in Rⁿ.


Geometric Vector Operations

83. A + C lies on the horizontal axis, from 0 to −2:
A + C = [1, 2] + [−3, −2] = [−2, 0].
[Figure: A = [1, 2], C = [−3, −2], A + C = [−2, 0].]

84. ½A + B = ½[1, 2] + [−3, 1] = [−2.5, 2]
[Figure: A = [1, 2], B = [−3, 1], ½A + B = [−2.5, 2].]

85. A − 2B lies on the horizontal axis, from 0 to 7:
A − 2B = [1, 2] − 2[−3, 1] = [7, 0].
[Figure: A = [1, 2], B = [−3, 1], A − 2B = [7, 0].]

Triangles

86. If [3, 2] and [2, 3] are two sides of a triangle, their difference [1, −1] (or [−1, 1]) is the third side. If we compute the dot products of these sides, we see
[3, 2] • [2, 3] = 12
[3, 2] • [1, −1] = 1
[2, 3] • [1, −1] = −1.
None of these angles are right angles, so the triangle is not a right triangle (see figure).

87. [2, −1, 2] • [1, 0, −1] = 0, so in 3-space these vectors form a right angle, since the dot product is zero.

Properties of Scalar Products

We let a = [a₁ … aₙ], b = [b₁ … bₙ], and c = [c₁ … cₙ] for simplicity.

88. True. a • b = a₁b₁ + ⋯ + aₙbₙ = b₁a₁ + ⋯ + bₙaₙ = b • a.

89. False. Neither (a • b) • c nor a • (b • c) is a valid operation, since each asks for the scalar product of a scalar and a vector, which is not defined.

90. True. (ka) • b = ka₁b₁ + ⋯ + kaₙbₙ = a₁(kb₁) + ⋯ + aₙ(kbₙ) = a • (kb), and both equal k(a • b).

91. True. a • (b + c) = a₁(b₁ + c₁) + ⋯ + aₙ(bₙ + cₙ) = (a₁b₁ + ⋯ + aₙbₙ) + (a₁c₁ + ⋯ + aₙcₙ) = a • b + a • c.
Directed Graphs

92. (a)

0 1 1 0 1

0 0 1 0 0

0 0 0 0 1

0 0 0 0 0

0 0 1 1 0

⎡ ⎤

⎢ ⎥

⎢ ⎥

⎢ ⎥ =

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣ ⎦

A

(b)

2

0 0 2 1 1

0 0 0 0 1

0 0 1 1 0

0 0 0 0 0

0 0 0 0 1

⎡ ⎤

⎢ ⎥

⎢ ⎥

⎢ ⎥ =

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣ ⎦

A

The ijth entry in

2

A gives the number of paths of length 2 from node i to node j.

Tournament Play

93. The tournament graph has adjacency matrix
T = [0 1 1 0 1; 0 0 0 1 1; 0 1 0 0 1; 1 0 1 0 1; 0 0 0 0 0].
Ranking players by the number of games won means summing the elements of each row of T, which in this case gives two ties: 1 and 4, 2 and 3, then 5. Players 1 and 4 have each won 3 games, players 2 and 3 have each won 2 games, and Player 5 has won none.
Second-order dominance can be determined from
T² = [0 1 0 1 2; 1 0 1 0 1; 0 0 0 1 1; 0 2 1 0 2; 0 0 0 0 0].
For example, T² tells us that Player 1 can dominate Player 5 in two second-order ways (by beating either Player 2 or Player 4, both of whom beat Player 5). The sum
T + T² = [0 2 1 1 3; 1 0 1 1 2; 0 1 0 1 2; 1 2 2 0 3; 0 0 0 0 0]
gives the number of ways one player has beaten another both directly and indirectly. Reranking players by the sums of the row elements of T + T² can sometimes break a tie; in this case it does, and ranks the players in the order 4, 1, 2, 3, 5.
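Both the path-counting claim of Problem 92 and the tie-breaking ranking of Problem 93 can be checked by squaring the adjacency matrices directly:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Problem 92: entry (i, j) of A^2 counts length-2 paths from node i to node j.
A = [[0, 1, 1, 0, 1],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0],
     [0, 0, 1, 1, 0]]
A2 = matmul(A, A)

# Problem 93: rerank players by row sums of T + T^2.
T = [[0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1],
     [1, 0, 1, 0, 1],
     [0, 0, 0, 0, 0]]
T2 = matmul(T, T)
totals = [sum(r1) + sum(r2) for r1, r2 in zip(T, T2)]
ranking = sorted(range(1, 6), key=lambda p: -totals[p - 1])
```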

Suggested Journal Entry

94. Student Project


3.2 Systems of Linear Equations

Matrix-Vector Form

1.  [1  2] [x]   [1]
    [2 −1] [y] = [0]
    [3  2]       [1]

                       [1  2 | 1]
    Augmented matrix = [2 −1 | 0]
                       [3  2 | 1]

2.               [i₁]
    [1  2 1 3]   [i₂]   [2]
    [1 −3 3 0]   [i₃] = [1]
                 [i₄]

    Augmented matrix = [1  2 1 3 | 2]
                       [1 −3 3 0 | 1]

3.  [1  2 1] [r]   [ 1]
    [1 −3 3] [s] = [ 1]
    [0  4 5] [t]   [−3]

                       [1  2 1 |  1]
    Augmented matrix = [1 −3 3 |  1]
                       [0  4 5 | −3]

             [x₁]
4.  [1 −2 3] [x₂] = [0]
             [x₃]

    Augmented matrix = [1 −2 3 | 0]

Solutions in R²

5. (A)    6. (B)    7. (C)    8. (B)    9. (A)

A Special Solution Set in R³

10. The three equations

        x + y + z = 1
        2x + 2y + 2z = 2
        3x + 3y + 3z = 3

    are equivalent to the single plane x + y + z = 1, which can be written in parametric
    form by letting y = s, z = t. We then have the parametric form

        {(1 − s − t, s, t): s, t any real numbers}.

Reduced Row Echelon Form

11. RREF 12. Not RREF (not all zeros above leading ones)

13. Not RREF (leading nonzero element in row 2 is not 1; not all zeros above the leading ones)

14. Not RREF (row 3 does not have a leading one, nor does it move to the right; plus pivot columns

have nonzero entries other than the leading ones)

15. RREF 16. Not RREF (not all zeros above leading ones)

17. Not RREF (not all zeros above leading ones)

18. RREF 19. RREF

SECTION 3.2 Systems of Linear Equations 217

Gauss-Jordan Elimination

20. Starting with

    [1 3 8 0]                      [1 3 8 0]                 [1 3 8 0]
    [0 1 2 1]  R₃* = R₃ + (−1)R₂   [0 1 2 1]  R₃* = (1/3)R₃  [0 1 2 1].
    [0 1 2 4]                      [0 0 0 3]                 [0 0 0 1]

    This matrix is in row echelon form. To further reduce it to RREF we carry out the
    following elementary row operations:

    R₂* = R₂ + (−1)R₃     [1 0 2 0]
    R₁* = R₁ + (−3)R₂     [0 1 2 0]  ← RREF.
                          [0 0 0 1]

    Hence, we see the leading ones in this RREF form are in columns 1, 2, and 4, so the
    pivot columns of the original matrix are columns 1, 2, and 4.
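The Gauss-Jordan steps used in Problems 20–23 can be sketched as a short routine. This is
a minimal illustration (the function name rref is our own, not from the text); exact
fractions avoid floating-point surprises, and the returned pivot list gives the pivot
columns directly:

```python
from fractions import Fraction

def rref(rows):
    """Return (RREF, pivot_columns) of a matrix via Gauss-Jordan elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    m, n = len(M), len(M[0])
    pivots = []
    r = 0
    for c in range(n):
        # find a pivot at or below row r in column c
        pivot = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]          # row interchange
        M[r] = [x / M[r][c] for x in M[r]]       # scale to get a leading 1
        for i in range(m):                       # clear the rest of column c
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    return M, pivots

R, piv = rref([[1, 3, 8, 0], [0, 1, 2, 1], [0, 1, 2, 4]])
print(piv)  # 0-indexed pivot columns of the Problem 20 matrix
```

For the Problem 20 matrix this reports pivots in columns 1, 2, and 4 (0-indexed: 0, 1, 3),
matching the hand computation above.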

21. [0 0 2  2 −2]           [2 2 6 14  4]                 [1 1 3 7  2]
    [2 2 6 14  4]  R₁ ↔ R₂  [0 0 2  2 −2]  R₁* = (1/2)R₁  [0 0 2 2 −2]

                   [1 1 3 7  2]
    R₂* = (1/2)R₂  [0 0 1 1 −1].

    The matrix is in row echelon form. To further reduce it to RREF we carry out the
    following elementary row operation:

                        [1 1 0 4  5]
    R₁* = R₁ + (−3)R₂   [0 0 1 1 −1]  ← RREF.

    The pivot columns of the original matrix are the first and third columns.


22. [1 0  0]   R₂* = R₂ + (−2)R₁   [1 0  0]                  [1 0   0]
    [2 4  6]   R₃* = R₃ + (−5)R₁   [0 4  6]  R₂* = (1/4)R₂   [0 1 3/2]
    [5 8 12]                       [0 8 12]                  [0 8  12]
    [0 8 12]                       [0 8 12]                  [0 8  12]

    R₃* = R₃ + (−8)R₂     [1 0   0]
    R₄* = R₄ + (−8)R₂     [0 1 3/2]  ← RREF.
                          [0 0   0]
                          [0 0   0]

    This matrix is in both row echelon form and RREF form. The pivot columns of the
    original matrix are the first and second columns.

23. [1 2  3 1]   R₂* = R₂ + (−3)R₁   [1 2 3 1]
    [3 7 10 4]   R₃* = R₃ + (−2)R₁   [0 1 1 1]  ← row echelon form.
    [2 4  6 2]                       [0 0 0 0]

    To further reduce it to RREF, we carry out the following elementary row operation:

                        [1 0 1 −1]
    R₁* = R₁ + (−2)R₂   [0 1 1  1]  ← RREF.
                        [0 0 0  0]

    The pivot columns of the original matrix are the first and second columns.

Solving Systems

24. [1  1 | 4]                      [1  1 |  4]                   [1 1 | 4]
    [1 −1 | 0]  R₂* = R₂ + (−1)R₁   [0 −2 | −4]  R₂* = −(1/2)R₂   [0 1 | 2]

                        [1 0 | 2]
    R₁* = R₁ + (−1)R₂   [0 1 | 2]  ← RREF.

    Unique solution: x = 2, y = 2.


25. [2 −1 |  0]           [1 −1 | −3]                      [1 −1 | −3]
    [1 −1 | −3]  R₁ ↔ R₂  [2 −1 |  0]  R₂* = R₂ + (−2)R₁   [0  1 |  6]

                   [1 0 | 3]
    R₁* = R₁ + R₂  [0 1 | 6]  ← RREF.

    Unique solution: x = 3, y = 6.

26. [1 1 1 | 0]                      [1 0 0 | −1]
    [0 1 1 | 1]  R₁* = R₁ + (−1)R₂   [0 1 1 |  1]  ← RREF.

    Infinitely many solutions: x = −1, y = 1 − z, z arbitrary.

27. [2 4 −2 | 0]                  [1 2 −1 | 0]                      [1  2 −1 | 0]
    [5 3  0 | 0]  R₁* = (1/2)R₁   [5 3  0 | 0]  R₂* = R₂ + (−5)R₁   [0 −7  5 | 0]

                     [1 2   −1 | 0]                      [1 0  3/7 | 0]
    R₂* = −(1/7)R₂   [0 1 −5/7 | 0]  R₁* = R₁ + (−2)R₂   [0 1 −5/7 | 0]  ← RREF.

    Nonunique solutions: x = −(3/7)z, y = (5/7)z, z is arbitrary.

28. [1 −1 −2 | 1]   R₂* = R₂ + (−2)R₁   [1 −1 −2 |  1]                  [1 −1 −2 |  1]
    [2  3  1 | 2]   R₃* = R₃ + (−5)R₁   [0  5  5 |  0]  R₂* = (1/5)R₂   [0  1  1 |  0]
    [5  4  2 | 4]                       [0  9 12 | −1]                  [0  9 12 | −1]

    R₁* = R₁ + R₂          [1 0 −1 |  1]                  [1 0 −1 |    1]
    R₃* = R₃ + (−9)R₂      [0 1  1 |  0]  R₃* = (1/3)R₃   [0 1  1 |    0]
                           [0 0  3 | −1]                  [0 0  1 | −1/3]

    R₁* = R₁ + R₃          [1 0 0 |  2/3]
    R₂* = R₂ + (−1)R₃      [0 1 0 |  1/3]  ← RREF.
                           [0 0 1 | −1/3]

    Unique solution: x = 2/3, y = 1/3, z = −1/3.


29. [1  4 −5 | 0]                      [1  4 −5 | 0]                   [1 4 −5 |  0]
    [2 −1  8 | 9]  R₂* = R₂ + (−2)R₁   [0 −9 18 | 9]  R₂* = −(1/9)R₂   [0 1 −2 | −1]

                        [1 0  3 |  4]
    R₁* = R₁ + (−4)R₂   [0 1 −2 | −1]  ← RREF.

    Nonunique solutions: x₁ = 4 − 3x₃, x₂ = −1 + 2x₃, x₃ is arbitrary.

30. [1  0  1 | 2]   R₂* = R₂ + (−2)R₁   [1  0  1 |  2]                   [1 0  1 |  2]
    [2 −3  5 | 4]   R₃* = R₃ + (−3)R₁   [0 −3  3 |  0]  R₂* = −(1/3)R₂   [0 1 −1 |  0]
    [3  2 −1 | 4]                       [0  2 −4 | −2]                   [0 2 −4 | −2]

                        [1 0  1 |  2]                   [1 0  1 | 2]
    R₃* = R₃ + (−2)R₂   [0 1 −1 |  0]  R₃* = −(1/2)R₃   [0 1 −1 | 0]
                        [0 0 −2 | −2]                   [0 0  1 | 1]

    R₁* = R₁ + (−1)R₃   [1 0 0 | 1]
    R₂* = R₂ + R₃       [0 1 0 | 1]  ← RREF.
                        [0 0 1 | 1]

    Unique solution: x = y = z = 1.

31. [1 −1  1 | 0]   R₂* = R₂ + (−1)R₁   [1 −1  1 | 0]                  [1 −1    1 | 0]
    [1  1  0 | 0]   R₃* = R₃ + (−1)R₁   [0  2 −1 | 0]  R₂* = (1/2)R₂   [0  1 −1/2 | 0]
    [1  2 −1 | 0]                       [0  3 −2 | 0]                  [0  3   −2 | 0]

    R₁* = R₁ + R₂          [1 0  1/2 | 0]                 [1 0  1/2 | 0]
    R₃* = R₃ + (−3)R₂      [0 1 −1/2 | 0]  R₃* = (−2)R₃   [0 1 −1/2 | 0]
                           [0 0 −1/2 | 0]                 [0 0    1 | 0]

    R₁* = R₁ + (−1/2)R₃    [1 0 0 | 0]
    R₂* = R₂ + (1/2)R₃     [0 1 0 | 0]  ← RREF.
                           [0 0 1 | 0]

    Unique solution: x = y = z = 0.


32. [1  1 2 | 0]   R₂* = R₂ + (−2)R₁   [1  1  2 | 0]                   [1  1  2 | 0]
    [2 −1 1 | 0]   R₃* = R₃ + (−4)R₁   [0 −3 −3 | 0]  R₂* = −(1/3)R₂   [0  1  1 | 0]
    [4  1 5 | 0]                       [0 −3 −3 | 0]                   [0 −3 −3 | 0]

    R₁* = R₁ + (−1)R₂   [1 0 1 | 0]
    R₃* = R₃ + 3R₂      [0 1 1 | 0]  ← RREF.
                        [0 0 0 | 0]

    Nonunique solutions: x = −z, y = −z, z is arbitrary.

33. [1  1 2 | 1]   R₂* = R₂ + (−2)R₁   [1  1  2 | 1]                   [1  1  2 | 1]
    [2 −1 1 | 2]   R₃* = R₃ + (−4)R₁   [0 −3 −3 | 0]  R₂* = −(1/3)R₂   [0  1  1 | 0]
    [4  1 5 | 4]                       [0 −3 −3 | 0]                   [0 −3 −3 | 0]

    R₁* = R₁ + (−1)R₂   [1 0 1 | 1]
    R₃* = R₃ + 3R₂      [0 1 1 | 0]  ← RREF.
                        [0 0 0 | 0]

    Nonunique solutions: x = 1 − z, y = −z, z is arbitrary.
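The one-parameter family found in Problem 33 can be sanity-checked by substituting several
values of the free variable back into the original system. A minimal check (the helper
name residual is our own):

```python
# Problem 33: every member of x = 1 - z, y = -z should satisfy A(x,y,z) = b.
A = [[1, 1, 2], [2, -1, 1], [4, 1, 5]]
b = [1, 2, 4]

def residual(x, y, z):
    """Largest deviation of A(x, y, z) from b over the three equations."""
    v = (x, y, z)
    return max(abs(sum(r[i] * v[i] for i in range(3)) - bi)
               for r, bi in zip(A, b))

checks = [residual(1 - z, -z, z) for z in (-2, 0, 0.5, 3)]
print(checks)  # all residuals should be zero
```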

34. The system

        x + 2y + z = 2
        2x − 4y − 3z = 0
        −x + 6y − 4z = 2
        x − y = 4

    has augmented matrix

    [ 1  2  1 | 2]
    [ 2 −4 −3 | 0]
    [−1  6 −4 | 2]
    [ 1 −1  0 | 4].

    Eliminating the first column (R₂* = R₂ + (−2)R₁, R₃* = R₃ + R₁, R₄* = R₄ + (−1)R₁)
    and then continuing the reduction down the second and third columns leaves a row of
    the form [0 0 0 | c] with c ≠ 0. The system is clearly inconsistent at this point,
    so the

           [1 0 0 0]
    RREF = [0 1 0 0]
           [0 0 1 0]
           [0 0 0 1].


35. The system

        x + 2y + z = 2
        x − y = 4
        2x − y + 2z = 0
        3y + z = −2

    has augmented matrix

    [1  2 1 |  2]
    [1 −1 0 |  4]
    [2 −1 2 |  0]
    [0  3 1 | −2].

    The operations R₂* = R₂ + (−1)R₁ and R₃* = R₃ + (−2)R₁ give

    [1  2  1 |  2]
    [0 −3 −1 |  2]
    [0 −5  0 | −4]
    [0  3  1 | −2],

    and R₄* = R₄ + R₂ produces a zero fourth row. Continuing the reduction (scale the
    pivot rows and clear the remaining entries of columns 2 and 3) yields

           [1 0 0 |  24/5]
    RREF = [0 1 0 |   4/5]
           [0 0 1 | −22/5]
           [0 0 0 |     0].

    There is a unique solution: x = 24/5, y = 4/5, and z = −22/5.

36. The system

        x₁ + 2x₃ − 4x₄ = 1
        x₂ + x₃ − 3x₄ = 2

    has augmented matrix

    [1 0 2 −4 | 1]
    [0 1 1 −3 | 2],

    which is already in RREF. ∴ infinitely many solutions:

        x₁ = −2r + 4s + 1
        x₂ = −r + 3s + 2,    r, s ∈ R
        x₃ = r,  x₄ = s

Using the Nonhomogeneous Principle

37. In Problem 24, [2; 2] is a unique solution, so W = {0} and x = [2; 2] + 0.


38. In Problem 25, [3; 6] is a unique solution, so W = {0} and x = [3; 6] + 0.
39. In Problem 26 there are infinitely many solutions [−1; 1 − z; z], and

        W = {r[0; −1; 1]: r ∈ R},

    hence x = [−1; 1; 0] + r[0; −1; 1] for any r ∈ R.

40. In Problem 27 (already a homogeneous system) there are infinitely many solutions:

        W = {r[−3; 5; 7]: r ∈ R}  and  x = [0; 0; 0] + r[−3; 5; 7] for any r ∈ R.

41. In Problem 28, [2/3; 1/3; −1/3] is a unique solution, so W = {0} and
    x = [2/3; 1/3; −1/3] + 0.

42. In Problem 29 there are infinitely many solutions: x₁ = 4 − 3x₃, x₂ = −1 + 2x₃,
    x₃ arbitrary, so

        W = {r[−3; 2; 1]: r ∈ R}  and  x = [4; −1; 0] + r[−3; 2; 1] for any r ∈ R.
43. In Problem 30, unique solution

1

1

1

⎡ ⎤

⎢ ⎥

⎢ ⎥

⎢ ⎥

⎣ ⎦

so W =

{ }

0

and

1

1

1

⎡ ⎤

⎢ ⎥

= +

⎢ ⎥

⎢ ⎥

⎣ ⎦

x 0


44. In Problem 31 there is the unique solution [0; 0; 0], so W = {0} and x = 0 + 0 = 0.

45. In Problem 32 there are infinitely many solutions x = −z, y = −z, z arbitrary, so

        W = {r[−1; −1; 1]: r ∈ R}  and  x = 0 + r[−1; −1; 1] for any r ∈ R.

46. In Problem 33 there are nonunique solutions x = 1 − z, y = −z, z arbitrary, so

        W = {r[−1; −1; 1]: r ∈ R}  and  x = [1; 0; 0] + r[−1; −1; 1] for any r ∈ R.

47. In Problem 34 the associated homogeneous system has only the trivial solution, so
    W = {0}. However, the system is inconsistent, so there is no x_p and no general
    solution.

48. In Problem 35 there is a unique solution x = 24/5, y = 4/5, z = −22/5, so W = {0}
    and x = [24/5; 4/5; −22/5] + 0.


49. In Problem 36 there are infinitely many solutions: x₁ = 1 − 2x₃ + 4x₄,
    x₂ = 2 − x₃ + 3x₄, with x₃ and x₄ arbitrary, so

        W = {r[−2; −1; 1; 0] + s[4; 3; 0; 1]: r, s ∈ R},

        x = [1; 2; 0; 0] + r[−2; −1; 1; 0] + s[4; 3; 0; 1] for r, s ∈ R.
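The nonhomogeneous principle behind Problems 37–49 (general solution = one particular
solution plus the homogeneous solution space W) can be spot-checked numerically. A minimal
sketch using the Problem 36 data (the helper name apply is our own):

```python
# Any particular solution plus any member of W solves A x = b.
A = [[1, 0, 2, -4], [0, 1, 1, -3]]
b = [1, 2]
xp = [1, 2, 0, 0]            # particular solution
h1 = [-2, -1, 1, 0]          # basis vectors of W
h2 = [4, 3, 0, 1]

def apply(A, x):
    return [sum(r[i] * x[i] for i in range(len(x))) for r in A]

for r, s in [(0, 0), (1, -2), (-3, 5)]:
    x = [p + r * u + s * v for p, u, v in zip(xp, h1, h2)]
    assert apply(A, x) == b          # solves the nonhomogeneous system
assert apply(A, h1) == [0, 0] and apply(A, h2) == [0, 0]
print("nonhomogeneous principle verified")
```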

The RREF Example

50. Starting with the augmented matrix, we carry out the following steps:

    [1  0 2  0  1  4 |  8]
    [0  2 0 −2 −4 −6 |  6]   R₂* = (1/2)R₂
    [0  0 1  0  0  2 |  2]   R₄* = R₄ + (−3)R₁
    [3  0 0  1  5  3 | 12]
    [0 −2 0  0  0  0 | −6]

    [1  0  2  0  1  4 |   8]
    [0  1  0 −1 −2 −3 |   3]
    [0  0  1  0  0  2 |   2]
    [0  0 −6  1  2 −9 | −12]
    [0 −2  0  0  0  0 |  −6]

    (We leave the next steps for the reader.)

           [1 0 0 0 1 0 | 4]
           [0 1 0 0 0 0 | 3]
    RREF = [0 0 1 0 0 2 | 2]
           [0 0 0 1 2 3 | 0]
           [0 0 0 0 0 0 | 0]

More Equations Than Variables

51. Converting the augmented matrix to RREF yields

    [3  5 0 | 1]      [1 0 0 |  2]
    [3  7 3 | 8]      [0 1 0 | −1]
    [0 −5 0 | 5]  →   [0 0 1 |  3]
    [0  2 3 | 7]      [0 0 0 |  0]
    [1  4 1 | 1]      [0 0 0 |  0]

    Consistent system; unique solution x = 2, y = −1, z = 3.

Consistency

52. A homogeneous system Ax = 0 always has at least one solution, namely the zero
    vector x = 0.


Homogeneous Systems

53. The equations are

        w − 2x + 5z = 0
        y + 2z = 0.

    If we let x = r and z = s, we can solve y = −2s, w = 2r − 5s. The solution is a
    plane in R⁴ given by

        [w]   [2r − 5s]    [2]    [−5]
        [x] = [   r   ] = r[1] + s[ 0],
        [y]   [  −2s  ]    [0]    [−2]
        [z]   [   s   ]    [0]    [ 1]

    for r, s any real numbers.

54. The equations are

        x + 2z = 0
        y = 0.

    If we let z = s, we have x = −2s, and hence the solution is a line in R³ given by

        [x]   [−2s]    [−2]
        [y] = [ 0 ] = s[ 0].
        [z]   [ s ]    [ 1]

55. The equation is x₁ − 4x₂ + 3x₃ + 0x₄ = 0. If we let x₂ = r, x₃ = s, x₄ = t, we can
    solve x₁ = 4x₂ − 3x₃ = 4r − 3s. Hence

        [x₁]   [4r − 3s]    [4]    [−3]    [0]
        [x₂] = [   r   ] = r[1] + s[ 0] + t[0],
        [x₃]   [   s   ]    [0]    [ 1]    [0]
        [x₄]   [   t   ]    [0]    [ 0]    [1]

    where r, s, t are any real numbers.

Making Systems Inconsistent

56. [1 0 3]   R₂* = (1/2)R₂        [1 0 3]                  [1 0 3]
    [0 2 4]   R₃* = R₃ + (−1)R₁    [0 1 2]  R₃* = (1/2)R₃   [0 1 2]
    [1 0 5]                        [0 0 2]                  [0 0 1]

    Rank = 3 because every column is a pivot column.


57. [4  5]
    [1  6]   Rank = 2.
    [3 −1]

    Augmenting with an arbitrary right-hand side and row reducing:

    [4  5 | a]           [1  6 | b]   R₂* = R₂ + (−4)R₁   [1   6 | b     ]
    [1  6 | b]  R₁ ↔ R₂  [4  5 | a]   R₃* = R₃ + (−3)R₁   [0 −19 | a − 4b]
    [3 −1 | c]           [3 −1 | c]                       [0 −19 | c − 3b]

                        [1   6 | b         ]
    R₃* = R₃ + (−1)R₂   [0 −19 | a − 4b    ]
                        [0   0 | −a + b + c].

    Thus the system is inconsistent for all vectors [a; b; c] for which a − b − c ≠ 0.

58. Find the RREF:

    [−1 2  1]   R₁ ↔ R₂        [ 1  0 −3]                 [1  0 −3]
    [ 1 0 −3]   R₃* = (−1)R₃   [−1  2  1]  R₂* = R₂ + R₁  [0  2 −2]
    [ 0 1 −2]                  [ 0 −1  2]                 [0 −1  2]

                     [1  0 −3]                 [1 0 −3]
    R₂* = (1/2)R₂    [0  1 −1]  R₃* = R₃ + R₂  [0 1 −1]
                     [0 −1  2]                 [0 0  1]

    Then R₁* = R₁ + 3R₃ and R₂* = R₂ + R₃ give

    [1 0 0]
    [0 1 0]    ∴ rank A = 3.
    [0 0 1]

59. Find the RREF:

    [1  1 2]   R₂* = R₂ + (−2)R₁   [1  1  2]   R₂* = −(1/3)R₂      [1 1 2]
    [2 −1 1]   R₃* = R₃ + (−4)R₁   [0 −3 −3]   R₃* = R₃ + (−1)R₂   [0 1 1]
    [4  1 5]                       [0 −3 −3]                       [0 0 0]

    ∴ rank A = 2.

    For arbitrary a, b, and c:

    [1  1 2 | a]   R₂* = R₂ + (−2)R₁   [1  1  2 | a      ]
    [2 −1 1 | b]   R₃* = R₃ + (−4)R₁   [0 −3 −3 | −2a + b]
    [4  1 5 | c]                       [0 −3 −3 | −4a + c]

                        [1  1  2 | a          ]
    R₃* = R₃ + (−1)R₂   [0 −3 −3 | −2a + b    ]
                        [0  0  0 | −2a − b + c].

    The system is inconsistent for any vector [a; b; c] for which −2a − b + c ≠ 0.
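The consistency condition in Problem 59 says the attainable right-hand sides form the
plane −2a − b + c = 0 in R³. A quick numerical check (the helper name image is our own):
every vector A x automatically satisfies the condition.

```python
# Problem 59: A x always lies on the plane -2a - b + c = 0, so the
# system [A | (a,b,c)] is consistent exactly when -2a - b + c = 0.
A = [[1, 1, 2], [2, -1, 1], [4, 1, 5]]

def image(x, y, z):
    return [r[0] * x + r[1] * y + r[2] * z for r in A]

for x, y, z in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (3, -2, 5)]:
    a, b, c = image(x, y, z)
    assert -2 * a - b + c == 0       # every attainable RHS satisfies it
print("consistency condition verified")
```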

60. [1 −1  1]   R₂* = R₂ + (−1)R₁   [1 −1  1]                  [1 −1    1]
    [1  1  0]   R₃* = R₃ + (−1)R₁   [0  2 −1]  R₂* = (1/2)R₂   [0  1 −1/2]
    [1  2 −1]                       [0  3 −2]                  [0  3   −2]

    R₁* = R₁ + R₂        [1 0  1/2]                 [1 0  1/2]
    R₃* = R₃ + (−3)R₂    [0 1 −1/2]  R₃* = (−2)R₃   [0 1 −1/2]
                         [0 0 −1/2]                 [0 0    1]

    R₁* = R₁ + (−1/2)R₃   [1 0 0]
    R₂* = R₂ + (1/2)R₃    [0 1 0]    ∴ rank A = 3.
                          [0 0 1]

Seeking Consistency

61. k ≠ 4

62. Any k will produce a consistent system.

63. k ≠ ±1

64. The system is inconsistent for all k because the last two equations are parallel and
    distinct.

65. [1  0  0 1 |  2]   R₂* = (1/2)R₂         [1  0  0 1 |     2]
    [0  2  4 0 |  6]   R₃* = R₃ + (−1)R₁     [0  1  2 0 |     3]
    [1 −1 −2 1 | −1]   R₄* = R₄ + (−2)R₁     [0 −1 −2 0 |    −3]
    [2  2  4 2 |  k]                         [0  2  4 0 | k − 4]

    R₃* = R₃ + R₂          [1 0 2 0 |      3]
    R₄* = R₄ + (−2)R₂      [0 1 2 0 |      3]
                           [0 0 0 0 |      0]
                           [0 0 0 0 | k − 10]

    Consistent if k = 10.
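The conclusion k = 10 can be tested by sweeping k through a range and checking each
system for consistency. A minimal sketch (the function names consistent and system are
our own); the elimination detects an inconsistent row [0 … 0 | nonzero]:

```python
from fractions import Fraction

def consistent(aug):
    """Forward elimination; False iff a row reduces to [0 ... 0 | nonzero]."""
    M = [[Fraction(x) for x in row] for row in aug]
    m, n = len(M), len(M[0])
    r = 0
    for c in range(n - 1):                    # n-1: skip the augmented column
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return all(row[-1] == 0 for row in M[r:])

def system(k):
    return [[1, 0, 0, 1, 2],
            [0, 2, 4, 0, 6],
            [1, -1, -2, 1, -1],
            [2, 2, 4, 2, k]]

print([k for k in range(0, 21) if consistent(system(k))])
```

Only k = 10 survives the sweep, matching the hand computation.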


Not Enough Equations

66. a.  [2  1 0 0 | 3]           [1 −1 1 1 | 3]   R₂* = R₂ + (−2)R₁
        [1 −1 1 1 | 3]  R₁ ↔ R₂  [2  1 0 0 | 3]   R₃* = R₃ + (−2)R₁
        [2 −3 4 4 | 9]           [2 −3 4 4 | 9]

        Continuing the reduction (swap rows 2 and 3, then eliminate the remaining entries
        below the pivots) produces a row-echelon form with 3 pivot columns, so the rank
        is 3. The matrix represents a consistent system in four unknowns, so there are
        infinitely many solutions.

    b.  [2  1  0  0 |  3]           [1 −1  1  1 |  3]   R₂* = R₂ + (−2)R₁
        [1 −1  1  1 |  3]  R₁ ↔ R₂  [2  1  0  0 |  3]   R₃* = R₃ + (−1)R₁
        [1  2 −1 −1 | −6]           [1  2 −1 −1 | −6]

        [1 −1  1  1 |  3]                      [1 −1  1  1 |  3]
        [0  3 −2 −2 | −3]   R₃* = R₃ + (−1)R₂  [0  3 −2 −2 | −3]
        [0  3 −2 −2 | −9]                      [0  0  0  0 | −6]

        Clearly inconsistent; no solutions.

Not Enough Variables

67. Matrices with the following RREFs,

    [1 0 | a]      [1 0 | 1]      [1 0 | a]
    [0 1 | b]      [0 0 | 0]      [0 0 | b]
    [0 0 | 0]      [0 0 | 0]      [0 0 | 0]
    [0 0 | 0],     [0 0 | 0],     [0 0 | 0],

    where a and b are nonzero real numbers, will have, respectively, a unique solution,
    infinitely many solutions, and no solutions.

True/False Questions

68. a) False.  [1 2]      [1 0]
               [3 0]  and [0 2]  have the same RREF (the identity matrix).

    b) False.  A = [1]  has rank 1, but the system  [1] [a] = [2]
                   [0]                              [0]       [1]

       requires a = 2 and 0·a = 1, a contradiction, ∴ no solutions.

    c) False. Consider the matrix

           A = [1 1]   and   b = [1]
               [1 1]             [2].

       Then  [1 1 | 1]  has RREF  [1 1 | 0]
             [1 1 | 2]            [0 0 | 1],

       so the system is inconsistent. However, the system Ax = c, where c = [2; 2],
       is consistent.

Equivalence of Systems

69. Inverse of Rᵢ ↔ Rⱼ: The operation that puts the system back the way it was is
    Rⱼ ↔ Rᵢ. In other words, the operation R₃ ↔ R₁ will undo the operation R₁ ↔ R₃.

    Inverse of Rᵢ* = cRᵢ: The operation that puts the system back the way it was is
    Rᵢ* = (1/c)Rᵢ. In other words, the operation R₁* = (1/3)R₁ will undo the operation
    R₁* = 3R₁.

    Inverse of Rᵢ* = Rᵢ + cRⱼ: The operation that puts the system back is
    Rᵢ* = Rᵢ − cRⱼ. This is clear because if we add cRⱼ to row i and then subtract cRⱼ
    from row i, then row i will be unchanged. For example,

    [1 2 3]                   [5 4 5]                      [1 2 3]
    [2 1 1],  R₁* = R₁ + 2R₂  [2 1 1],  R₁* = R₁ + (−2)R₂  [2 1 1].

Homogeneous versus Nonhomogeneous

70. For the homogeneous equation of Problem 32, we can write the solution as

        x_h = c[−1; −1; 1],   c ∈ R,

    where c is an arbitrary constant.

    For the nonhomogeneous equation of Problem 33, we can write the solution as

        x = [x; y; z] = c[−1; −1; 1] + [1; 0; 0],   c ∈ R.

    In other words, the general solution of the nonhomogeneous algebraic system,
    Problem 33, is the sum of the solutions of the associated homogeneous equation plus
    a particular solution.

Solutions in Tandem

71. There is nothing surprising here. By placing the two right-hand sides in the last two columns of

the augmented matrix, the student is simply organizing the material effectively. Neither of the last

two columns affects the other column, so the last two columns will contain the respective

solutions.


Tandem with a Twist

72. (a) We place the right-hand sides of the two systems in the last two columns of the
        augmented matrix

        [1 1 0 | 3 5]
        [0 2 1 | 2 4].

        Reducing this matrix to RREF yields

        [1 0 −1/2 | 2 3]
        [0 1  1/2 | 1 2].

        Hence, the first system has solutions x = 2 + (1/2)z, y = 1 − (1/2)z, z
        arbitrary, and the second system has solutions x = 3 + (1/2)z, y = 2 − (1/2)z,
        z arbitrary.

    (b) If you look carefully, you will see that the matrix equation

                   [x₁₁ x₁₂]
        [1 1 0]    [x₂₁ x₂₂]  =  [3 5]
        [0 2 1]    [x₃₁ x₃₂]     [2 4]

        is equivalent to the two systems of equations

        [1 1 0] [x₁₁; x₂₁; x₃₁] = [3; 2]   and   [1 1 0] [x₁₂; x₂₂; x₃₂] = [5; 4].
        [0 2 1]                                  [0 2 1]

        We saw in part (a) that the solution of the system on the left was
        x₁₁ = 2 + (1/2)x₃₁, x₂₁ = 1 − (1/2)x₃₁, x₃₁ arbitrary, and the solution of the
        system on the right was x₁₂ = 3 + (1/2)x₃₂, x₂₂ = 2 − (1/2)x₃₂, x₃₂ arbitrary.
        Putting these solutions in the columns of our unknown matrix X and calling
        x₃₁ = α, x₃₂ = β, we have

            [2 + (1/2)α   3 + (1/2)β]
        X = [1 − (1/2)α   2 − (1/2)β].
            [     α            β    ]


Two Thousand Year Old Problem

73. Letting A₁ and A₂ be the areas of the two fields in square yards, we are given the
    two equations

        A₁ + A₂ = 1800 square yards
        (2/3)A₁ + (1/2)A₂ = 1100 bushels.

    The areas of the two fields are 1200 and 600 square yards.
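The stated areas follow from a one-line substitution, sketched here with exact arithmetic
so the fractions 2/3 and 1/2 are handled cleanly:

```python
# Problem 73 as a 2x2 system:
#   A1 + A2 = 1800               (total area)
#   (2/3)A1 + (1/2)A2 = 1100     (total yield in bushels)
from fractions import Fraction

# Substitute A2 = 1800 - A1 into the yield equation and solve for A1:
#   (2/3 - 1/2) A1 = 1100 - (1/2)(1800)
A1 = (Fraction(1100) - Fraction(1, 2) * 1800) / (Fraction(2, 3) - Fraction(1, 2))
A2 = 1800 - A1
print(A1, A2)  # 1200 and 600 square yards
```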

Computerizing

74. 2 × 2 Case. To solve the 2 × 2 system

        a₁₁x₁ + a₁₂x₂ = b₁
        a₂₁x₁ + a₂₂x₂ = b₂

    we start by forming the augmented matrix

        [A | b] = [a₁₁ a₁₂ | b₁]
                  [a₂₁ a₂₂ | b₂].

    Step 1: If a₁₁ ≠ 0, factor it out of row 1. If a₁₁ = 0, interchange the rows and
    then factor the new element in the 11 position out of the first row. (This gives a
    1 in the first position of the first row.)

    Step 2: Subtract from the second row the first row times the element in the 21
    position of the new matrix. (This gives a zero in the first position of the second
    row.)

    Step 3: Factor the element in the 22 position from the second row of the new
    matrix. If this element is zero and the element in the 23 position is nonzero,
    there are no solutions. If both this element and the element in the 23 position are
    zero, then there are an infinite number of solutions; to find them, write out the
    equation corresponding to the first row of the final matrix. (This gives a 1 in the
    first nonzero position of the second row.)

    Step 4: Subtract from the first row the second row times the element in the 12
    position of the new matrix. This operation will yield a matrix of the form

        [1 0 | r₁]
        [0 1 | r₂],

    where x₁ = r₁, x₂ = r₂. (This gives a zero in the second position of the first
    row.)

75. The basic idea is to formalize a strategy like that used in Example 3. The
    augmented matrix for Ax = b is

        [a₁₁ a₁₂ a₁₃ | b₁]
        [a₂₁ a₂₂ a₂₃ | b₂].
        [a₃₁ a₃₂ a₃₃ | b₃]

    A pseudocode might begin:

    1. To get a one in first place in row 1, multiply every element of row 1 by 1/a₁₁.
    2. To get a zero in first place in row 2, replace row 2 by (row 2 − a₂₁ · row 1).
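The four steps of the 2 × 2 case in Problem 74 can be sketched directly as code. This is
an illustrative implementation for the unique-solution case only (the function name
solve2x2 is our own, and the degenerate branches of Step 3 are omitted):

```python
def solve2x2(a11, a12, a21, a22, b1, b2):
    """Gauss-Jordan steps for a 2x2 system, assuming a unique solution exists."""
    # Step 1: get a leading 1 in row 1 (interchange rows first if a11 == 0).
    if a11 == 0:
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    a12, b1 = a12 / a11, b1 / a11
    # Step 2: zero out the 21 position.
    a22, b2 = a22 - a21 * a12, b2 - a21 * b1
    # Step 3: scale row 2 by its pivot (assumed nonzero here).
    b2 = b2 / a22
    # Step 4: zero out the 12 position.
    b1 = b1 - a12 * b2
    return b1, b2   # x1, x2

x1, x2 = solve2x2(2, 1, 1, -1, 5, 1)   # 2x + y = 5, x - y = 1
print(x1, x2)
```

For the sample system 2x + y = 5, x − y = 1 this returns x = 2, y = 1.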


Electrical Circuits

76. (a) There are four junctions in this multicircuit, and Kirchhoff's current law
        states that the sum of the currents flowing in and out of any junction is zero.
        The given equations simply state this fact for the four junctions J₁, J₂, J₃,
        and J₄, respectively. Keep in mind that if a current is negative in sign, then
        the actual current flows in the direction opposite the indicated arrow.

    (b) The augmented system is

        [ 1 −1 −1  0  0  0 | 0]
        [ 0  1  0  1 −1  0 | 0]
        [ 0  0  1 −1  0 −1 | 0]
        [−1  0  0  0  1  1 | 0].

        Carrying out three elementary row operations, we can transform this system to
        RREF:

        [1 0 0  0 −1 −1 | 0]
        [0 1 0  1 −1  0 | 0]
        [0 0 1 −1  0 −1 | 0]
        [0 0 0  0  0  0 | 0].

        Solving for the lead variables I₁, I₂, I₃ in terms of the free variables I₄,
        I₅, I₆, we have I₁ = I₅ + I₆, I₂ = −I₄ + I₅, I₃ = I₄ + I₆. In matrix form,
        this becomes

        [I₁]      [ 0]      [1]      [1]
        [I₂]      [−1]      [1]      [0]
        [I₃] = I₄ [ 1] + I₅ [0] + I₆ [1],
        [I₄]      [ 1]      [0]      [0]
        [I₅]      [ 0]      [1]      [0]
        [I₆]      [ 0]      [0]      [1]

        where I₄, I₅, and I₆ are arbitrary. In other words, we need three of the six
        currents to uniquely specify the remaining ones.
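The parametric solution of Problem 76(b) can be checked by plugging arbitrary free
currents back into the four junction equations (the helper name currents is our own):

```python
# Choosing the free currents I4, I5, I6 determines I1, I2, I3, and the
# resulting vector should satisfy every junction equation.
junctions = [[1, -1, -1, 0, 0, 0],
             [0, 1, 0, 1, -1, 0],
             [0, 0, 1, -1, 0, -1],
             [-1, 0, 0, 0, 1, 1]]

def currents(i4, i5, i6):
    return [i5 + i6, -i4 + i5, i4 + i6, i4, i5, i6]

for trial in [(1, 2, 3), (0, 0, 0), (-2, 5, 1)]:
    I = currents(*trial)
    assert all(sum(r[k] * I[k] for k in range(6)) == 0 for r in junctions)
print("all junction equations balance")
```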


More Circuit Analysis

77.  I₁ − I₂ − I₃ = 0
    −I₁ + I₂ + I₃ = 0

78.  I₁ − I₂ − I₃ − I₄ = 0
    −I₁ + I₂ + I₃ + I₄ = 0

79.  I₁ − I₂ − I₃ − I₄ = 0
    −I₁ + I₂ + I₅ = 0
     I₃ + I₄ − I₅ = 0

80.  I₁ − I₂ − I₃ = 0
     I₂ − I₄ − I₅ = 0
     I₃ + I₄ − I₆ = 0
    −I₁ + I₅ + I₆ = 0

Suggested Journal Entry I

81. Student Project

Suggested Journal Entry II

82. Student Project

SECTION 3.3 The Inverse of a Matrix 235

3.3

The Inverse of a Matrix

Checking Inverses

1.  [5 3] [−1  3]   [(5)(−1) + (3)(2)   (5)(3) + (3)(−5)]   [1 0]
    [2 1] [ 2 −5] = [(2)(−1) + (1)(2)   (2)(3) + (1)(−5)] = [0 1]

2.  Direct multiplication of the given pair of matrices, exactly as in Problem 1, also
    yields the identity matrix, confirming that the second matrix is the inverse of the
    first.

3. Direct multiplication as in Problems 1–2.

4. Direct multiplication as in Problems 1–2.

Matrix Inverses

5.  We reduce [A | I] to RREF:

    [2 0 | 1 0]                  [1 0 | 1/2 0]                      [1 0 |  1/2 0]
    [1 1 | 0 1]  R₁* = (1/2)R₁   [1 1 |  0  1]  R₂* = R₂ + (−1)R₁   [0 1 | −1/2 1].

    Hence,

          [ 1/2 0]
    A⁻¹ = [−1/2 1].

6.  We reduce [A | I] to RREF:

    [1 3 | 1 0]                      [1  3 |  1 0]                 [1 3 |  1  0]
    [2 5 | 0 1]  R₂* = R₂ + (−2)R₁   [0 −1 | −2 1]  R₂* = (−1)R₂   [0 1 |  2 −1]

                        [1 0 | −5  3]
    R₁* = R₁ + (−3)R₂   [0 1 |  2 −1].

    Hence,

          [−5  3]
    A⁻¹ = [ 2 −1].


7.  Starting with

              [0  1  1 | 1 0 0]
    [A | I] = [5  1 −1 | 0 1 0],
              [3 −3 −3 | 0 0 1]

    we interchange R₁ and R₂, scale the new first row by 1/5, and eliminate the first
    column (R₃* = R₃ + (−3)R₁). Continuing the Gauss-Jordan reduction down the second
    and third columns transforms the matrix to

    [1 0 0 |  1    0   1/3]
    [0 1 0 | −2  1/2 −5/6].
    [0 0 1 |  3 −1/2  5/6]

    Hence,

          [ 1    0   1/3]
    A⁻¹ = [−2  1/2 −5/6].
          [ 3 −1/2  5/6]
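The [A | I] → [I | A⁻¹] reduction used in Problems 5–7 can be sketched as a routine and
used to double-check the answer to Problem 7. This is a minimal illustration (the
function name inverse is our own; it assumes A is invertible):

```python
from fractions import Fraction

def inverse(A):
    """Invert A by reducing [A | I] to [I | A^-1] with exact arithmetic."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)   # pivot search
        M[c], M[piv] = M[piv], M[c]                          # row interchange
        M[c] = [x / M[c][c] for x in M[c]]                   # scale pivot row
        for i in range(n):                                   # clear column c
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

A = [[0, 1, 1], [5, 1, -1], [3, -3, -3]]
Ainv = inverse(A)
print(Ainv)  # matches the A^-1 computed by hand in Problem 7
```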


8.  Interchanging the first and third rows, we get

              [0 0 1 | 1 0 0]      [1 0 0 | 0 0 1]
    [A | I] = [0 1 0 | 0 1 0]  →   [0 1 0 | 0 1 0] = [I | A⁻¹],
              [1 0 0 | 0 0 1]      [0 0 1 | 1 0 0]

    so

          [0 0 1]
    A⁻¹ = [0 1 0] = A.
          [1 0 0]

9.  Dividing the first row by k gives

              [k 0 0 | 1 0 0]      [1 0 0 | 1/k 0 0]
    [A | I] = [0 1 0 | 0 1 0]  →   [0 1 0 |  0  1 0] = [I | A⁻¹].
              [0 0 1 | 0 0 1]      [0 0 1 |  0  0 1]

    Hence

          [1/k 0 0]
    A⁻¹ = [ 0  1 0].
          [ 0  0 1]

10. [ 1 0 1 | 1 0 0]      [1 0 1 | 1 0  0]      [1 0 0 | −1 −2  1]
    [−1 1 0 | 0 1 0]  →   [0 1 1 | 1 1  0]  →   [0 1 0 | −1 −1  1]
    [ 0 2 1 | 0 0 1]      [0 0 1 | 2 2 −1]      [0 0 1 |  2  2 −1]

    (using R₂* = R₂ + R₁; then R₃* = R₃ + (−2)R₂ and R₃* = (−1)R₃; finally
    R₁* = R₁ + (−1)R₃ and R₂* = R₂ + (−1)R₃). Hence

          [−1 −2  1]
    A⁻¹ = [−1 −1  1].
          [ 2  2 −1]

11. [1 0 0 0 | 1 0 0 0]      [1 0 0 0 | 1 0  0 0]
    [0 1 k 0 | 0 1 0 0]  →   [0 1 0 0 | 0 1 −k 0]
    [0 0 1 0 | 0 0 1 0]      [0 0 1 0 | 0 0  1 0]
    [0 0 0 1 | 0 0 0 1]      [0 0 0 1 | 0 0  0 1]

    (using R₂* = R₂ + (−k)R₃). Hence

          [1 0  0 0]
    A⁻¹ = [0 1 −k 0].
          [0 0  1 0]
          [0 0  0 1]


12. Starting with

              [1 0 1 1 | 1 0 0 0]
    [A | I] = [0 0 1 0 | 0 1 0 0],
              [1 1 1 0 | 0 0 1 0]
              [1 0 0 2 | 0 0 0 1]

    the operations R₃* = R₃ + (−1)R₁ and R₄* = R₄ + (−1)R₁, a row interchange, and
    back-substitution through columns 4, 3, and 2 reduce the left half to I, so

          [ 2 −2 0 −1]
    A⁻¹ = [−2  1 1  1].
          [ 0  1 0  0]
          [−1  1 0  1]

13. Starting with the augmented matrix

              [ 1 0 0 0 | 1 0 0 0]
    [A | I] = [ 0 1 0 0 | 0 1 0 0]
              [ 0 1 2 0 | 0 0 1 0]
              [−1 1 3 3 | 0 0 0 1]

    and reducing (R₄* = R₄ + R₁; then R₃* = R₃ + (−1)R₂ and R₄* = R₄ + (−1)R₂;
    R₃* = (1/2)R₃; R₄* = R₄ + (−3)R₃; and finally R₄* = (1/3)R₄) gives [I | A⁻¹].
    Hence

          [ 1    0    0    0 ]
    A⁻¹ = [ 0    1    0    0 ].
          [ 0 −1/2  1/2    0 ]
          [1/3  1/6 −1/2  1/3]

SECTION 3.3 The Inverse of a Matrix 239

14. [A | I] = [0 1 2 1 | 1 0 0 0; 4 0 1 2 | 0 1 0 0; 0 1 0 0 | 0 0 1 0; 0 2 0 1 | 0 0 0 1]

R1 ↔ R2:
[4 0 1 2 | 0 1 0 0; 0 1 2 1 | 1 0 0 0; 0 1 0 0 | 0 0 1 0; 0 2 0 1 | 0 0 0 1]

R3 ↔ R2:
[4 0 1 2 | 0 1 0 0; 0 1 0 0 | 0 0 1 0; 0 1 2 1 | 1 0 0 0; 0 2 0 1 | 0 0 0 1]

R1* = (1/4)R1, R3* = −R2 + R3, R4* = −2R2 + R4:
[1 0 1/4 1/2 | 0 1/4 0 0; 0 1 0 0 | 0 0 1 0; 0 0 2 1 | 1 0 −1 0; 0 0 0 1 | 0 0 −2 1]

R3* = (1/2)R3:
[1 0 1/4 1/2 | 0 1/4 0 0; 0 1 0 0 | 0 0 1 0; 0 0 1 1/2 | 1/2 0 −1/2 0; 0 0 0 1 | 0 0 −2 1]

R1* = −(1/2)R4 + R1, R3* = −(1/2)R4 + R3:
[1 0 1/4 0 | 0 1/4 1 −1/2; 0 1 0 0 | 0 0 1 0; 0 0 1 0 | 1/2 0 1/2 −1/2; 0 0 0 1 | 0 0 −2 1]

R1* = −(1/4)R3 + R1:
[1 0 0 0 | −1/8 1/4 7/8 −3/8; 0 1 0 0 | 0 0 1 0; 0 0 1 0 | 1/2 0 1/2 −1/2; 0 0 0 1 | 0 0 −2 1]

Hence A⁻¹ = [−1/8 1/4 7/8 −3/8; 0 0 1 0; 1/2 0 1/2 −1/2; 0 0 −2 1].
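A quick numerical spot-check of this Gauss–Jordan result is easy to do. The following sketch assumes NumPy is available (the array values are copied from Problem 14; nothing else here comes from the text):

```python
import numpy as np

# Matrix of Problem 14 and the inverse found by Gauss-Jordan elimination.
A = np.array([[0., 1, 2, 1],
              [4., 0, 1, 2],
              [0., 1, 0, 0],
              [0., 2, 0, 1]])
A_inv = np.array([[-1/8, 1/4, 7/8, -3/8],
                  [   0,   0,   1,    0],
                  [ 1/2,   0, 1/2, -1/2],
                  [   0,   0,  -2,    1]])

# Both products must equal the 4 x 4 identity matrix.
print(np.allclose(A @ A_inv, np.eye(4)))  # True
print(np.allclose(A_inv @ A, np.eye(4)))  # True
```

`np.linalg.inv(A)` returns the same matrix up to floating-point rounding.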


Inverse of the 2 × 2 Matrix

15. Verify A⁻¹A = AA⁻¹ = I for A⁻¹ = (1/(ad − bc))[d −b; −c a]. We have

A⁻¹A = (1/(ad − bc))[d −b; −c a][a b; c d] = (1/(ad − bc))[ad − bc 0; 0 ad − bc] = [1 0; 0 1] = I
AA⁻¹ = [a b; c d](1/(ad − bc))[d −b; −c a] = (1/(ad − bc))[ad − bc 0; 0 ad − bc] = [1 0; 0 1] = I

Note that we must have |A| = ad − bc ≠ 0.

Brute Force

16. To find the inverse of [1 3; 1 2], we seek the matrix [a b; c d] that satisfies
[a b; c d][1 3; 1 2] = [1 0; 0 1].
Multiplying this out we get the equations
a + b = 1
3a + 2b = 0
c + d = 0
3c + 2d = 1.
The top two equations involve a and b, and the bottom two involve c and d, so we write the two systems
[1 1; 3 2][a; b] = [1; 0],  [1 1; 3 2][c; d] = [0; 1].
Solving each system, we get
[a; b] = [1 1; 3 2]⁻¹[1; 0] = [−2 1; 3 −1][1; 0] = [−2; 3]
[c; d] = [1 1; 3 2]⁻¹[0; 1] = [−2 1; 3 −1][0; 1] = [1; −1].
Because a and b are the elements in the first row of A⁻¹, and c and d are the elements in the second row, we have
A⁻¹ = [1 3; 1 2]⁻¹ = [−2 3; 1 −1].
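The 2 × 2 adjugate formula from Problem 15 can be coded directly; applied to the matrix of Problem 16 it reproduces the inverse found by brute force. (The function name `inv2x2` is our own, not from the text.)

```python
def inv2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via (1/(ad - bc)) * [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: matrix is not invertible")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inv2x2(1, 3, 1, 2))  # [[-2.0, 3.0], [1.0, -1.0]]
```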


Finding Counterexamples

17. No. Consider
[1 0; 0 1] + [−1 0; 0 −1] = [0 0; 0 0],
which is not invertible.

18. No. Consider
[0 0; 0 1][0 0; 0 1] = [0 0; 0 1].

Unique Inverse

19. We show that if B and C are both inverses of A, then B = C. Because B is an inverse of A, we can write BA = I. If we now multiply both sides on the right by C, we get
(BA)C = IC = C.
But we also have
(BA)C = B(AC) = BI = B, so B = C.

Invertible Matrix Method

20. Using the inverse found in Problem 6 yields
x = A⁻¹b = [−5 3; 2 −1][−4; 10] = [50; −18].

Solution by Invertible Matrix

21. Using the inverse found in Problem 7 yields [x; y; z] = A⁻¹b.

More Solutions by Invertible Matrices

22. A = [1 −1 −1; 1 1 0; 1 2 1]. Use row reduction to obtain
A⁻¹ = [1 −1 1; −1 2 −1; 1 −3 2],
so
x = A⁻¹b = [1 −1 1; −1 2 −1; 1 −3 2][4; 1; 0] = [3; −2; 1].


23. A = [4 3 −2; 5 6 0; 3 5 2]. Use row reduction to obtain
A⁻¹ = [3 −4 3; −5/2 7/2 −5/2; 7/4 −11/4 9/4],
so
x = A⁻¹b = A⁻¹[0; 10; 2] = [0 − 40 + 6; 0 + 35 − 5; (0 − 110 + 18)/4] = [−34; 30; −23].

Noninvertible 2 × 2 Matrices

24. If we reduce A = [a b; c d] to RREF we get
[1 b/a; 0 (ad − bc)/a],
which says that the matrix is invertible when (ad − bc)/a ≠ 0, or equivalently when ad ≠ bc.

Matrix Algebra with Inverses

25. (AB⁻¹)⁻¹ = (B⁻¹)⁻¹A⁻¹ = BA⁻¹

26. (B²A²)⁻¹ = ((BB)(AA))⁻¹ = (AA)⁻¹(BB)⁻¹ = A⁻¹A⁻¹B⁻¹B⁻¹ = A⁻²B⁻²
(where A⁻² means A⁻¹A⁻¹)

27. Suppose A(BA)⁻¹x = b. Then
x = [A(BA)⁻¹]⁻¹b = [(BA)⁻¹]⁻¹A⁻¹b = (BA)A⁻¹b = B(AA⁻¹)b = Bb.

28. (AB⁻¹)(BA⁻¹) = A(B⁻¹B)A⁻¹ = AA⁻¹ = I, and likewise (BA⁻¹)(AB⁻¹) = B(A⁻¹A)B⁻¹ = BB⁻¹ = I, so (AB⁻¹)⁻¹ = BA⁻¹.

Question of Invertibility

29. To solve (A + B)x = b requires that A + B be invertible. Then
(A + B)⁻¹(A + B)x = (A + B)⁻¹b,
so that x = (A + B)⁻¹b.


Cancellation Works

30. Given that AB = AC and A is invertible, we premultiply by A⁻¹, getting
A⁻¹AB = A⁻¹AC
IB = IC
B = C.

An Inverse

31. If A is an invertible matrix and AB = I, then we can premultiply each side of the equation by A⁻¹, getting
A⁻¹(AB) = A⁻¹I
(A⁻¹A)B = A⁻¹
B = A⁻¹.

Making Invertible Matrices

32. |1 0 k; 0 1 0; 0 0 1| = 1, so k may be any number.

33. |1 0 k; 0 1 0; k 0 1| = 1 − k², so the matrix is invertible provided k ≠ ±1.

Products and Noninvertibility

34. a) Let A and B be n × n matrices such that BA = I_n.
First we will show that A⁻¹ exists by showing that Ax = 0 has the unique solution x = 0.
Suppose Ax = 0. Then BAx = B0 = 0, so I_n x = 0 and x = 0, so that A⁻¹ exists.
From BA = I_n, postmultiplying by A⁻¹ gives BAA⁻¹ = I_n A⁻¹, so B = A⁻¹. Therefore AB = AA⁻¹ = I_n.

b) Let A, B be n × n matrices such that AB is invertible. We will show that A must be invertible.
AB invertible means that AB(AB)⁻¹ = I_n, so that A[B(AB)⁻¹] = I_n. By Problem 34a, [B(AB)⁻¹]A = I_n, so that A is invertible.


Invertibility of Diagonal Matrices

35. Proof for (⇒) (contrapositive): Suppose D is a diagonal matrix with one diagonal element equal to zero, say a_ii = 0. Then D has a row of zeros, and consequently RREF(D) has at least one row of zeros. Therefore, D is not invertible.
Proof for (⇐): Let D be a diagonal matrix such that every a_ii ≠ 0. Then the diagonal matrix B = [b_ii] with b_ii = 1/a_ii is D⁻¹. That is,
[a_11 ⋯ 0; ⋱; 0 ⋯ a_nn][1/a_11 ⋯ 0; ⋱; 0 ⋯ 1/a_nn] = I_n.

Invertibility of Triangular Matrices

36. Proof for (⇒) (contrapositive): Let T be an upper triangular matrix with at least one diagonal element equal to zero, say a_jj = 0. Then there is one column without a pivot, so RREF(T) has a zero row. Consequently T is not invertible.
Proof for (⇐): Let T be an upper triangular n × n matrix with no zero diagonal elements. Then every column is a pivot column, so RREF(T) = I_n. Therefore T is invertible.

Inconsistency

37. If Ax = b is inconsistent for some vector b, then A⁻¹ does not exist, because if A⁻¹ did exist, then the solution x = A⁻¹b would exist for every b, which would be a contradiction.

Inverse of an Inverse

38. To prove: If A is invertible, so is A⁻¹.
Proof: Let A be an invertible n × n matrix. Then A⁻¹ exists, so that
AA⁻¹ = I_n and A⁻¹A = I_n.
Hence A = (A⁻¹)⁻¹ by the definition of inverse and the fact that inverses are unique (3.3 Problem 19).

Inverse of a Transpose

39. To prove: If A is invertible, so is Aᵀ, and (Aᵀ)⁻¹ = (A⁻¹)ᵀ.
Proof: Let A be an invertible n × n matrix. Then
(Aᵀ)(A⁻¹)ᵀ = (A⁻¹A)ᵀ = I_nᵀ = I_n
because (Aᵀ)ᵀ = A and (AB)ᵀ = BᵀAᵀ, and similarly
(A⁻¹)ᵀAᵀ = (AA⁻¹)ᵀ = I_nᵀ = I_n.
Therefore (Aᵀ)⁻¹ = (A⁻¹)ᵀ.

Elementary Matrices

40. (a) E_int = [0 1 0; 1 0 0; 0 0 1]  (b) E_repl = [1 0 0; 0 1 0; 0 k 1]  (c) E_scale = [1 0 0; 0 k 0; 0 0 1]


Invertibility of Elementary Matrices

41. Because the inverse of any elementary row operation is itself an elementary row operation, and because elementary matrices are constructed from elementary row operations starting with the identity matrix, we can convert any elementary matrix back to the identity matrix by elementary row operations.
For example, the inverse of E_int can be found by performing the operation R1 ↔ R2 on the augmented matrix
[E_int | I] = [0 1 0 | 1 0 0; 1 0 0 | 0 1 0; 0 0 1 | 0 0 1] → [1 0 0 | 0 1 0; 0 1 0 | 1 0 0; 0 0 1 | 0 0 1].
Hence E_int⁻¹ = [0 1 0; 1 0 0; 0 0 1]. In other words, E_int⁻¹ = E_int. We leave finding E_repl⁻¹ and E_scale⁻¹ for the reader.

Similar Matrices

42. Pick P as the identity matrix.

43. If B ~ A, then there exists a nonsingular matrix P such that B = P⁻¹AP. Premultiplying by P and postmultiplying by P⁻¹ gives
A = PBP⁻¹ = (P⁻¹)⁻¹B(P⁻¹),
which shows that A is similar to B.

44. Suppose C ~ A and C ~ B. Then there exist invertible matrices P_A and P_B with
C = P_A⁻¹ A P_A = P_B⁻¹ B P_B,
so
A = P_A(P_B⁻¹ B P_B)P_A⁻¹ = (P_B P_A⁻¹)⁻¹ B (P_B P_A⁻¹).
Let Q = P_B P_A⁻¹. Therefore A = Q⁻¹BQ, so A ~ B.

45. Informal Discussion
Bⁿ = (P⁻¹AP)(P⁻¹AP) ⋯ (P⁻¹AP)   (n factors)
By generous application of the associative property of matrix multiplication we obtain
Bⁿ = P⁻¹A(PP⁻¹)A(PP⁻¹) ⋯ (PP⁻¹)AP = P⁻¹AⁿP
by the facts that PP⁻¹ = I and AI = A.

Induction Proof
To prove: Bⁿ = P⁻¹AⁿP for all positive integers n.
Pf: 1) B¹ = P⁻¹AP by definition of B.
2) Assume for some k: Bᵏ = P⁻¹AᵏP.
Now for k + 1:
Bᵏ⁺¹ = BBᵏ = (P⁻¹AP)(P⁻¹AᵏP)
     = (P⁻¹A)(PP⁻¹)(AᵏP)
     = (P⁻¹A)I(AᵏP)
     = P⁻¹AAᵏP = P⁻¹Aᵏ⁺¹P.
So the case for k implies the case for k + 1.
By Mathematical Induction, Bⁿ = P⁻¹AⁿP for all n.
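The identity Bⁿ = P⁻¹AⁿP is easy to confirm numerically. A minimal sketch with NumPy; the particular A and P below are hypothetical choices for illustration, not from the text:

```python
import numpy as np

A = np.array([[2., 1.], [0., 3.]])
P = np.array([[1., 1.], [1., 2.]])   # det P = 1, so P is invertible
P_inv = np.linalg.inv(P)

B = P_inv @ A @ P                    # B is similar to A
n = 5
lhs = np.linalg.matrix_power(B, n)   # B^n
rhs = P_inv @ np.linalg.matrix_power(A, n) @ P   # P^(-1) A^n P
print(np.allclose(lhs, rhs))  # True
```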

46. True/False Questions
a) True. If all diagonal elements are nonzero, then every column has a pivot and the matrix is invertible. If a diagonal element is zero, then the corresponding column is not a pivot column, so the matrix is not invertible.
b) True. Same argument as a).
c) False. Consider this example:
A = [1 0; 0 2], B = [0 1; 0 0], A⁻¹ = [1 0; 0 1/2].
Then
[1 0; 0 2][0 1; 0 0][1 0; 0 1/2] = [0 1/2; 0 0] ≠ B.

Leontief Model

47. T = [0.5 0; 0 0.5], d = [10; 10].
The basic equation is
Total Output = External Demand + Internal Demand,
so we have
[x1; x2] = [10; 10] + [0.5 0; 0 0.5][x1; x2].
Solving these equations yields x1 = x2 = 20. This should be obvious because for every 20 units of product each industry produces, 10 goes back into the industry to produce the other 10.

48. T = [0 0.1; 0.2 0], d = [10; 10].
The basic equation is
Total Output = External Demand + Internal Demand,
so we have
[x1; x2] = [10; 10] + [0 0.1; 0.2 0][x1; x2].
Solving these equations yields x1 = 11.2, x2 = 12.2.


49. T = [0.2 0.5; 0.5 0.2], d = [10; 10].
The basic equation is
Total Output = External Demand + Internal Demand,
so we have
[x1; x2] = [10; 10] + [0.2 0.5; 0.5 0.2][x1; x2].
Solving these equations yields x1 = 33 1/3, x2 = 33 1/3.

50. T = [0.5 0.2; 0.1 0.3], d = [50; 50].
The basic equation is
Total Output = External Demand + Internal Demand,
so we have
[x1; x2] = [50; 50] + [0.5 0.2; 0.1 0.3][x1; x2].
Solving these equations yields x1 = 136.4, x2 = 90.9.

How Much Is Left Over?

51. The basic demand equation is
Total Output = External Demand + Internal Demand,
so we have
[150; 250] = [d1; d2] + [0.3 0.4; 0.5 0.3][150; 250].
Solving for d1, d2 yields d1 = 5, d2 = 100.

Israeli Economy

52. (a) I − T = [0.70 0.00 0.00; −0.10 0.80 −0.20; −0.05 −0.01 0.98]
(b) (I − T)⁻¹ = [1.43 0.00 0.00; 0.20 1.25 0.26; 0.07 0.01 1.02]
(c) x = (I − T)⁻¹d = [1.43 0.00 0.00; 0.20 1.25 0.26; 0.07 0.01 1.02][140,000; 20,000; 2,000] = [$200,200; $53,520; $12,040]
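Each of the Leontief problems above reduces to solving (I − T)x = d. A short sketch with NumPy (an assumed library; the data are those of Problem 48):

```python
import numpy as np

T = np.array([[0.0, 0.1],
              [0.2, 0.0]])   # internal-demand matrix (Problem 48)
d = np.array([10.0, 10.0])   # external demand

# Total output satisfies x = d + T x, i.e. (I - T) x = d.
x = np.linalg.solve(np.eye(2) - T, d)
print(np.round(x, 1))  # approximately [11.2, 12.2]
```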

Suggested Journal Entry

53. Student Project


3.4

Determinants and Cramer’s Rule

Calculating Determinants

1. Expanding by cofactors down the first column we get
|0 7 9; 2 1 −1; 5 6 2| = 0·|1 −1; 6 2| − 2·|7 9; 6 2| + 5·|7 9; 1 −1| = −2(−40) + 5(−16) = 0.

2. Expanding by cofactors across the middle row we get
|1 2 3; 0 1 0; 1 0 −3| = −0·|2 3; 0 −3| + 1·|1 3; 1 −3| − 0·|1 2; 1 0| = −6.

3. Expanding by cofactors down the third column we get
|1 3 0 2; 0 −1 1 5; 1 −1 1 7; 1 1 0 6| = −1·|1 3 2; 1 −1 7; 1 1 6| + 1·|1 3 2; 0 −1 5; 1 1 6| = −(−6) + 6 = 6 + 6 = 12.

4. Expanding by cofactors across the third row we get
|1 −4 −2 2; 4 7 3 −5; −3 0 8 0; 5 1 6 9| = −3·|−4 −2 2; 7 3 −5; 1 6 9| + 8·|1 −4 2; 4 7 −5; 5 1 9| = (−3)(−14) + 8(250) = 2042.

5. By row reduction, we can write
|1 1 1; 2 2 2; 3 3 3| = (2)(3)|1 1 1; 1 1 1; 1 1 1| = 0,
since the rows of the last determinant are identical.

6. |0 0 1; 0 2 1; 3 1 1| = |0 2; 3 1| = −6

SECTION 3.4 Determinants and Cramer’s Rule 249

7. Using row reduction:
|1 2 −2 −4; −2 2 2 −2; 2 1 1 2; −1 −4 −4 −2|
= |1 2 −2 −4; 0 3 3 0; 2 1 1 2; 0 −2 −6 −6|   (by row operations R2* = R3 + R2, R4* = R1 + R4)
= |1 2 −2 −4; 0 3 3 0; 0 −3 5 10; 0 −2 −6 −6|   (by row operation R3* = −2R1 + R3)
= 1·|3 3 0; −3 5 10; −2 −6 −6|   (expanding down the first column)
= 3·|5 10; −6 −6| − 3·|−3 10; −2 −6|
= 3(30) − 3(38)
= −24

Find the Properties

8. Subtract the first row from the second row in the matrix in the first determinant to get the matrix

in the second determinant.

9. Factor out 3 from the second row of the matrix in the first determinant to get the matrix in the

second determinant.

10. Interchange the two rows of the matrix.

Basketweave for 3 × 3

11. Direct computation as in Problems 1–4.

12. |0 7 9; 2 1 −1; 5 6 2| = 0 − 35 + 108 − 45 − 0 − 28 = 0

13. |1 2 3; 0 1 0; 1 0 −3| = −3 + 0 + 0 − 3 − 0 − 0 = −6


14. By an extended basketweave hypothesis,
|0 1 1 0; 1 1 0 1; 0 0 0 1; 0 1 1 0| = 0 + 0 + 0 + 0 − 0 − 0 − 1 − 0 = −1.
However, the determinant is clearly 0 (because row 1 equals row 4), so the basketweave method does not generalize to dimensions higher than 3.

Triangular Determinants

15. We verify this for 4 × 4 matrices. Higher-order matrices follow along the same lines. Given the upper-triangular matrix
A = [a11 a12 a13 a14; 0 a22 a23 a24; 0 0 a33 a34; 0 0 0 a44],
we expand down the first column, getting
|A| = a11·|a22 a23 a24; 0 a33 a34; 0 0 a44| = a11 a22·|a33 a34; 0 a44| = a11 a22 a33 a44.

Think Diagonal

16. The matrix is upper triangular, hence the determinant is the product of the diagonal elements:
|3 −4 0; 0 7 6; 0 0 −5| = (3)(7)(−5) = −105.

17. The matrix is a diagonal matrix, hence the determinant is the product of the diagonal elements:
|−4 0 0; 0 3 0; 0 0 1/2| = (−4)(3)(1/2) = −6.

18. The matrix is lower triangular, hence the determinant is the product of the diagonal elements:
|1 0 0 0; 3 4 0 0; 0 5 −1 0; 11 0 −2 2| = (1)(4)(−1)(2) = −8.


19. The matrix is upper triangular, hence the determinant is the product of the diagonal elements:
|6 −22 0 3; 0 −1 0 4; 0 0 13 0; 0 0 0 4| = (6)(−1)(13)(4) = −312.

Invertibility

20. Not invertible if
|k 1 0; 0 k 4; 0 1 k| = k(k² − 4) = 0,
i.e., if k(4 − k²) = 0, so that k = 0 or k = ±2.
Invertible if k ≠ 0 and k ≠ ±2.

21. Not invertible if
|k 1; k k| = −k + k² = k(k − 1) = 0.
Invertible if k ≠ 0 and k ≠ 1.

22. Not invertible if
|1 0 m; 0 1 0; k 0 1| = 1 − km = 0, i.e. km = 1, k = 1/m.
Invertible if km ≠ 1.

Invertibility Test

23. The matrix does not have an inverse because its determinant is zero.

24. The matrix has an inverse because its determinant is nonzero.

25. The matrix has an inverse because its determinant is nonzero.

26. The matrix has an inverse because its determinant is nonzero.


Product Verification

27. A = [1 2; 3 4], B = [1 0; 1 1], AB = [3 2; 7 4]
|A| = −2, |B| = 1, |AB| = −2.
Hence |AB| = |A||B|.

28. A = [0 1 0; 1 0 0; 1 2 2] ⇒ |A| = −2
B = [1 2 3; −1 2 0; 0 1 −1] ⇒ |B| = −7
AB = [−1 2 0; 1 2 3; −1 8 1] ⇒ |AB| = 14
Hence |AB| = |A||B|.

Determinant of an Inverse

29. We have
1 = |I| = |AA⁻¹| = |A||A⁻¹|,
and hence |A⁻¹| = 1/|A|.

Do Determinants Commute?

30. |AB| = |A||B| = |B||A| = |BA|, because |A||B| is a product of real or complex numbers.

Determinant of Similar Matrices

31. The key to the proof lies in the determinant of a product of matrices. If A = P⁻¹BP, we use the general properties |A⁻¹| = 1/|A| and |AB| = |A||B|, and write
|A| = |P⁻¹BP| = |P⁻¹||B||P| = (1/|P|)|B||P| = |B|.


Determinant of Aⁿ

32. (a) If |Aⁿ| = 0 for some integer n, we have |A|ⁿ = |Aⁿ| = 0. Because |A|ⁿ is a product of real or complex numbers, |A| = 0. Hence, A is noninvertible.
(b) If |Aⁿ| ≠ 0 for some integer n, then |A|ⁿ = |Aⁿ| ≠ 0 for that n. This implies |A| ≠ 0, so A is invertible. In other words, for every matrix A, either |Aⁿ| = 0 for all positive integers n or it is never zero.

Determinants of Sums

33. An example is A = [1 0; 0 1], B = [−1 0; 0 −1], so
A + B = [0 0; 0 0],
which has the determinant |A + B| = 0, whereas |A| = |B| = 1, so |A| + |B| = 2. Hence |A + B| ≠ |A| + |B|.

Determinants of Sums Again

34. Letting A = [1 1; 0 0], B = [−1 −1; 0 0], we get
A + B = [0 0; 0 0].
Thus |A + B| = 0. Also, we have |A| = 0, |B| = 0, so |A| + |B| = 0. Hence |A + B| = |A| + |B|.


Scalar Multiplication

35. For a 2 × 2 matrix, we see
|ka11 ka12; ka21 ka22| = k²a11a22 − k²a21a12 = k²|a11 a12; a21 a22|.
For an n × n matrix A, we can factor a k out of each row, getting |kA| = kⁿ|A|.

Inversion by Determinants

36. Given the matrix
A = [1 0 2; 2 2 3; 1 1 1],
the matrix of minors can easily be computed and is
M = [−1 −1 0; −2 −1 1; −4 −1 2].
The matrix of cofactors, which we get by multiplying the minors by (−1)^(i+j), is given by
C = [−1 1 0; 2 −1 −1; −4 1 2].
Taking the transpose of this matrix gives
Cᵀ = [−1 2 −4; 1 −1 1; 0 −1 2].
Computing the determinant of A, we get |A| = −1. Hence, we have the inverse
A⁻¹ = (1/|A|)Cᵀ = [1 −2 4; −1 1 −1; 0 1 −2].

Determinants of Elementary Matrices

37. (a) If we interchange the rows of the 2 × 2 identity matrix, we change the sign of the determinant because
|1 0; 0 1| = 1,  |0 1; 1 0| = −1.
For a 3 × 3 matrix, if we interchange the first and second rows, we get
|0 1 0; 1 0 0; 0 0 1| = −1.
You can verify yourself that if any two rows of the 3 × 3 identity matrix are interchanged, the determinant is −1.
For a 4 × 4 matrix, suppose the ith and jth rows are interchanged and that we compute the determinant by expanding by minors across one of the rows that was not interchanged. (We can always do this.) The determinant is then
|A| = a11 M11 − a12 M12 + a13 M13 − a14 M14.
But the minors M11, M12, M13, M14 are 3 × 3 matrices, and we know each of these determinants is −1 because each of these matrices is a 3 × 3 elementary matrix with two rows changed from the identity matrix. Hence, we know 4 × 4 matrices with two rows interchanged from the identity matrix have determinant −1. The idea is to proceed inductively from 4 × 4 matrices to 5 × 5 matrices and so on.

(b) The matrix
[1 0 0; k 1 0; 0 0 1]
shows what happens to the 3 × 3 identity matrix if we add k times the 1st row to the 2nd row. If we expand this matrix by minors across any row, we see that the determinant is the product of the diagonal elements and hence 1. For the general n × n matrix, adding k times the ith row to the jth row places a k in the ji position of the matrix, with all other entries looking like the identity matrix. This matrix is a triangular matrix, and its determinant is the product of the elements on the diagonal, or 1.

(c) Multiplying a row, say the first row, of the 3 × 3 identity matrix by k gives
[k 0 0; 0 1 0; 0 0 1],
and expanding by minors across any row gives a determinant of k. Higher-order matrices give the same result.

Determinant of a Product

38. (a) If A is not invertible then |A| = 0. If A is not invertible then neither is AB, so |AB| = 0. Hence |AB| = |A||B| because both sides of the equation are zero.
(b) We first show that |EA| = |E||A| for elementary matrices E. An elementary matrix is one that results from changing the identity matrix by one of the three elementary operations, so there are three kinds of elementary matrices. In the case when E results from multiplying a row of the identity matrix I by a constant k, we have
EA = [1 0 ⋯ 0; 0 k ⋯ 0; ⋯; 0 0 ⋯ 1][a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋯; an1 an2 ⋯ ann] = [a11 a12 ⋯ a1n; ka21 ka22 ⋯ ka2n; ⋯; an1 an2 ⋯ ann],
so |EA| = k|A| = |E||A|.
In those cases when E results from interchanging two rows of the identity or from adding a multiple of one row to another row, the verification follows along the same lines.
Now if A is invertible it can be written as the product of elementary matrices
A = E_p E_{p−1} ⋯ E_1.
If we postmultiply this equation by B, we get AB = E_p E_{p−1} ⋯ E_1 B, so
|AB| = |E_p E_{p−1} ⋯ E_1 B| = |E_p||E_{p−1}| ⋯ |E_1||B| = |A||B|.
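The product rule |AB| = |A||B| can be spot-checked numerically on the matrices of Problem 28 (NumPy assumed):

```python
import numpy as np

A = np.array([[0., 1, 0], [1, 0, 0], [1, 2, 2]])
B = np.array([[1., 2, 3], [-1, 2, 0], [0, 1, -1]])

det_A = round(np.linalg.det(A))      # -2
det_B = round(np.linalg.det(B))      # -7
det_AB = round(np.linalg.det(A @ B))
print(det_A, det_B, det_AB, det_AB == det_A * det_B)  # -2 -7 14 True
```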

Cramer’s Rule

39. x + 2y = 2
    2x + 5y = 0
To solve this system we write it in matrix form as
[1 2; 2 5][x; y] = [2; 0].
Using Cramer’s rule, we compute the determinants
|A| = |1 2; 2 5| = 1, |A1| = |2 2; 0 5| = 10, |A2| = |1 2; 2 0| = −4.
Hence, the solution is
x = |A1|/|A| = 10, y = |A2|/|A| = −4.

40. λx + y = 1
    x + 2y = 1
To solve this system we write it in matrix form as
[λ 1; 1 2][x; y] = [1; 1].
Using Cramer’s rule, we compute the determinants
|A| = |λ 1; 1 2| = 2λ − 1, |A1| = |1 1; 1 2| = 1, |A2| = |λ 1; 1 1| = λ − 1.
Hence, the solution is
x = |A1|/|A| = 1/(2λ − 1), y = |A2|/|A| = (λ − 1)/(2λ − 1).


41. x + y + 3z = 5
    2y + 5z = 7
    x + 2z = 3
To solve this system, we write it in matrix form as
[1 1 3; 0 2 5; 1 0 2][x; y; z] = [5; 7; 3].
Using Cramer’s rule, we compute the determinants
|A| = |1 1 3; 0 2 5; 1 0 2| = 3, |A1| = |5 1 3; 7 2 5; 3 0 2| = 3, |A2| = |1 5 3; 0 7 5; 1 3 2| = 3, |A3| = |1 1 5; 0 2 7; 1 0 3| = 3.
All determinants are 3, so
x = y = z = 3/3 = 1.

42. x1 + 2x2 − x3 = 6
    3x1 + 8x2 + 9x3 = 10
    2x1 − x2 + 2x3 = −2
To solve this system, we write it in matrix form as
[1 2 −1; 3 8 9; 2 −1 2][x1; x2; x3] = [6; 10; −2].
Using Cramer’s rule, we compute the determinants
|A| = 68, |A1| = |6 2 −1; 10 8 9; −2 −1 2| = 68, |A2| = |1 6 −1; 3 10 9; 2 −2 2| = 136, |A3| = |1 2 6; 3 8 10; 2 −1 −2| = −68.
Hence, the solution is
x1 = 68/68 = 1, x2 = 136/68 = 2, x3 = −68/68 = −1.
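Cramer’s rule as used in Problems 39–42 is mechanical enough to code. A sketch (NumPy assumed; the helper name `cramer` is our own), checked on Problem 42:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule; A must be square with |A| != 0."""
    A = np.asarray(A, dtype=float)
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = [[1, 2, -1], [3, 8, 9], [2, -1, 2]]
b = [6, 10, -2]
print(np.round(cramer(A, b)))  # x1 = 1, x2 = 2, x3 = -1
```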

The Wheatstone Bridge

43. (a) Each equation represents the fact that the sum of the currents into the respective nodes A, B, C, and D is zero. For example:
node A: I − I1 − I2 = 0 ⇒ I = I1 + I2
node B: I1 − Ig − I3 = 0 ⇒ I1 = Ig + I3
node C: −Ix + I2 + Ig = 0 ⇒ Ix = I2 + Ig
node D: I3 + Ix − I = 0 ⇒ I = I3 + Ix.

(b) If a current I flows through a resistance R, then the voltage drop across the resistance is RI. Applying Kirchhoff’s voltage law, the sum of the voltage drops around each of the three circuits is set to zero, giving the desired three equations:
voltage drop around the large circuit: E0 − R1I1 − RxIx = 0,
voltage drop around the upper-left circuit: R1I1 + RgIg − R2I2 = 0,
voltage drop around the upper-right circuit: R3I3 − RxIx − RgIg = 0.

(c) Using the results from part (a) and writing the three currents I3, Ix, and I in terms of I1, I2, Ig gives
I3 = I1 − Ig
Ix = I2 + Ig
I = I1 + I2.
We substitute these into the three given equations to obtain the 3 × 3 linear system for the currents I1, I2, Ig:
[R3 −Rx −(R3 + Rx + Rg); R1 −R2 Rg; R1 Rx Rx][I1; I2; Ig] = [0; 0; E0].
Solving for Ig (we only need to solve for one of the three unknowns) using Cramer’s rule, we find
Ig = |A*|/|A|,
where A* is the coefficient matrix with the Ig-column replaced by the right-hand side:
|A*| = |R3 −Rx 0; R1 −R2 0; R1 Rx E0| = E0(R1Rx − R2R3).
Hence, Ig = 0 if R2R3 = R1Rx. Note: The proof of this result is much easier if we assume the resistance Rg is negligible, and we take it as zero.

Least Squares Derivation

44. Starting with
F(m, k) = Σᵢ₌₁ⁿ [yᵢ − (k + mxᵢ)]²,
we compute the equations ∂F/∂k = 0, ∂F/∂m = 0, yielding
∂F/∂k = Σᵢ₌₁ⁿ 2[yᵢ − (k + mxᵢ)](−1) = 0
∂F/∂m = Σᵢ₌₁ⁿ 2[yᵢ − (k + mxᵢ)](−xᵢ) = 0.
Carrying out a little algebra, we get
kn + m Σ xᵢ = Σ yᵢ
k Σ xᵢ + m Σ xᵢ² = Σ xᵢyᵢ
or in matrix form
[n Σxᵢ; Σxᵢ Σxᵢ²][k; m] = [Σyᵢ; Σxᵢyᵢ].

Alternative Derivation of Least Squares Equations

45. (a) Equation (9) in the text,
k + 1.7m = 1.1
k + 2.3m = 3.1
k + 3.1m = 2.3
k + 4.0m = 3.8,
can be written in matrix form
[1 1.7; 1 2.3; 1 3.1; 1 4.0][k; m] = [1.1; 3.1; 2.3; 3.8],
which is of the form Ax = b.

(b) Given the matrix equation Ax = b, where
A = [1 x1; 1 x2; 1 x3; 1 x4], x = [k; m], b = [y1; y2; y3; y4],
if we premultiply each side of the equation by Aᵀ, we get AᵀAx = Aᵀb, or
[1 1 1 1; x1 x2 x3 x4][1 x1; 1 x2; 1 x3; 1 x4][k; m] = [1 1 1 1; x1 x2 x3 x4][y1; y2; y3; y4]
or
[4 Σxᵢ; Σxᵢ Σxᵢ²][k; m] = [Σyᵢ; Σxᵢyᵢ].


Least Squares Calculation

46. Here we are given the data points
x: 0, 1, 2, 3
y: 1, 1, 3, 3
so
Σ xᵢ = 6, Σ xᵢ² = 14, Σ yᵢ = 8, Σ xᵢyᵢ = 16.
The constants m, k in the least squares line y = mx + k satisfy the equations
[4 6; 6 14][k; m] = [8; 16],
which yields k = m = 0.80. The least squares line is y = 0.8 + 0.8x.
(Figure: the four data points and the least squares line y = 0.8 + 0.8x.)
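The normal equations above can be assembled and solved in a few lines. A sketch with NumPy (an assumed library), using the data of Problem 46:

```python
import numpy as np

x = np.array([0., 1., 2., 3.])
y = np.array([1., 1., 3., 3.])

# Normal equations  [[n, sum x], [sum x, sum x^2]] [k, m]^T = [sum y, sum xy]^T
n = len(x)
M = np.array([[n, x.sum()], [x.sum(), (x**2).sum()]])
rhs = np.array([y.sum(), (x*y).sum()])
k, m = np.linalg.solve(M, rhs)
print(k, m)  # k = m = 0.8, so the least squares line is y = 0.8 + 0.8x
```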

Computer or Calculator

47. To find the least-squares approximation of the form y = k + mx to a set of data points {(xᵢ, yᵢ): i = 1, 2, …, n}, we solve the system
[n Σxᵢ; Σxᵢ Σxᵢ²][k; m] = [Σyᵢ; Σxᵢyᵢ].
Using a spreadsheet to compute the elements of the coefficient matrix and the right-hand-side vector, we get

Spreadsheet to compute least squares
x      y      x^2     xy
1.6    1.7    2.56    2.72
3.2    5.3    10.24   16.96
6.9    5.1    47.61   35.19
8.4    6.5    70.56   54.60
9.1    8.0    82.81   72.80
sum x  sum y  sum x^2 sum xy
29.2   26.6   213.78  182.27

We must solve the system
[5.0 29.20; 29.2 213.78][k; m] = [26.60; 182.27],
getting k = 1.68, m = 0.62. Hence we have the least squares line y = 0.62x + 1.68, whose graph is shown next.
(Figure: the data points and the least squares line y = 0.62x + 1.68.)

48. To find the least-squares approximation of the form y = k + mx, we solve, for a set of data points {(xᵢ, yᵢ): i = 1, 2, …, n}, the system

    [n ∑xᵢ; ∑xᵢ ∑xᵢ²][k; m] = [∑yᵢ; ∑xᵢyᵢ].

Using a spreadsheet to compute the elements of the coefficient matrix and the right-hand-side vector, we get

Spreadsheet to compute least squares

    x       y       x^2       xy
    0.91    1.35    0.8281    1.2285
    1.07    1.96    1.1449    2.0972
    2.56    3.13    6.5536    8.0128
    4.11    5.72    16.8921   23.5092
    5.34    7.08    28.5156   37.8072
    6.25    8.14    39.0625   50.8750
    ------  ------  --------  ---------
    20.24   27.38   92.9968   123.5299   (sums)

We must solve the system

    [6.00 20.2400; 20.24 92.9968][k; m] = [27.3800; 123.5299]

getting k = 0.309, m = 1.26. Hence, the least-squares line is y = 1.26x + 0.309.

[Figure: graph of the least squares line y = 1.26x + 0.309.]
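The same Python sketch applied to the Problem 48 data (again, an addition for checking; the printed values match the k = 0.309, m = 1.26 reported above up to rounding):

```python
# Data points from the spreadsheet in Problem 48.
pts = [(0.91, 1.35), (1.07, 1.96), (2.56, 3.13),
       (4.11, 5.72), (5.34, 7.08), (6.25, 8.14)]

n = len(pts)
sx = sum(x for x, _ in pts)
sy = sum(y for _, y in pts)
sxx = sum(x * x for x, _ in pts)
sxy = sum(x * y for x, y in pts)

# Normal equations [n sx; sx sxx][k; m] = [sy; sxy], solved by Cramer's Rule.
det = n * sxx - sx * sx
k = (sy * sxx - sx * sxy) / det
m = (n * sxy - sx * sy) / det
print(round(k, 2), round(m, 2))  # 0.31 1.26
```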

262 CHAPTER 3 Linear Algebra

Least Squares in Another Dimension

49. We seek the constants α, β₁, and β₂ that minimize

    F(α, β₁, β₂) = ∑ᵢ₌₁ⁿ [yᵢ − (α + β₁Tᵢ + β₂Pᵢ)]².

We write the equations

    ∂F/∂α  = 2 ∑ᵢ₌₁ⁿ [yᵢ − (α + β₁Tᵢ + β₂Pᵢ)](−1) = 0
    ∂F/∂β₁ = 2 ∑ᵢ₌₁ⁿ [yᵢ − (α + β₁Tᵢ + β₂Pᵢ)](−Tᵢ) = 0
    ∂F/∂β₂ = 2 ∑ᵢ₌₁ⁿ [yᵢ − (α + β₁Tᵢ + β₂Pᵢ)](−Pᵢ) = 0.

Simplifying, we get

    [n ∑Tᵢ ∑Pᵢ; ∑Tᵢ ∑Tᵢ² ∑TᵢPᵢ; ∑Pᵢ ∑TᵢPᵢ ∑Pᵢ²][α; β₁; β₂] = [∑yᵢ; ∑Tᵢyᵢ; ∑Pᵢyᵢ].

Solving for α, β₁, and β₂, we get the least-squares plane y = α + β₁T + β₂P.
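A minimal Python sketch of the 3 × 3 normal equations above, using a made-up (hypothetical) data set chosen to lie exactly on the plane y = 1 + 2T + 3P so the fit can be checked:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical data (T, P, y) lying exactly on y = 1 + 2T + 3P.
data = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6), (2, 1, 8)]
n = len(data)
ST = sum(T for T, P, y in data);  SP = sum(P for T, P, y in data)
Sy = sum(y for T, P, y in data)
STT = sum(T * T for T, P, y in data); STP = sum(T * P for T, P, y in data)
SPP = sum(P * P for T, P, y in data)
STy = sum(T * y for T, P, y in data); SPy = sum(P * y for T, P, y in data)

# The normal equations for the least squares plane y = a + b1*T + b2*P.
A = [[n, ST, SP], [ST, STT, STP], [SP, STP, SPP]]
rhs = [Sy, STy, SPy]
a, b1, b2 = gauss_solve(A, rhs)
print(round(a, 6), round(b1, 6), round(b2, 6))  # 1.0 2.0 3.0
```

Because the data are exact, the solver recovers α = 1, β₁ = 2, β₂ = 3.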

Least Squares System Solution

50. Premultiplying each side of the system Ax = b by Aᵀ gives AᵀAx = Aᵀb, or

    [1 −1 0; 1 1 1] [1 1; −1 1; 0 1] [x; y] = [1 −1 0; 1 1 1] [1; 1; 2]

or simply

    [2 0; 0 3][x; y] = [0; 4].

Solving this 2 × 2 system gives x = 0, y = 4/3, which is the least squares approximation to the original system.

[Figure: the lines x + y = 1, −x + y = 1, and y = 2, with the least squares solution (0, 4/3).]
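A Python sketch of the AᵀA x = Aᵀb computation (the rows of A and entries of b are as reconstructed above — an assumption worth noting):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Inconsistent system of Problem 50: x + y = 1, -x + y = 1, y = 2.
A = [[1, 1], [-1, 1], [0, 1]]
b = [[1], [1], [2]]
At = [list(r) for r in zip(*A)]

AtA = matmul(At, A)   # [[2, 0], [0, 3]]
Atb = matmul(At, b)   # [[0], [4]]

# AtA is diagonal here, so the 2x2 system splits into two scalar equations.
x = Atb[0][0] / AtA[0][0]
y = Atb[1][0] / AtA[1][1]
print(x, y)  # 0.0 1.333...
```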

Suggested Journal Entry

51. Student Project

SECTION 3.5 Vector Spaces and Subspaces 263

3.5

Vector Spaces and Subspaces

They Don’t All Look Like Vectors

1. A typical vector is [x, y], with negative [−x, −y]; the zero vector is [0, 0].

2. A typical vector is [x, y, z], with negative [−x, −y, −z]; the zero vector is [0, 0, 0].

3. A typical vector is [a, b, c, d], with negative [−a, −b, −c, −d]; the zero vector is [0, 0, 0, 0].

4. A typical vector is [a, b, c], with negative [−a, −b, −c]; the zero vector is [0, 0, 0].

5. A typical vector is [a b c; d e f], with negative [−a −b −c; −d −e −f]; the zero vector is [0 0 0; 0 0 0].

6. A typical vector is [a b c; d e f; g h i], with negative [−a −b −c; −d −e −f; −g −h −i]; the zero vector is [0 0 0; 0 0 0; 0 0 0].

7. A typical vector is a linear function p(t) = at + b; the zero vector is p ≡ 0, and the negative of p(t) is −p(t).

8. A typical vector is a quadratic function p(t) = at² + bt + c; the zero vector is p ≡ 0, and the negative of p(t) is −p(t).

[Figure: segments of typical vectors p₁(t), p₂(t), p₃(t), p₄(t) in P₂.]

9. A typical vector is a continuous and differentiable function, such as f(t) = sin t, g(t) = t². The zero vector is f₀(t) ≡ 0, and the negative of f(t) is −f(t).

[Figure: graphs of f(t) and g(t).]

10. C²[0, 1]: Typical vectors are continuous and twice differentiable functions such as

    f(t) = sin t, g(t) = t² + t − 2,

and so on. The zero vector is the zero function f₀(t) ≡ 0, and the negative of a typical vector, say h(t) = eᵗ sin t, is −h(t) = −eᵗ sin t.

[Figure: graphs of f(t), g(t), and h(t).]

Are They Vector Spaces?

11. Not a vector space; there is no additive inverse.

12. First octant of space: No, the vectors have no negatives. For example, [1, 3, 3] belongs to the set but [−1, −3, −3] does not.

13. Not a vector space; e.g., the negative of [2, 1] does not lie in the set.

14. Not a vector space; e.g., x² + x and 1 − x² each belongs, but their sum (x² + x) + (1 − x²) = x + 1 does not.

15. Not a vector space since it is not closed under vector addition. See the example for Problem 14.

16. Yes, the vector space of all diagonal 2 × 2 matrices.

17. Not a vector space; the set of 2 × 2 matrices with zero determinant is not closed under vector addition, as indicated by

    [1 0; 1 0] + [0 1; 0 3] = [1 1; 1 3].

18. Not a vector space; the set of all 2 × 2 invertible matrices is not closed under vector addition. For instance,

    [1 0; 0 1] + [−1 0; 0 −1] = [0 0; 0 0].

19. Yes, the vector space of all 3 × 3 upper-triangular matrices.

20. Not a vector space; it does not contain the zero function.

21. Not a vector space; not closed under scalar multiplication; no additive inverse.

22. Yes, the vector space of all differentiable functions on (−∞, ∞).

23. Yes, the vector space of all integrable functions on [0, 1].


A Familiar Vector Space

24. Yes, a vector space. Straightforward verification of the ten commandments of a vector space; that is, the sum of two vectors (real numbers in this case) is a vector (another real number), and the product of a vector by a scalar (another real number) is a real number. The zero vector is the number 0. Every number has a negative. The distributivity and associativity properties are simply properties of the real numbers, and so on.

Not a Vector Space

25. Not a vector space; not closed under scalar multiplication.

DE Solution Space

26. Properties A3, A4, S1, S2, S3, and S4 are basic properties that hold for all functions; in particular,

solutions of a differential equation.

Another Solution Space

27. Yes, the solution space of the linear homogeneous DE y″ + p(t)y′ + q(t)y = 0 is indeed a vector space; for example, for the negative of a solution y,

    (−y)″ + p(t)(−y)′ + q(t)(−y) = −[y″ + p(t)y′ + q(t)y] = 0.

The linearity properties are sufficient to prove all the vector space properties.

The Space C(−∞, ∞)

28. This result follows from basic properties of continuous functions; the sum of continuous functions is continuous, scalar multiples of continuous functions are continuous, the zero function is continuous, the negative of a continuous function is continuous, the distributive properties hold for all functions, and so on.

Vector Space Properties

29. Unique Zero: We prove that if a vector z satisfies v + z = v for any vector v, then z = 0. We can write

    z = z + 0 = z + (v + (−v)) = (z + v) + (−v) = (v + z) + (−v) = v + (−v) = 0.

30. Unique Negative: We show that if v is an arbitrary vector in some vector space, then there is only one vector n (which we call −v) in that space that satisfies v + n = 0. Suppose another vector n* also satisfies v + n* = 0. Then

    n = n + 0 = n + (v + n*) = (n + v) + n* = (v + n) + n* = 0 + n* = n*.

31. Zero as Multiplier: We can write

    0v + v = 0v + 1v = (0 + 1)v = 1v = v.

Hence, by the result of Problem 29 (uniqueness of the zero vector), we can conclude that 0v = 0.

32. Negatives as Multiples: From Problem 30, we know that −v is the only vector that satisfies v + (−v) = 0. Hence, if we write

    (−1)v + v = (−1)v + 1v = (−1 + 1)v = 0v = 0,

we conclude that (−1)v = −v.

A Vector Space Equation

33. Let v be an arbitrary vector and c an arbitrary scalar. Set cv = 0. Then either c = 0 or v = 0. For c ≠ 0,

    v = 1v = (c⁻¹c)v = c⁻¹(cv) = c⁻¹0 = 0,

which proves the result.

Nonstandard Definitions

34. (x₁, y₁) + (x₂, y₂) ≡ (x₁ + x₂, 0) and c(x, y) ≡ (cx, y).

All vector space properties clearly hold for these operations. The set R² with the indicated vector addition and scalar multiplication is a vector space.

35. (x₁, y₁) + (x₂, y₂) ≡ (0, x₂) and c(x, y) ≡ (cx, cy).

Not a vector space because, for example, the new vector addition is not commutative:

    (2, 3) + (4, 5) = (0, 4),  (4, 5) + (2, 3) = (0, 2).

36. (x₁, y₁) + (x₂, y₂) ≡ (x₁ + x₂, y₁ + y₂) and c(x, y) ≡ (cx, √c y).

Not a vector space; for example, (c + d)x ≠ cx + dx. For c = 4, d = 9 and vector x = (x₁, x₂), we have

    (c + d)x = 13(x₁, x₂) = (13x₁, √13 x₂)
    cx + dx = 4(x₁, x₂) + 9(x₁, x₂) = (4x₁, 2x₂) + (9x₁, 3x₂) = (13x₁, 5x₂).

Sifting Subsets for Subspaces

37. W = {(x, y): y = 0} is a subspace of R².

38. W = {(x, y): x² + y² = 1} is not a subspace of R² because it does not contain the zero vector (0, 0). It is also not closed under vector addition and scalar multiplication.

39. W = {(x₁, x₂, x₃): x₃ = 0} is a subspace of R³.

40. W = {p(t): degree of p(t) = 2} is not a subspace of P₂ because it does not contain the zero vector p(t) ≡ 0.

41. W = {p(t): p(0) = 0} is a subspace of P₃.

42. W = {f(t): f(0) = 0} is a subspace of C[0, 1].

43. W = {f(t): f(0) = f(1) = 0} is a subspace of C[0, 1].

44. W = {f(t): ∫ₐᵇ f(t) dt = 0} is a subspace of C[a, b].

45. W = {f(t): f″ + f = 0} is a subspace of C²[0, 1].

46. W = {f(t): f″ + f = 1} is not a subspace of C²[0, 1]. It does not contain the zero vector y(t) ≡ 0. It is also not closed under vector addition and scalar multiplication because the sum of two solutions is not necessarily a solution. For example, y₁ = 1 + sin t and y₂ = 1 + cos t are both solutions, but the sum

    y₁ + y₂ = 2 + sin t + cos t

is not a solution. Likewise 2y₁ = 2 + 2 sin t is not a solution.

47. Not a subspace because x = 0 ∉ W.

48. W is a subspace:

Nonempty: Note that A0 = 0, so 0 ∈ W.

Closure: Suppose x, y ∈ W, so Ax = 0 and Ay = 0. Then

    A(ax + by) = A(ax) + A(by) = aAx + bAy = a0 + b0 = 0 + 0 = 0.

Hyperplanes as Subspaces

49. We select two arbitrary vectors u = [x₁, y₁, z₁, w₁], v = [x₂, y₂, z₂, w₂] from the subset W. Hence, we have

    ax₁ + by₁ + cz₁ + dw₁ = 0
    ax₂ + by₂ + cz₂ + dw₂ = 0.

Adding, we get

    a(x₁ + x₂) + b(y₁ + y₂) + c(z₁ + z₂) + d(w₁ + w₂)
      = (ax₁ + by₁ + cz₁ + dw₁) + (ax₂ + by₂ + cz₂ + dw₂) = 0,

which says that u + v ∈ W. To show ku ∈ W, we must show that the scalar multiple ku = [kx₁, ky₁, kz₁, kw₁] satisfies

    akx₁ + bky₁ + ckz₁ + dkw₁ = 0.

But this follows from

    akx₁ + bky₁ + ckz₁ + dkw₁ = k(ax₁ + by₁ + cz₁ + dw₁) = 0.


Are They Subspaces of R⁴?

50. W = {[a, b, a − b, a + b]: a, b ∈ R} is a subspace.

Nonempty: Let a = b = 0. Then (0, 0, 0, 0) ∈ W.

Closure: Suppose x = [a₁, b₁, a₁ − b₁, a₁ + b₁] and y = [a₂, b₂, a₂ − b₂, a₂ + b₂] ∈ W. Then

    kx + y = [ka₁, kb₁, k(a₁ − b₁), k(a₁ + b₁)] + [a₂, b₂, a₂ − b₂, a₂ + b₂]
           = [ka₁ + a₂, kb₁ + b₂, k(a₁ − b₁) + (a₂ − b₂), k(a₁ + b₁) + (a₂ + b₂)]
           = [ka₁ + a₂, kb₁ + b₂, (ka₁ + a₂) − (kb₁ + b₂), (ka₁ + a₂) + (kb₁ + b₂)] ∈ W

for any k ∈ R, x, y ∈ W.

51. No. [0, 0, 0, 0, 0] ∉ {[a, 0, b, 1, c]: a, b, c ∈ R} because the 4th coordinate ≠ 0 for all a, b, c ∈ R.

52. No. For [a, b, a², b²], the last two coordinates are not linear functions of a and b. Consider [1, 3, 1, 9]. Note that 2[1, 3, 1, 9] is not in the subset; i.e.,

    2[1, 3, 1, 9] = [2, 6, 2, 18] ≠ [2·1, 2·3, (2·1)², (2·3)²] = [2, 6, 4, 36].

Differentiable Subspaces

53. {f(t): f′ = 0}. It is a subspace.

54. {f(t): f′ = 1}. It is not a subspace, because it does not contain the zero vector and is not closed under vector addition. Hence, f(t) = t and g(t) = t + 2 belong to the subset, but (f + g)(t) does not belong. It is also not closed under scalar multiplication. For example, f(t) = t belongs to the subset, but 2f(t) = 2t does not.

55. {f(t): f′ = f}. It is a subspace.

56. {f(t): f′ = f²}. It is not a subspace; e.g., not closed under scalar multiplication. (f may satisfy the equation f′ = f², but 2f will not, since 2f′ ≠ 4f².)

Property Failures

57. The first quadrant (including the coordinate axes) is closed under vector addition, but not scalar multiplication.

58. An example of a set in R² that is closed under scalar multiplication but not under vector addition is a pair of distinct lines passing through the origin.

59. The unit circle is not closed under either vector addition or scalar multiplication.


Solution Spaces of Homogeneous Linear Algebraic Systems

60. x₁ − x₂ + 4x₄ + 2x₅ − x₆ = 0
    2x₁ − 2x₂ + x₃ + 2x₄ + 4x₅ − x₆ = 0

The matrix of coefficients A = [1 −1 0 4 2 −1; 2 −2 1 2 4 −1] has RREF = [1 −1 0 4 2 −1; 0 0 1 −6 0 1], i.e.,

    x₁ − x₂ + 4x₄ + 2x₅ − x₆ = 0
    x₃ − 6x₄ + x₆ = 0.

Let x₂ = r, x₄ = s, x₅ = t, x₆ = u. Then x₁ = r − 4s − 2t + u and x₃ = 6s − u, so

    S = {r[1, 1, 0, 0, 0, 0]ᵀ + s[−4, 0, 6, 1, 0, 0]ᵀ + t[−2, 0, 0, 0, 1, 0]ᵀ + u[1, 0, −1, 0, 0, 1]ᵀ: r, s, t, u ∈ R}.
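A quick Python check (an addition) that each of the four basis vectors of the solution space of Problem 60 satisfies both equations:

```python
# The two equations of Problem 60 as coefficient rows.
eqs = [
    [1, -1, 0, 4, 2, -1],
    [2, -2, 1, 2, 4, -1],
]

# The four basis vectors of the solution space found above.
basis = [
    [1, 1, 0, 0, 0, 0],    # coefficient of r
    [-4, 0, 6, 1, 0, 0],   # coefficient of s
    [-2, 0, 0, 0, 1, 0],   # coefficient of t
    [1, 0, -1, 0, 0, 1],   # coefficient of u
]

for v in basis:
    print([sum(a * x for a, x in zip(eq, v)) for eq in eqs])  # [0, 0] each time
```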

61. 2x₁ − 2x₂ + 4x₃ − 2x₄ = 0
    2x₁ + x₂ + 7x₃ + 4x₄ = 0
    x₁ − 4x₂ − x₃ + 7x₄ = 0
    4x₁ − 12x₂ − 20x₄ = 0

The matrix of coefficients A = [2 −2 4 −2; 2 1 7 4; 1 −4 −1 7; 4 −12 0 −20] has

    RREF(A) = [1 0 3 0; 0 1 1 0; 0 0 0 1; 0 0 0 0],

i.e.,

    x₁ + 3x₃ = 0
    x₂ + x₃ = 0
    x₄ = 0.

Let x₃ = r. Then x₁ = −3r, x₂ = −r, x₃ = r, x₄ = 0, so

    S = {r[−3, −1, 1, 0]ᵀ: r ∈ R}.
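The same kind of Python check (an addition) for Problem 61: the basis vector [−3, −1, 1, 0] satisfies all four original equations.

```python
# The four equations of Problem 61 as coefficient rows.
eqs = [
    [2, -2, 4, -2],
    [2, 1, 7, 4],
    [1, -4, -1, 7],
    [4, -12, 0, -20],
]

v = [-3, -1, 1, 0]  # basis vector of the solution space found above
print([sum(a * x for a, x in zip(eq, v)) for eq in eqs])  # [0, 0, 0, 0]
```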


62. 3x₁ + 6x₃ + 3x₄ + 9x₅ = 0
    x₁ + 3x₂ − 4x₃ − 8x₄ + 3x₅ = 0
    x₁ − 6x₂ + 14x₃ + 19x₄ + 3x₅ = 0

The matrix of coefficients A = [3 0 6 3 9; 1 3 −4 −8 3; 1 −6 14 19 3] has

    RREF = [1 0 2 1 3; 0 1 −2 −3 0; 0 0 0 0 0],

i.e.,

    x₁ + 2x₃ + x₄ + 3x₅ = 0
    x₂ − 2x₃ − 3x₄ = 0,

so x₁ = −2x₃ − x₄ − 3x₅ and x₂ = 2x₃ + 3x₄. Let x₃ = r, x₄ = s, x₅ = t. Then x₁ = −2r − s − 3t and x₂ = 2r + 3s, so

    S = {r[−2, 2, 1, 0, 0]ᵀ + s[−1, 3, 0, 1, 0]ᵀ + t[−3, 0, 0, 0, 1]ᵀ: r, s, t ∈ R}.
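And again for Problem 62 (a Python addition for checking): each basis vector of the solution space satisfies all three original equations.

```python
# The three equations of Problem 62 as coefficient rows.
eqs = [
    [3, 0, 6, 3, 9],
    [1, 3, -4, -8, 3],
    [1, -6, 14, 19, 3],
]

# The basis vectors of the solution space found above.
basis = [
    [-2, 2, 1, 0, 0],   # coefficient of r
    [-1, 3, 0, 1, 0],   # coefficient of s
    [-3, 0, 0, 0, 1],   # coefficient of t
]

for v in basis:
    print([sum(a * x for a, x in zip(eq, v)) for eq in eqs])  # [0, 0, 0] each time
```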

Nonlinear Differential Equations

63. y′ = y². Writing the equation in differential form, we have y⁻² dy = dt. We get the general solution

    y = 1/(c − t).

Hence, from c = 0 and 1, we have two solutions

    y₁(t) = −1/t,  y₂(t) = 1/(1 − t).

But if we compute

    y₁(t) + y₂(t) = −1/t + 1/(1 − t),

it would not be a solution of the DE. So the solution set of this nonlinear DE is not a vector space.
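A small Python check (an addition) that the sum of the two solutions of Problem 63 fails the DE y′ = y² at a sample point:

```python
t = 2.0
y1, dy1 = -1 / t, 1 / t**2              # y1 = -1/t solves y' = y^2
y2, dy2 = 1 / (1 - t), 1 / (1 - t)**2   # y2 = 1/(1-t) solves y' = y^2
assert dy1 == y1**2 and dy2 == y2**2    # each one individually satisfies the DE

s, ds = y1 + y2, dy1 + dy2
print(ds, s**2)  # 1.25 2.25 -- the sum does not satisfy y' = y^2
```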

64. y″ + sin y = 0. Assume that y is a solution of the equation. Hence, we have the equation y″ + sin y = 0. But cy does not satisfy the equation, because

    (cy)″ + sin(cy) ≠ c(y″ + sin y) = 0.

65. y″ + 1/y = 0. From the DE we can see that the zero vector is not a solution, so the solution space of this nonlinear DE is not a vector space.

DE Solution Spaces

66. y′ + 2y = eᵗ. Not a vector space; it doesn't contain the zero vector.


67. y′ + y² = 0. The solutions are y = 1/(t − c), and the sum of two solutions is not a solution, so the set of all solutions of this nonlinear DE does not form a vector space.

68. y″ + ty = 0. If y₁, y₂ satisfy the equation, then

    y₁″ + ty₁ = 0
    y₂″ + ty₂ = 0.

Multiplying by constants c₁, c₂ and adding, we obtain

    c₁(y₁″ + ty₁) + c₂(y₂″ + ty₂) = 0,

which from properties of the derivative is equivalent to

    (c₁y₁ + c₂y₂)″ + t(c₁y₁ + c₂y₂) = 0.

This shows the set of solutions is a vector space.

69. y″ + (1 + sin t)y = 0. If y₁, y₂ satisfy the equation, then

    y₁″ + (1 + sin t)y₁ = 0
    y₂″ + (1 + sin t)y₂ = 0.

Multiplying by constants c₁, c₂ and adding, we have

    c₁[y₁″ + (1 + sin t)y₁] + c₂[y₂″ + (1 + sin t)y₂] = 0,

which from properties of the derivative is equivalent to

    (c₁y₁ + c₂y₂)″ + (1 + sin t)(c₁y₁ + c₂y₂) = 0,

which shows the set of solutions is a vector space. This is true for the solution set of any linear homogeneous DE.

Line of Solutions

70. (a) x = p + th = [0, 1] + t[2, 3] = [2t, 1 + 3t].

Hence, calling x₁, x₂ the coordinates of the vector x = [x₁, x₂], we have x₁ = 2t, x₂ = 1 + 3t.

(b) x = [2, 1, 3] + t[−2, 3, 0].

(c) Showing that solutions of y′ + y = 0 are closed under vector addition is a result of the fact that the sum of two solutions is a solution. The fact that solutions are closed under scalar multiplication is a result of the fact that scalar multiples of solutions are also solutions. The zero vector (zero function) is also a solution, as is the negative of a solution. Computing the solution of the equation gives y(t) = ce⁻ᵗ, which is a scalar multiple of e⁻ᵗ. We will later discuss that this collection of solutions is a one-dimensional vector space.

(d) The solutions of y′ + y = t are given by y(t) = (t − 1) + ce⁻ᵗ. The abstract point of view is a line through the vector t − 1 (remember, functions are vectors here) in the direction of the vector e⁻ᵗ.

(e) The solution of any linear equation Ly = f can be interpreted as a line passing through any particular solution y_p in the direction of any homogeneous solution y_h; that is, y = y_p + c·y_h.

Orthogonal Complements

71. To prove: V⊥ = {u ∈ Rⁿ: u · v = 0 for every v ∈ V} is a subspace of Rⁿ.

Nonempty: 0 · v = 0 for every v ∈ V.

Closure: Let a and b ∈ R and u, w ∈ V⊥. Let v ∈ V. Then

    (au + bw) · v = (au) · v + (bw) · v = a(u · v) + b(w · v) = a(0) + b(0) = 0.

72. To prove: V ∩ V⊥ = {0}.

0 ∈ V and 0 ∈ V⊥, since V is a subspace and 0 · v = 0 for every v ∈ V, so 0 ∈ V⊥. ∴ {0} ⊂ V ∩ V⊥.

Now suppose w ∈ V ∩ V⊥, where w = [w₁, w₂, …, w_n]. Then w · v = 0 for all v ∈ V. However, w ∈ V, so w · w = 0:

    w₁² + w₂² + … + w_n² = 0.

∴ w₁ = w₂ = … = w_n = 0, ∴ w = 0.

Suggested Journal Entry

73. Student Project

SECTION 3.6 Basis and Dimension 273

3.6

Basis and Dimension

The Spin on Spans

1. V = R². Let

    [x, y] = c₁[0, 0] + c₂[1, 1] = c₂[1, 1].

The given vectors do not span R², although they span the one-dimensional subspace {k[1, 1]: k ∈ R}.

2. V = R³. Letting

    [a, b, c] = c₁[1, 0, 0] + c₂[0, 1, 0] + c₃[2, 3, 1]

yields the system of equations

    c₁ + 2c₃ = a
    c₂ + 3c₃ = b
    c₃ = c

or c₃ = c, c₂ = b − 3c₃ = b − 3c, c₁ = a − 2c₃ = a − 2c. Hence, W spans R³.

3. V = R³. Letting

    [a, b, c] = c₁[1, 0, −1] + c₂[2, 0, 4] + c₃[−5, 0, 2] + c₄[0, 0, 1]

yields

    a = c₁ + 2c₂ − 5c₃
    b = 0
    c = −c₁ + 4c₂ + 2c₃ + c₄.

These vectors do not span R³ because they cannot give any vector with b ≠ 0.

4. V = P₂. Let

    at² + bt + c = c₁(1) + c₂(1 + t) + c₃(3 − 2t + t²).

Setting the coefficients of t², t, and 1 equal to each other gives

    t²: c₃ = a
    t:  c₂ − 2c₃ = b
    1:  c₁ + c₂ + 3c₃ = c,

which has the solution c₁ = c − b − 5a, c₂ = b + 2a, c₃ = a. Any vector in V can be written as a linear combination of vectors in W. Hence, the vectors in W span V.


5. V = P₂. Let

    at² + bt + c = c₁(1 + t) + c₂(1 + t²) + c₃(t² − t).

Setting the coefficients of t², t, and 1 equal to each other gives

    t²: c₂ + c₃ = a
    t:  c₁ − c₃ = b
    1:  c₁ + c₂ = c.

If we add the first and second equations, we get

    c₁ + c₂ = a + b
    c₁ + c₂ = c.

This means we have a solution only if c = a + b. In other words, the given vectors do not span P₂; they span only the two-dimensional subspace of P₂ on which c = a + b.

6. V = M₂₂. Letting

    [a b; c d] = c₁[1 1; 0 0] + c₂[0 0; 1 1] + c₃[1 0; 1 0] + c₄[0 1; 0 1]

we have the equations

    c₁ + c₃ = a
    c₁ + c₄ = b
    c₂ + c₃ = c
    c₂ + c₄ = d.

If we add the first and last equations, and then the second and third equations, we obtain the equations

    c₁ + c₂ + c₃ + c₄ = a + d
    c₁ + c₂ + c₃ + c₄ = b + c.

Hence, we have a solution if and only if a + d = b + c. This means we can solve for c₁, c₂, c₃, c₄ for only a subset of the vectors in V. Hence, W does not span M₂₂.

Independence Day

7. V = R². Setting

    c₁[1, −1] + c₂[−1, 1] = [0, 0]

we get c₁ − c₂ = 0, −c₁ + c₂ = 0, which does not imply c₁ = c₂ = 0. For instance, if we choose c₁ = 1, then c₂ = 1 also. Hence, the vectors in W are linearly dependent.

8. V = R². Setting

    c₁[1, 1] + c₂[1, −1] = [0, 0]

we get c₁ + c₂ = 0, c₁ − c₂ = 0, which implies c₁ = c₂ = 0. Hence, the vectors in W are linearly independent.

9. V = R³. Setting

    c₁[1, 0, 0] + c₂[1, 1, 0] + c₃[1, 1, 1] = [0, 0, 0]

we get c₁ + c₂ + c₃ = 0, c₂ + c₃ = 0, c₃ = 0, which implies c₁ = c₂ = c₃ = 0. Hence, the vectors in W are linearly independent.

10. V = R³. Setting

    c₁[2, −1, 4] + c₂[4, −2, 8] = [0, 0, 0]

we get 2c₁ + 4c₂ = 0, −c₁ − 2c₂ = 0, 4c₁ + 8c₂ = 0, which (the equations are all equivalent) has a nonzero solution c₁ = −2, c₂ = 1. Hence, the vectors in W are linearly dependent.

11. V = R³. Setting

    c₁[1, 1, 8] + c₂[−3, 4, 2] + c₃[7, −1, 3] = [0, 0, 0]

we get

    c₁ − 3c₂ + 7c₃ = 0
    c₁ + 4c₂ − c₃ = 0
    8c₁ + 2c₂ + 3c₃ = 0,

which has only the solution c₁ = c₂ = c₃ = 0. Hence, the vectors in W are linearly independent.

12. V = P₁. Setting c₁ + c₂t = 0, we get c₁ = 0, c₂ = 0. Hence, the vectors in W are linearly independent.

13. V = P₁. Setting

    c₁(1 + t) + c₂(1 − t) = 0

we get c₁ + c₂ = 0, c₁ − c₂ = 0, which has the unique solution c₁ = c₂ = 0. Hence, the vectors in W are linearly independent.

14. V = P₂. Setting

    c₁t + c₂(1 − t) = 0,

we get c₁ − c₂ = 0, c₂ = 0, which implies c₁ = c₂ = 0. Hence, the vectors in W are linearly independent.

15. V = P₂. Setting

    c₁(1 + t) + c₂(1 − t) + c₃t² = 0

we get c₁ + c₂ = 0, c₁ − c₂ = 0, c₃ = 0, which implies c₁ = c₂ = c₃ = 0. Hence, the vectors in W are linearly independent.

16. V = P₂. Setting

    c₁(t² + 3) + c₂(t − 1) + c₃(−t² + 2t − 5) = 0

we get

    3c₁ − c₂ − 5c₃ = 0   (constant terms)
    c₁ − c₃ = 0          (coefficients of t²)
    c₂ + 2c₃ = 0         (coefficients of t),

which has a nonzero solution c₁ = −1, c₂ = 2, c₃ = −1. Hence, the vectors in W are linearly dependent.

17. V = D₂₂. Setting

    [a 0; 0 b] = c₁[1 0; 0 0] + c₂[0 0; 0 1]

we get c₁ = a, c₂ = b. Hence, these vectors are linearly independent and span D₂₂.

18. V = D₂₂. Setting

    [a 0; 0 b] = c₁[1 0; 0 1] + c₂[1 0; 0 −1]

we get c₁ + c₂ = a, c₁ − c₂ = b. We can solve these equations for c₁, c₂, and hence these vectors are linearly independent and span D₂₂.

Function Space Dependence

19. S = {eᵗ, e⁻ᵗ}. We set

    c₁eᵗ + c₂e⁻ᵗ = 0.

Because we assume this holds for all t, it holds in particular for t = 0, 1, so

    c₁ + c₂ = 0
    ec₁ + e⁻¹c₂ = 0,

which has only the zero solution c₁ = c₂ = 0. Hence, the functions are linearly independent.

20. S = {eᵗ, teᵗ, t²eᵗ}. We assume

    c₁eᵗ + c₂teᵗ + c₃t²eᵗ = 0

for all t. We let t = 0, 1, 2, so

    c₁ = 0
    ec₁ + ec₂ + ec₃ = 0
    e²c₁ + 2e²c₂ + 4e²c₃ = 0,

which has only the zero solution c₁ = c₂ = c₃ = 0. Hence, these vectors are linearly independent.

21. S = {sin t, sin 2t, sin 3t}. Let

    c₁ sin t + c₂ sin 2t + c₃ sin 3t = 0

for all t. In particular, if we choose three values of t, say π/6, π/4, π/2, we obtain three equations to solve for c₁, c₂, c₃, namely,

    (1/2)c₁ + (√3/2)c₂ + c₃ = 0
    (√2/2)c₁ + c₂ + (√2/2)c₃ = 0
    c₁ − c₃ = 0.

We used Maple to compute the determinant of this coefficient matrix and found it to be −3/2 + √6/2. Hence, the system has a unique solution c₁ = c₂ = c₃ = 0. Thus, sin t, sin 2t, and sin 3t are linearly independent.
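The 3 × 3 determinant used in Problem 21 can be checked numerically with a short Python sketch (an addition, not part of the text):

```python
import math

# Evaluate sin t, sin 2t, sin 3t at t = pi/6, pi/4, pi/2.
ts = [math.pi / 6, math.pi / 4, math.pi / 2]
M = [[math.sin(t), math.sin(2 * t), math.sin(3 * t)] for t in ts]

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(M))  # about -0.275, i.e., -3/2 + sqrt(6)/2 -- nonzero
```

Since the determinant is nonzero, c₁ = c₂ = c₃ = 0 is the only solution.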


22. S = {1, sin²t, cos²t}. Because

    1 − sin²t − cos²t = 0,

the vectors are linearly dependent.

23. S = {1, t − 1, (t − 1)²}. Setting

    c₁ + c₂(t − 1) + c₃(t − 1)² = 0

we get for the coefficients of 1, t, t² the system of equations

    c₁ − c₂ + c₃ = 0
    c₂ − 2c₃ = 0
    c₃ = 0,

which has only the zero solution c₁ = c₂ = c₃ = 0. Hence, these vectors are linearly independent.

24. S = {eᵗ, e⁻ᵗ, cosh t}. Because

    cosh t = (1/2)(eᵗ + e⁻ᵗ),

we have that 2 cosh t − eᵗ − e⁻ᵗ = 0 is a nontrivial linear combination that is identically zero for all t. Hence, the vectors are linearly dependent.

25. S = {sin²t, 4, cos 2t}. Recall the trigonometric identity

    sin²t = (1/2)(1 − cos 2t),

which can be rewritten as

    2 sin²t − (1/4)(4) + cos 2t = 0.

Hence, we have found a nontrivial linear combination of the three vectors that is identically zero. Hence, the three vectors are linearly dependent.

Independence Testing

26. We will show that the only values for which

    c₁[eᵗ; eᵗ] + c₂[2e²ᵗ; e²ᵗ] = [0; 0]

for all t are c₁ = c₂ = 0 and, hence, conclude that the vectors are linearly independent. If it is true for all t, then it must be true for t = 0 (which is the easiest place to test), which yields the two linear equations

    c₁ + 2c₂ = 0
    c₁ + c₂ = 0,

whose only solution is c₁ = c₂ = 0. Hence, the vectors are linearly independent. (This test works only for linear independence.)

Another approach is to say the vectors are linearly independent because clearly there is no constant k such that one vector is k times the other vector for all t.

27. We will show that

    c₁[sin t; cos t] + c₂[cos t; −sin t] = [0; 0]

for all t implies c₁ = c₂ = 0, and hence, the vectors are linearly independent. If it is true for all t, then it must be true for t = 0, which gives the two equations c₂ = 0, c₁ = 0. This proves the vectors are linearly independent.

Another approach is to say that the vectors are linearly independent because clearly there is no constant k such that one vector is k times the other vector for all t.

28. We write

    c₁[eᵗ; eᵗ; eᵗ] + c₂[e²ᵗ; 2e²ᵗ; e²ᵗ] + c₃[e⁻ᵗ; 3e⁻ᵗ; −e⁻ᵗ] = [0; 0; 0]

for all t and see if there are nonzero solutions for c₁, c₂, and c₃. The determinant of the coefficient matrix is

    |eᵗ e²ᵗ e⁻ᵗ; eᵗ 2e²ᵗ 3e⁻ᵗ; eᵗ e²ᵗ −e⁻ᵗ| = −2e²ᵗ ≠ 0 for all t.

We see by Cramer's Rule that there is a unique solution c₁ = c₂ = c₃ = 0. Therefore the vectors are linearly independent.

29. We write

    c₁[e⁻ᵗ; −4e⁻ᵗ; e⁻ᵗ] + c₂[1; 0; −1] + c₃[2e⁸ᵗ; e⁸ᵗ; 2e⁸ᵗ] = [0; 0; 0]

for all t and see if there are nonzero solutions for c₁, c₂, c₃. Because the above equation is assumed true for all t, it must be true for t = 0 (the easy case), or

    c₁ + c₂ + 2c₃ = 0
    −4c₁ + c₃ = 0
    c₁ − c₂ + 2c₃ = 0.

Writing this in matrix form gives

    [1 1 2; −4 0 1; 1 −1 2][c₁; c₂; c₃] = [0; 0; 0].

The determinant of the coefficient matrix is 18, so the only solution of this linear system is c₁ = c₂ = c₃ = 0, and thus the vectors are linearly independent.


Twins?

30. We have

    span{cos t + sin t, cos t − sin t} = {c₁(cos t + sin t) + c₂(cos t − sin t)}
      = {(c₁ + c₂)cos t + (c₁ − c₂)sin t}
      = {C₁ cos t + C₂ sin t}
      = span{sin t, cos t}.

A Questionable Basis

31. The set

    {[1, 1, 0]ᵀ, [0, 1, 1]ᵀ, [2, 1, −1]ᵀ}

is not a basis, since

    |1 0 2; 1 1 1; 0 1 −1| = 1(−1 − 1) − 0 + 2(1 − 0) = −2 + 2 = 0.

One of the many possible answers to the second part is:

    {[1, 1, 0]ᵀ, [0, 1, 1]ᵀ, [1, 0, 0]ᵀ}.
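A Python check (an addition) of the two determinants in Problem 31: the original set has determinant 0 (dependent), while the replacement set has a nonzero determinant (a basis).

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Columns are the proposed basis vectors, written as the matrix of the text.
A = [[1, 0, 2], [1, 1, 1], [0, 1, -1]]
print(det3(A))  # 0 -- the three vectors are linearly dependent

# The replacement set is a basis: nonzero determinant.
B = [[1, 0, 1], [1, 1, 0], [0, 1, 0]]
print(det3(B))  # 1
```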

Wronskian

32. We assume that the Wronskian function

    W[f, g](t) = f(t)g′(t) − f′(t)g(t) ≠ 0

for every t ∈ [0, 1]. To show f and g are linearly independent on [0, 1], we assume that c₁f(t) + c₂g(t) = 0 for all t in the interval [0, 1]. Differentiating, we have c₁f′(t) + c₂g′(t) = 0 on [0, 1]. Hence, we have the two equations

    [f(t) g(t); f′(t) g′(t)][c₁; c₂] = [0; 0].

The determinant of the coefficient matrix is the Wronskian of f and g, which is assumed to be nonzero on [0, 1]. Hence c₁ = c₂ = 0, and the vectors are linearly independent.

Zero Wronskian Does Not Imply Linear Dependence

33. (a) f(t) = t², and

    g(t) = t² for t ≥ 0, g(t) = −t² for t < 0,

so f′(t) = 2t, and g′(t) = 2t for t ≥ 0, g′(t) = −2t for t < 0.

    For t ≥ 0:  W = |t² t²; 2t 2t| = 0.
    For t < 0:  W = |t² −t²; 2t −2t| = −2t³ + 2t³ = 0.

∴ W = 0 on (−∞, ∞).

(b) f and g are linearly independent because f(t) ≠ kg(t) on (−∞, ∞) for every k ∈ R.
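The pair in Problem 33 can be checked numerically (a Python addition): the Wronskian vanishes at every sample point, yet no single constant k works on both sides of t = 0.

```python
# f(t) = t^2 and g(t) = t*|t|: W[f, g] = 0 everywhere,
# yet f is not a constant multiple of g on all of (-inf, inf).
def f(t):  return t * t
def df(t): return 2 * t
def g(t):  return t * abs(t)
def dg(t): return 2 * abs(t)   # derivative of t*|t|

for t in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f(t) * dg(t) - df(t) * g(t))  # Wronskian: 0 at every sample t

# f = 1*g for t > 0 but f = -1*g for t < 0, so they are independent.
print(f(1.0) / g(1.0), f(-1.0) / g(-1.0))  # 1.0 -1.0
```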

SECTION 3.6 Basis and Dimension 281

Linearly Independent Exponentials

34. We compute the Wronskian of f and g:

    W[f, g](t) = |f(t) g(t); f′(t) g′(t)| = |eᵃᵗ eᵇᵗ; aeᵃᵗ beᵇᵗ| = be⁽ᵃ⁺ᵇ⁾ᵗ − ae⁽ᵃ⁺ᵇ⁾ᵗ = (b − a)e⁽ᵃ⁺ᵇ⁾ᵗ ≠ 0

for any t provided that b ≠ a. Hence, f and g are linearly independent if b ≠ a and linearly dependent if b = a.
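The closed form W = (b − a)e^{(a+b)t} from Problem 34 can be checked with a Python sketch (an addition; a = 1, b = 2 is an arbitrary choice):

```python
import math

a, b = 1.0, 2.0  # arbitrary distinct exponents

def wronskian(t):
    f, g = math.exp(a * t), math.exp(b * t)
    df, dg = a * f, b * g
    return f * dg - df * g

for t in (0.0, 0.7, -1.3):
    # Compare against the closed form (b - a) e^{(a+b)t}.
    print(wronskian(t) - (b - a) * math.exp((a + b) * t))  # ~0.0
```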

Looking Ahead

35. The Wronskian is

    W = |eᵗ teᵗ; eᵗ (1 + t)eᵗ| = e²ᵗ(1 + t) − te²ᵗ = e²ᵗ ≠ 0.

Hence, the vectors are linearly independent.

Revisiting Linear Independence

36. Expanding the 3 × 3 Wronskian determinant of the three given exponential functions (by cofactors along the first row) gives a nonzero constant multiple of an exponential, so W ≠ 0 for all t. Hence, the vectors are linearly independent.

Independence Checking

37. W = | 5  cos t  sin t ; 0  −sin t  cos t ; 0  −cos t  −sin t | = 5(sin²t + cos²t) = 5 ≠ 0.
∴ The set {5, cos t, sin t} is linearly independent on ℝ.

38. W = | e^t  e^{−t}  1 ; e^t  −e^{−t}  0 ; e^t  e^{−t}  0 | = 1·(e^t e^{−t} + e^{−t} e^t) = 2 ≠ 0.
The set {e^t, e^{−t}, 1} is linearly independent on ℝ.

39. W = | 2 + t  2 − t  t² ; 1  −1  2t ; 0  0  2 | = 2[(2 + t)(−1) − (2 − t)(1)] = 2(−2 − t − 2 + t) = −8 ≠ 0.
∴ {2 + t, 2 − t, t²} is linearly independent on ℝ.

40. W = | 3t² − 4  2t  t² − 1 ; 6t  2  2t ; 6  0  2 | = 6[4t² − 2(t² − 1)] + 2[2(3t² − 4) − 12t²] = 6(2t² + 2) + 2(−6t² − 8) = −4 ≠ 0.
{3t² − 4, 2t, t² − 1} is linearly independent on ℝ.

282 CHAPTER 3 Linear Algebra

41. W = | cosh t  sinh t ; sinh t  cosh t | = cosh²t − sinh²t
  = ((e^t + e^{−t})/2)² − ((e^t − e^{−t})/2)²
  = (e^{2t} + 2 + e^{−2t})/4 − (e^{2t} − 2 + e^{−2t})/4 = 1 ≠ 0.
{cosh t, sinh t} is linearly independent on ℝ.

42. W = | e^t cos t  e^t sin t ; e^t(cos t − sin t)  e^t(sin t + cos t) |
  = e^{2t} cos t (sin t + cos t) − e^{2t} sin t (cos t − sin t)
  = e^{2t}(cos²t + sin²t) = e^{2t} ≠ 0 for all t.
{e^t cos t, e^t sin t} is linearly independent on ℝ.

Getting on Base in ℝ²

43. Not a basis, because {[1, 1]} does not span ℝ².

44. A basis, because {[1, 2], [2, 1]} are linearly independent and span ℝ².

45. {(1, −1), (−1, 1)} is not a basis because [1, −1] = −[−1, 1]; hence they are linearly dependent.

46. {[1, 0], [1, 1]} is a basis because the vectors are linearly independent and span ℝ².

47. {[1, 0], [0, 1], [1, 1]} is not a basis because the vectors are linearly dependent.

48. {[0, 0], [1, 1], [2, 2], [−1, −1]} is not a basis because the vectors are linearly dependent.

The Base for the Space

49. V = ℝ³: S is not a basis because two vectors are not enough to span ℝ³.

50. V = ℝ³: Yes, S is a basis because the vectors are linearly independent and span ℝ³.

51. V = ℝ³: S is not a basis because four vectors in ℝ³ are linearly dependent.

52. V = P₂: Clearly the two vectors t² + 3t + 1 and t² − 2t + 4 are linearly independent because they are not constant multiples of one another. They do not span the space because dim P₂ = 3.

53. V = P₃: dim P₃ = 4; i.e., {t³, t², t, 1} is a basis for P₃, so S cannot be a basis.

SECTION 3.6 Basis and Dimension 283

54. V = P₄: We assume that a linear combination of the five given vectors is zero,

c₁v₁ + c₂v₂ + c₃v₃ + c₄v₄ + c₅v₅ = 0,

and compare coefficients of t⁴, t³, t², t, and 1. We find a homogeneous system of equations that has only the zero solution c₁ = c₂ = c₃ = c₄ = c₅ = 0. Hence, the vectors are linearly independent. To show the vectors span P₄, we set the above linear combination equal to an arbitrary vector at⁴ + bt³ + ct² + dt + e and compare coefficients to arrive at a system of equations, which can be solved for c₁, c₂, c₃, c₄, and c₅ in terms of a, b, c, d, e. Hence, the vectors span P₄, so they are a basis for P₄.

55. V = M₂₂: We assume that

c₁[1 0; 0 0] + c₂[0 1; 0 0] + c₃[0 0; 1 0] + c₄[1 1; 1 1] = [0 0; 0 0],

which yields the equations

c₁ + c₄ = 0,  c₂ + c₄ = 0,  c₃ + c₄ = 0,  c₄ = 0.

This has only the zero solution c₁ = c₂ = c₃ = c₄ = 0. Hence, the vectors are linearly independent. If we replace the zero vector on the right of the preceding equation by an arbitrary vector [a b; c d], we get the four equations

c₁ + c₄ = a,  c₂ + c₄ = b,  c₃ + c₄ = c,  c₄ = d.

This yields the solution

c₄ = d,  c₃ = c − d,  c₂ = b − d,  c₁ = a − d.

Hence, the four given vectors span M₂₂. Because they are linearly independent and span M₂₂, they are a basis.
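A small Python sketch (an added cross-check under the assumption that the four matrices are E₁₁, E₁₂, E₂₁, and the all-ones matrix J, as in Problem 55) verifies the coordinate formulas c₄ = d, c₃ = c − d, c₂ = b − d, c₁ = a − d by recombining them:

```python
def coords(a, b, c, d):
    # coordinates of [a b; c d] in the basis {E11, E12, E21, J}
    c4 = d
    return (a - c4, b - c4, c - c4, c4)

def recombine(c1, c2, c3, c4):
    # entries of c1*E11 + c2*E12 + c3*E21 + c4*J
    return (c1 + c4, c2 + c4, c3 + c4, c4)

target = (7.0, -2.0, 3.5, 1.0)  # entries a, b, c, d
assert recombine(*coords(*target)) == target
```

Because every target matrix is recovered, the four matrices span M₂₂; uniqueness of the coordinates gives independence.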

56. V = M₂₃: If we set a linear combination of these vectors equal to an arbitrary vector,

c₁[1 0 1; 0 0 0] + c₂[1 1 0; 0 0 0] + c₃[0 0 0; 1 0 1] + c₄[0 0 0; 1 1 0] + c₅[0 0 0; 1 1 1] = [a b c; d e f],

we arrive at the algebraic equations

c₁ + c₂ = a
c₂ = b
c₁ = c
c₃ + c₄ + c₅ = d
c₄ + c₅ = e
c₃ + c₅ = f.

Looking at the first three equations gives c₁ = a − b and c₁ = c. If we pick an arbitrary matrix such that a − b ≠ c, we have no solution. Hence, the vectors do not span M₂₃ and do not form a basis. (They are linearly independent, however.)

Sizing Them Up

57. W = {[x₁, x₂, x₃] : x₁ + x₂ + x₃ = 0}

Letting x₂ = α, x₃ = β, we can write x₁ = −α − β. Any vector in W can be written as

[x₁, x₂, x₃] = [−α − β, α, β] = α[−1, 1, 0] + β[−1, 0, 1],

where α and β are arbitrary real numbers. Hence, the dimension of W is 2; a basis is {[−1, 1, 0], [−1, 0, 1]}.

58. W = {[x₁, x₂, x₃, x₄] : x₁ + x₃ = 0, x₂ = x₄}

Letting x₃ = α, x₄ = β, we have x₁ = −α, x₂ = β, x₃ = α, x₄ = β. Any vector in W can be written as

[x₁, x₂, x₃, x₄] = α[−1, 0, 1, 0] + β[0, 1, 0, 1],

where α and β are arbitrary real numbers. Hence, the two vectors [−1, 0, 1, 0] and [0, 1, 0, 1] form a basis of W, which is only two-dimensional.


Polynomial Dimensions

59. {t, t − 1}. We write

at + b = c₁t + c₂(t − 1),

yielding the equations

t:  c₁ + c₂ = a
1:  −c₂ = b.

We can represent any vector at + b as some linear combination of t and t − 1. Hence, {t, t − 1} spans a two-dimensional vector space.

60. {t, t − 1, t² + 1}. We write

at² + bt + c = c₁t + c₂(t − 1) + c₃(t² + 1),

yielding the equations

t²:  c₃ = a
t:  c₁ + c₂ = b
1:  −c₂ + c₃ = c.

Because we can solve this system for c₁, c₂, c₃ in terms of a, b, c, getting

c₁ = −a + b + c,  c₂ = a − c,  c₃ = a,

the set spans the entire three-dimensional vector space P₂.

61. {t², t² − t − 1, t + 1}. We can see that

t² = (t² − t − 1) + (t + 1),

so the dimension of the subspace is 2, and it is spanned by any two of the vectors in the set.
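The dependence relation in Problem 61 can be spot-checked pointwise; this short sketch (an added illustration, assuming the set {t², t² − t − 1, t + 1}) confirms the identity t² = (t² − t − 1) + (t + 1) at several sample points:

```python
def p(t):
    return t * t            # t^2

def q(t):
    return t * t - t - 1    # t^2 - t - 1

def r(t):
    return t + 1            # t + 1

# the first vector is the sum of the other two, so the set is dependent
for t in (-2.0, 0.0, 1.5, 3.0):
    assert abs(p(t) - (q(t) + r(t))) < 1e-12
```

Agreement at more than three points suffices here, since two quadratics that agree at three points are identical.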

Solution Basis

62. Letting z = α, we solve for x and y, obtaining x = −4α, y = 5α. An arbitrary solution of the system can be expressed as

[x, y, z] = [−4α, 5α, α] = α[−4, 5, 1].

Hence, the vector [−4, 5, 1] is a basis for the solution space.

Solution Spaces for Linear Algebraic Systems

63. The matrix of coefficients for the system in Problem 61, Section 3.5, has RREF

[1 0 3 0]
[0 1 1 0]
[0 0 0 1]
[0 0 0 0],

so

x₁ + 3x₃ = 0,  x₂ + x₃ = 0,  x₄ = 0.

Let r = x₃; then

W = { r[−3, −1, 1, 0]ᵀ : r ∈ ℝ },

so a basis is {[−3, −1, 1, 0]ᵀ}.

Dim W = 1.

64. The matrix of coefficients for the system, by Problem 62, Section 3.5, has RREF

[1 0  2  1 3]
[0 1 −2 −3 0]
[0 0  0  0 0],

so

x₁ + 2x₃ + x₄ + 3x₅ = 0
x₂ − 2x₃ − 3x₄ = 0

or

x₁ = −2x₃ − x₄ − 3x₅
x₂ = 2x₃ + 3x₄.

Therefore a basis for W is

{ [−2, 2, 1, 0, 0]ᵀ, [−1, 3, 0, 1, 0]ᵀ, [−3, 0, 0, 0, 1]ᵀ }.

Dim W = 3.
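As an added sanity check (not part of the manual), the three basis vectors for Problem 64 can be substituted back into the two nontrivial RREF equations:

```python
basis = [
    [-2, 2, 1, 0, 0],
    [-1, 3, 0, 1, 0],
    [-3, 0, 0, 0, 1],
]

def residuals(x):
    # the two nontrivial equations from the RREF
    return (x[0] + 2 * x[2] + x[3] + 3 * x[4],
            x[1] - 2 * x[2] - 3 * x[3])

for v in basis:
    assert residuals(v) == (0, 0)
```

Each vector also carries a 1 in a distinct free-variable slot (x₃, x₄, x₅), which makes their linear independence immediate.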

DE Solution Spaces

65. dⁿy/dtⁿ = 0

a) By successive integration we obtain

y = c_{n−1}t^{n−1} + c_{n−2}t^{n−2} + ⋯ + c₁t + c₀,  for c_{n−1}, …, c₀ ∈ ℝ,

which is a general description of all elements in P_{n−1} = the solution space ⊆ 𝒞ⁿ(ℝ).

b) A basis for P_{n−1} is {1, t, …, t^{n−1}};  dim P_{n−1} = n.

66. y′ − 2y = 0. This is a first-order linear DE with solution (by either method of Section 2.2) y = Ce^{2t}.

a) The solution space is S = { Ce^{2t} : C ∈ ℝ } ⊆ 𝒞ⁿ(ℝ).

b) A basis is B = {e^{2t}};  dim S = 1.

67. y′ − 2ty = 0. By the methods of Section 2.2, y = Ce^{t²}.

a) S = { Ce^{t²} : C ∈ ℝ } ⊆ 𝒞ⁿ(ℝ).

b) B = {e^{t²}};  dim S = 1.


68. y′ + (tan t)y = 0

a) By the methods of Section 2.2, y = C cos t on (−π/2, π/2), so

S = { C cos t : C ∈ ℝ, t ∈ (−π/2, π/2) } ⊆ 𝒞(−π/2, π/2).

b) A basis is B = {cos t} on (−π/2, π/2);  dim S = 1.

69. y′ + y² = 0. Since y² is not a linear function, y′ + y² = 0 is not a linear differential equation. By separation of variables, y′ = −y² gives

∫ dy/y² = −∫ dt
−1/y = −t + c₁
y = 1/(t − c₁).

But these solutions do not form a vector space. Let k ∈ ℝ with k ≠ 0, 1; then k/(t − c) is not a solution of the ODE. Hence

{ 1/(t − c) : c ∈ ℝ }

is not a vector space.
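The failure of closure under scalar multiplication in Problem 69 is easy to exhibit numerically; this added sketch checks that y = 1/(t − c) satisfies y′ = −y² while 2y does not:

```python
def y(t, c):
    return 1.0 / (t - c)

def yprime(t, c):
    return -1.0 / (t - c) ** 2

t, c = 2.0, 0.5

# y = 1/(t - c) satisfies y' = -y^2 ...
assert abs(yprime(t, c) + y(t, c) ** 2) < 1e-12

# ... but the scalar multiple z = 2y leaves a nonzero residual in z' + z^2:
z = 2.0 * y(t, c)
zprime = 2.0 * yprime(t, c)
assert abs(zprime + z ** 2) > 1e-3
```

For a nonlinear equation the solution set is generally not a subspace, which is exactly what the residual shows.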

70. y′ + (cos t)y = 0. By the method of Section 2.2, y = Ce^{−sin t}.

a) S = { Ce^{−sin t} : C ∈ ℝ } ⊆ 𝒞(ℝ).

b) B = {e^{−sin t}} is a basis for S;  dim S = 1.


Basis for Subspaces of ℝⁿ

71. W = {(a, 0, b, a − b + c) : a, b, c ∈ ℝ}
= { a[1, 0, 0, 1]ᵀ + b[0, 0, 1, −1]ᵀ + c[0, 0, 0, 1]ᵀ : a, b, c ∈ ℝ },

so { [1, 0, 0, 1]ᵀ, [0, 0, 1, −1]ᵀ, [0, 0, 0, 1]ᵀ } is a basis for W.

Dim W = 3.

72. W = {(a, a − b, 2a + 3b) : a, b ∈ ℝ}
= { a[1, 1, 2]ᵀ + b[0, −1, 3]ᵀ : a, b ∈ ℝ },

so { [1, 1, 2]ᵀ, [0, −1, 3]ᵀ } is a basis for W.

Dim W = 2.

73. W = {(x + y + z, x + y, 4z, 0) : x, y, z ∈ ℝ}
= { x[1, 1, 0, 0]ᵀ + y[1, 1, 0, 0]ᵀ + z[1, 0, 4, 0]ᵀ : x, y, z ∈ ℝ }.

(Note that x + y can be treated as a single element of ℝ.)

So { [1, 1, 0, 0]ᵀ, [1, 0, 4, 0]ᵀ } is a basis for W.

Dim W = 2.


Two-by-Two Basis

74. Setting

c₁[1 0; 0 0] + c₂[0 1; 1 0] + c₃[0 0; 1 1] = [0 0; 0 0]

gives c₁ = 0, c₂ = 0, and c₃ = 0. Hence, the given vectors are linearly independent. If we add the vector

[0 0; 1 0],

then the new vectors are still linearly independent (similar proof), and an arbitrary 2 × 2 matrix can be written as

[a b; c d] = c₁[1 0; 0 0] + c₂[0 1; 1 0] + c₃[0 0; 1 1] + c₄[0 0; 1 0]

because this reduces to

c₁ = a,  c₂ = b,  c₃ = d,  c₂ + c₃ + c₄ = c.

This yields

c₁ = a,  c₂ = b,  c₃ = d,  c₄ = c − b − d

in terms of a, b, c, and d. Hence, the four matrices form a basis for M₂₂, which is four-dimensional.

Basis for Zero Trace Matrices

75. Letting

c₁[1 0; 0 −1] + c₂[0 1; 0 0] + c₃[0 0; 1 0] = [a b; c d]

we find a = c₁, b = c₂, c = c₃, d = −c₁. Taking a = b = c = d = 0 implies c₁ = c₂ = c₃ = 0, which shows the vectors (matrices) are linearly independent. It also shows they span the set of 2 × 2 matrices with trace zero, because if a + d = 0 we can solve: c₁ = a = −d, c₂ = b, c₃ = c. In other words, we can write any zero-trace 2 × 2 matrix as a linear combination of the three given vectors (matrices):

[a b; c −a] = a[1 0; 0 −1] + b[0 1; 0 0] + c[0 0; 1 0].

Hence, the vectors (matrices) form a basis for the 2 × 2 zero trace matrices.


Hyperplane Basis

76. Solving the equation

x + 3y − 2z + 6w = 0

for x, we get

x = −3y + 2z − 6w.

Letting y = α, z = β, and w = γ, we can write x = −3α + 2β − 6γ. Hence, an arbitrary vector (x, y, z, w) in the hyperplane can be written

[x, y, z, w] = α[−3, 1, 0, 0] + β[2, 0, 1, 0] + γ[−6, 0, 0, 1].

The set of four-dimensional vectors

{ [−3, 1, 0, 0], [2, 0, 1, 0], [−6, 0, 0, 1] }

is a basis for the hyperplane.
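A short added check (assuming, as reconstructed above, the hyperplane x + 3y − 2z + 6w = 0) verifies that each basis vector, and hence every linear combination of them, lies on the hyperplane:

```python
basis = [
    [-3, 1, 0, 0],
    [2, 0, 1, 0],
    [-6, 0, 0, 1],
]

def on_hyperplane(v):
    x, y, z, w = v
    return x + 3 * y - 2 * z + 6 * w == 0

assert all(on_hyperplane(v) for v in basis)

# an arbitrary combination alpha*v1 + beta*v2 + gamma*v3 also satisfies the equation
combo = [2 * a - 1 * b + 3 * c for a, b, c in zip(*basis)]
assert on_hyperplane(combo)
```

Closure under linear combinations is automatic because the defining equation is homogeneous.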

77. Symmetric Matrices

W = { [a b; b c] : a, b, c ∈ ℝ }

is the subspace of all symmetric 2 × 2 matrices. A basis for W is

{ [1 0; 0 0], [0 1; 1 0], [0 0; 0 1] }.

Dim W = 3.

Making New Basis From Old

78. B₁ = {i, j, k}

(Many correct answers.) A typical answer is B₂ = {i + j + k, i + j, i + k}.

To show linear independence, set

c₁(i + j + k) + c₂(i + j) + c₃(i + k) = 0,

which gives

c₁ + c₂ + c₃ = 0
c₁ + c₂ = 0
c₁ + c₃ = 0.

The coefficient determinant is

| 1 1 1 ; 1 1 0 ; 1 0 1 | = −1 ≠ 0,

so c₁ = c₂ = c₃ = 0. ∴ B₂ is a basis, since dim ℝ³ = 3.

SECTION 3.6 Basis and Dimension 291

79. B₁ = { [1 0; 0 0], [0 0; 0 1] } is a basis for D, the space of 2 × 2 diagonal matrices.

A typical answer: B₂ = { [1 0; 0 1], [1 0; 0 −1] }.

Both elements are diagonal and B₂ is linearly independent; dim D = 2.

80. B₁ = {sin t, cos t} is a basis for the solution space S, so dim S = 2.

A typical answer: B₂ = {sin t + cos t, sin t − cos t}.

Both elements are in S and B₂ is linearly independent.

Basis for P₂

81. We first show the vectors span P₂ by selecting an arbitrary vector from P₂ and showing it can be written as a linear combination of the three given vectors. We set

at² + bt + c = c₁(t² + t + 1) + c₂(t + 1) + c₃

and try to solve for c₁, c₂, c₃ in terms of a, b, c. Setting the coefficients of t², t, and 1 equal to each other yields

t²:  c₁ = a
t:  c₁ + c₂ = b
1:  c₁ + c₂ + c₃ = c,

giving the solution

c₁ = a,  c₂ = −a + b,  c₃ = −b + c.

Hence, the set spans P₂. We also know that the vectors {t² + t + 1, t + 1, 1} are independent, because setting

c₁(t² + t + 1) + c₂(t + 1) + c₃(1) = 0

we get

c₁ = 0
c₁ + c₂ = 0
c₁ + c₂ + c₃ = 0,

which has only the solution c₁ = c₂ = c₃ = 0. Hence, the vectors are a basis for P₂; for example,

3t² + 2t + 1 = 3(t² + t + 1) − 1(t + 1) − 1(1).


82. True/False Questions

a) True.

b) False; dim W = 2.

c) False; the given set is made up of vectors in ℝ², not ℝ⁴. A basis for W must be made up of vectors in ℝ⁴.

83. Essay Question

Points to be covered in the essay:

1. Elements of W are linear combinations of the two given vectors v₁ and v₂, which span W, a subspace of the vector space ℝ⁴.

2. The set {v₁, v₂} is linearly independent and, in consequence, it is a basis for W.

Convergent Sequence Space

84. V is a vector space, since the addition and scalar multiplication operations follow the rules for ℝ, and the operations

{aₙ} + {bₙ} = {aₙ + bₙ}  and  c{aₙ} = {caₙ}

are the precise requirements for closure under vector addition and scalar multiplication. The zero element is {0}, where aₙ = 0 for all n; the additive inverse of {aₙ} is {−aₙ}.

Let W = { {2aₙ} : {aₙ} ∈ V }. Clearly {0} ∈ W, and

{2aₙ} + {2bₙ} = {2aₙ + 2bₙ} = 2{aₙ + bₙ}.

Also k{2aₙ} = {2kaₙ} for every k ∈ ℝ. ∴ W is a subspace.

dim W = ∞. A basis is { {1, 0, 0, 0, …}, {0, 1, 0, 0, …}, and so forth }.

Cosets in ℝ³

85. W = { [x₁, x₂, x₃] : x₁ + x₂ + x₃ = 0 },  v = [0, 0, 1]

We want to write W in parametric form, so we solve the equation

x₁ + x₂ + x₃ = 0

by letting x₂ = β, x₃ = γ and solving for x₁ = −β − γ. These solutions can be written as

{ β[−1, 1, 0] + γ[−1, 0, 1] : β, γ ∈ ℝ },

so the coset [0, 0, 1] + W is the collection of vectors

{ [0, 0, 1] + β[−1, 1, 0] + γ[−1, 0, 1] : β, γ ∈ ℝ }.

Geometrically, this describes a plane passing through (0, 0, 1) parallel to the plane x₁ + x₂ + x₃ = 0.

86. W = { [x₁, x₂, x₃] : x₃ = 0 },  v = [1, 1, 1]

Here the coset through the point (1, 1, 1) is given by the points

{ [1, 1, 1] + β[1, 0, 0] + γ[0, 1, 0] },

where β and γ are arbitrary real numbers. This describes the plane through (1, 1, 1) parallel to the x₁x₂-plane (i.e., the subspace W).

More Cosets

87. The coset through the point (1, −2, 1) is given by the points

{ (1, −2, 1) + t(1, 3, 2) },

where t is an arbitrary number. This describes a line through (1, −2, 1) parallel to the line t(1, 3, 2).

Line in Function Space

88. The general solution of y′ + 2y = e^{−2t} is

y(t) = ce^{−2t} + te^{−2t}.

We could say the solution is a “line” in the vector space of solutions, passing through te^{−2t} in the direction of e^{−2t}.

Mutual Orthogonality

Proof by Contradiction

89. Let {v₁, …, vₙ} be a set of mutually orthogonal nonzero vectors, and suppose they are not linearly independent. Then for some j, vⱼ can be written as a linear combination of the others:

vⱼ = c₁v₁ + ⋯ + cₙvₙ  (excluding the term cⱼvⱼ).

Taking the dot product of both sides with vⱼ gives

vⱼ · vⱼ = c₁(v₁ · vⱼ) + ⋯ + cₙ(vₙ · vⱼ) = 0

by orthogonality. But vⱼ · vⱼ cannot be zero, because vⱼ is a nonzero vector. This contradiction shows the set is linearly independent.

Suggested Journal Entry I

90. Student Project

Suggested Journal Entry II

91. Student Project

Suggested Journal Entry III

92. Student Project

CHAPTER 4
Second-Order Linear Differential Equations

4.1 The Harmonic Oscillator

The Undamped Oscillator

1. x″ + x = 0,  x(0) = 1,  x′(0) = 0

The general solution of the harmonic oscillator equation x″ + x = 0 is given by

x(t) = c₁ cos t + c₂ sin t
x′(t) = −c₁ sin t + c₂ cos t.

Substituting the initial conditions x(0) = 1, x′(0) = 0 gives x(0) = c₁ = 1 and x′(0) = c₂ = 0, so c₁ = 1, c₂ = 0. Hence, the IVP has the solution x(t) = cos t.
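As an added numerical sketch (not part of the manual), a central-difference check confirms that x(t) = cos t satisfies x″ + x = 0 together with the initial conditions of Problem 1:

```python
import math

def x(t):
    return math.cos(t)

h = 1e-4
for t in (0.0, 1.0, 2.5):
    # central-difference approximation of x''(t)
    xdd = (x(t + h) - 2 * x(t) + x(t - h)) / (h * h)
    assert abs(xdd + x(t)) < 1e-6      # x'' + x = 0

assert x(0.0) == 1.0                                   # x(0) = 1
assert abs((x(h) - x(-h)) / (2 * h)) < 1e-6            # x'(0) = 0
```

The same finite-difference test works verbatim for the other IVP solutions in this section.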

2. x″ + x = 0,  x(0) = 1,  x′(0) = 1

The general solution of x″ + x = 0 is given by

x(t) = c₁ cos t + c₂ sin t
x′(t) = −c₁ sin t + c₂ cos t.

Substituting the initial conditions x(0) = 1, x′(0) = 1 gives x(0) = c₁ = 1 and x′(0) = c₂ = 1, or c₁ = c₂ = 1. Hence, the IVP has the solution

x(t) = cos t + sin t.


SECTION 4.1 The Harmonic Oscillator 295

In polar form, this would be

x(t) = √2 cos(t − π/4).

3. x″ + 9x = 0,  x(0) = 1,  x′(0) = 1

The general solution of x″ + 9x = 0 is given by

x(t) = c₁ cos 3t + c₂ sin 3t
x′(t) = −3c₁ sin 3t + 3c₂ cos 3t.

Substituting the initial conditions gives x(0) = c₁ = 1 and x′(0) = 3c₂ = 1, so c₁ = 1, c₂ = 1/3. Hence, the IVP has the solution

x(t) = cos 3t + (1/3) sin 3t.

In polar form, this would be

x(t) = (√10/3) cos(3t − δ),

where δ = tan⁻¹(1/3); this angle is in the first quadrant.

4. x″ + 4x = 0,  x(0) = 1,  x′(0) = −2

The general solution of x″ + 4x = 0 is given by

x(t) = c₁ cos 2t + c₂ sin 2t
x′(t) = −2c₁ sin 2t + 2c₂ cos 2t.

Substituting the initial conditions gives x(0) = c₁ = 1 and x′(0) = 2c₂ = −2, so c₁ = 1, c₂ = −1. Hence, the IVP has the solution

x(t) = cos 2t − sin 2t.

In polar form, this would be

x(t) = √2 cos(2t + π/4).

296 CHAPTER 4 Higher-Order Linear Differential Equations

5. x″ + 16x = 0,  x(0) = −1,  x′(0) = 0

The general solution of x″ + 16x = 0 is given by

x(t) = c₁ cos 4t + c₂ sin 4t
x′(t) = −4c₁ sin 4t + 4c₂ cos 4t.

Substituting the initial conditions gives x(0) = c₁ = −1 and x′(0) = 4c₂ = 0, so c₁ = −1, c₂ = 0. Hence, the IVP has the solution

x(t) = −cos 4t.

6. x″ + 16x = 0,  x(0) = 0,  x′(0) = 4

The general solution of x″ + 16x = 0 is given by

x(t) = c₁ cos 4t + c₂ sin 4t
x′(t) = −4c₁ sin 4t + 4c₂ cos 4t.

Substituting the initial conditions x(0) = 0, x′(0) = 4, we get x(0) = c₁ = 0 and x′(0) = 4c₂ = 4, so c₁ = 0, c₂ = 1. The IVP has the solution

x(t) = sin 4t.

7. x″ + 16π²x = 0,  x(0) = 0,  x′(0) = π

ω₀ = √(16π²/1) = 4π

x = c₁ cos 4πt + c₂ sin 4πt
x′ = −4πc₁ sin 4πt + 4πc₂ cos 4πt

x(0) = 0 = c₁;  x′(0) = π = 4πc₂, so c₂ = 1/4.

x = (1/4) sin 4πt


8. 4x″ + π²x = 0,  x(0) = 1,  x′(0) = π

ω₀ = √(π²/4) = π/2

x = c₁ cos(πt/2) + c₂ sin(πt/2)
x′ = −(π/2)c₁ sin(πt/2) + (π/2)c₂ cos(πt/2)

x(0) = 1 = c₁;  x′(0) = π = (π/2)c₂, so c₂ = 2.

x = cos(πt/2) + 2 sin(πt/2)

Graphing by Calculator

9. y = cos t + sin t

The equation tells us T = 2π, and because ω₀ = 2π/T, ω₀ = 1. We then measure the delay δ/ω₀ ≈ 0.8, which we can compute as the phase angle δ ≈ (0.8)(1) = 0.8. The amplitude A can be measured directly, giving A ≈ 1.4. Hence,

cos t + sin t ≈ 1.4 cos(t − 0.8).

Compare with the algebraic form in Problem 15.

[Figure: graph of y over one period, showing T = 2π, A ≈ 1.4, and δ/ω₀ ≈ 0.8.]

10. y = 2 cos t + sin t

The equation tells us T = 2π, and because ω₀ = 2π/T, ω₀ = 1. We then measure the delay δ/ω₀ ≈ 0.5, which we can compute as the phase angle δ ≈ (0.5)(1) = 0.5. The amplitude A can be measured directly, giving A ≈ 2.2. Hence,

2 cos t + sin t ≈ 2.2 cos(t − 0.5).

[Figure: graph of y on −4 ≤ t ≤ 8, showing T = 2π, A ≈ 2.2, and δ/ω₀ ≈ 0.5.]


11. y = 5 cos 3t + sin 3t

The equation tells us the period is T = 2π/3, and because ω₀ = 2π/T, ω₀ = 3. We then measure the delay δ/ω₀ ≈ 0.05, which we can compute as the phase angle δ ≈ 3(0.05) = 0.15. The amplitude A can be measured directly, giving A ≈ 5.1. Hence,

5 cos 3t + sin 3t ≈ 5.1 cos(3t − 0.15).

[Figure: graph of y showing T = 2π/3, A ≈ 5.1, and δ/ω₀ ≈ 0.05.]

12. y = cos 3t + 5 sin 3t

The equation tells us the period is T = 2π/3, and because ω₀ = 2π/T, ω₀ = 3. We then measure the delay δ/ω₀ ≈ 0.5, which we can compute as the phase angle δ ≈ (0.5)(3) = 1.5. The amplitude A can be measured directly, giving A ≈ 5.1. Hence,

cos 3t + 5 sin 3t ≈ 5.1 cos(3t − 1.5).

[Figure: graph of y showing T = 2π/3, A ≈ 5.1, and δ/ω₀ ≈ 0.5.]

13. y = −cos 5t + 2 sin 5t

The equation tells us the period is T = 2π/5, and because ω₀ = 2π/T, ω₀ = 5. We then measure the delay δ/ω₀ ≈ π/8, or about 0.4, which we can compute as the phase angle δ ≈ 5(0.4) = 2. The amplitude A can be measured directly, giving A ≈ 2.2. Hence,

−cos 5t + 2 sin 5t ≈ 2.2 cos(5t − 2).

[Figure: graph of y showing T = 2π/5, A ≈ 2.2, and δ/ω₀ ≈ 0.4.]


Alternate Forms for Sinusoidal Oscillations

14. We have

A cos(ω₀t − δ) = A cos δ cos ω₀t + A sin δ sin ω₀t = c₁ cos ω₀t + c₂ sin ω₀t,

where c₁ = A cos δ and c₂ = A sin δ.
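The conversion in Problem 14 is routine to automate; this added Python sketch maps (c₁, c₂) to (A, δ), using `atan2` so the phase angle lands in the correct quadrant, and verifies the two forms agree pointwise:

```python
import math

def to_polar(c1, c2):
    """Convert c1*cos(w t) + c2*sin(w t) to A*cos(w t - delta)."""
    A = math.hypot(c1, c2)
    delta = math.atan2(c2, c1)   # quadrant-correct phase angle
    return A, delta

# Problem 15: cos t + sin t = sqrt(2) cos(t - pi/4)
A, delta = to_polar(1.0, 1.0)
assert abs(A - math.sqrt(2)) < 1e-9
assert abs(delta - math.pi / 4) < 1e-9

# the component form and the single-wave form agree pointwise
for t in (0.0, 0.7, 2.0):
    assert abs((math.cos(t) + math.sin(t)) - A * math.cos(t - delta)) < 1e-9
```

The sign combinations in Problems 16–18 correspond exactly to the four quadrants `atan2` distinguishes.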

Single-Wave Forms of Simple Harmonic Motion

15. cos t + sin t

By Equation (4), c₁ = 1, c₂ = 1, and ω₀ = 1. By Equation (5), A = √2 and δ = π/4, yielding

cos t + sin t = √2 cos(t − π/4).

(Compare with the solution to Problem 9.)

16. cos t − sin t

By Equation (4), c₁ = 1, c₂ = −1, and ω₀ = 1. By Equation (5), A = √2 and δ = −π/4, yielding

cos t − sin t = √2 cos(t + π/4).

Because c₁ is positive and c₂ is negative, the phase angle is in the fourth quadrant.

17. −cos t + sin t

By Equation (4), c₁ = −1, c₂ = 1, and ω₀ = 1. By Equation (5), A = √2 and δ = 3π/4, yielding

−cos t + sin t = √2 cos(t − 3π/4).

Because c₁ is negative and c₂ is positive, the phase angle is in the second quadrant.


18. −cos t − sin t

By Equation (4), c₁ = −1, c₂ = −1, and ω₀ = 1. By Equation (5), A = √2 and δ = 5π/4, yielding

−cos t − sin t = √2 cos(t − 5π/4).

Because c₁ and c₂ are both negative, the phase angle is in the third quadrant.

Component Form of Harmonic Motion

Using cos(A + B) = cos A cos B − sin A sin B, we write:

19. 2 cos(2t − π) = 2{cos 2t cos π + sin 2t sin π} = −2 cos 2t

20. cos(t + π/3) = cos t cos(π/3) − sin t sin(π/3) = (1/2) cos t − (√3/2) sin t

21. 3 cos(t − 3π/4) = 3{cos t cos(3π/4) + sin t sin(3π/4)} = −(3√2/2) cos t + (3√2/2) sin t

22. cos(3t − π/6) = cos 3t cos(π/6) + sin 3t sin(π/6) = (√3/2) cos 3t + (1/2) sin 3t

Interpreting Oscillator Solutions

23. x″ + x = 0,  x(0) = 1,  x′(0) = 0

Because ω₀ = 1, we know the natural frequency is 1/(2π) Hz and the period is 2π seconds. Using the initial conditions, we find the solution (see Problem 1)

x(t) = cos t,

which tells us the amplitude is 1 and the phase angle is δ = 0 radians.

24. x″ + x = 0,  x(0) = 1,  x′(0) = 1

Because ω₀ = 1 radian per second, we know the natural frequency is 1/(2π) Hz (cycles per second), and the period is 2π. Using the initial conditions, we find the solution (see Problem 2)

x(t) = √2 cos(t − π/4),

which tells us the amplitude is √2 and the phase angle is δ = π/4 radians.


25. x″ + 9x = 0,  x(0) = 1,  x′(0) = 1

Because ω₀ = 3 radians per second, we know the natural frequency is 3/(2π) Hz (cycles per second), and the period is 2π/3. Using the initial conditions, we find the solution (see Problem 3)

x(t) = (√10/3) cos(3t − δ),

where δ = tan⁻¹(1/3), which tells us the amplitude is √10/3 and the phase angle is δ = tan⁻¹(1/3) ≈ 0.3218 radians.

26. x″ + 4x = 0,  x(0) = 1,  x′(0) = −2

Because ω₀ = 2 radians per second, we know the natural frequency is 1/π Hz (cycles per second), and the period is π. Using the initial conditions, we find the solution (see Problem 4)

x(t) = √2 cos(2t + π/4),

which tells us the amplitude is √2 and the phase angle is δ = −π/4 radians.

27. x″ + 16x = 0,  x(0) = −1,  x′(0) = 0

Because ω₀ = 4 radians per second, we know the natural frequency is 2/π Hz (cycles per second), and the period is π/2. Using the initial conditions, we find the solution (see Problem 5)

x(t) = cos(4t − π),

which tells us the amplitude is 1 and the phase angle is δ = π radians.


28. x″ + 16x = 0,  x(0) = 0,  x′(0) = 4

Because ω₀ = 4 radians per second, we know the natural frequency is 2/π Hz (cycles per second), and the period is π/2. Using the initial conditions, we find the solution (see Problem 6)

x(t) = cos(4t − π/2),

which tells us the amplitude is 1 and the phase angle is δ = π/2 radians.

29. x″ + 16π²x = 0,  x(0) = 0,  x′(0) = π

From Problem 7, x = (1/4) sin 4πt.

Amplitude = 1/4; writing x = (1/4) cos(4πt − π/2) shows the phase angle is δ = π/2, and the period is T = 2π/ω₀ = 2π/(4π) = 1/2.

30. 4x″ + π²x = 0,  x(0) = 1,  x′(0) = π

4r² + π² = 0 gives r = ±(π/2)i, so

x = c₁ cos(πt/2) + c₂ sin(πt/2).

x(0) = 1 gives c₁ = 1. Then x′ = −(π/2)c₁ sin(πt/2) + (π/2)c₂ cos(πt/2), and x′(0) = π gives (π/2)c₂ = π, so c₂ = 2. Hence

x = cos(πt/2) + 2 sin(πt/2).

Amplitude: A = √(1² + 2²) = √5, so

x = √5 cos(πt/2 − 1.11).

Relating Graphs

31. (a) See graph, next page.

(b) x″ + 0.25x = 0

ω₀ = √(k/m) = 0.5

From (4), x = c₁ cos 0.5t + c₂ sin 0.5t. Then x(0) = 0 ⇒ c₁ = 0, so x(t) = c₂ sin(t/2) and x′(t) = (c₂/2) cos(t/2).

Alternatively, you could use (5): x = A cos(0.5t − δ). Then x(0) = 0 ⇒ δ = π/2, so x(t) = A cos(t/2 − π/2) = A sin(t/2), and x′(t) = (A/2) cos(t/2).

(c) See graph.

[Graphs for (a), (b), and (d) appear in the manual.]

(d) The amplitudes are approximately 2A/3, A/2, A/3, 5A/6, and A.

Phase Portraits

For comparison of phase portraits, the main observation is that the elliptical shape depends on ω₀, which is √k in all of these problems because x″ + kx = 0.

If ω₀ = 1, trajectories are circular. As ω₀ increases above 1, ellipses become taller and thinner. As ω₀ decreases from 1 to 0, ellipses become shorter and wider. The aspect ratio is

x_max / x′_max = 1/ω₀.

Other observations include:

• All these phase portraits show closed elliptical trajectories that circulate clockwise.

• The trajectory of Problem 33 has a greater radius than that of Problem 32 because the initial condition is further from the origin.

• The trajectories in Problems 36 and 37 are on the same ellipse, with different starting points that give different solution equations.


32. x″ + x = 0,  x(0) = [1, 0]ᵀ

From Problem 1, x(t) = cos t, so x′(t) = −sin t.

33. x″ + x = 0,  x(0) = [1, 1]ᵀ

From Problem 2, x(t) = cos t + sin t, so x′(t) = −sin t + cos t.

34. x″ + 9x = 0,  x(0) = [1, 1]ᵀ

From Problem 3, x(t) = cos 3t + (1/3) sin 3t, so x′(t) = −3 sin 3t + cos 3t.


35. x″ + 4x = 0,  x(0) = [1, −2]ᵀ

From Problem 4, x(t) = cos 2t − sin 2t, so x′(t) = −2 sin 2t − 2 cos 2t.

36. x″ + 16x = 0,  x(0) = [−1, 0]ᵀ

From Problem 5, x(t) = −cos 4t, so x′(t) = 4 sin 4t.

37. x″ + 16x = 0,  x(0) = [0, 4]ᵀ

From Problem 6, x(t) = sin 4t, so x′(t) = 4 cos 4t.


38. x″ + 16π²x = 0,  x(0) = [0, π]ᵀ

From Problem 7, x(t) = (1/4) sin 4πt, so x′(t) = π cos 4πt.

39. 4x″ + π²x = 0,  x(0) = [1, π]ᵀ

From Problem 8, x(t) = cos(πt/2) + 2 sin(πt/2), so x′(t) = −(π/2) sin(πt/2) + π cos(πt/2).

Matching Problems

40. B

41. A

42. D

43. C


Changing Frequencies

44. (a) ω₀ = 0.5 gives the tx curve with the lowest frequency (fewest humps); ω₀ = 2 gives the highest frequency (most humps).

(b) ω₀ = 0.5 gives the innermost phase-plane trajectory; as ω₀ increases, the amplitude of x′ increases. In Figure 4.1.8 the trajectory that is not totally visible is the one for ω₀ = 2.

[Figure: tx curves for ω₀ = 0.5, 1, and 2 on 0 ≤ t ≤ 2π.]

Detective Work

45. (a) The curve

y = 1.4 cos(t − 8π/5)

is a sinusoidal curve with period 2π, amplitude A ≈ 1.4, and phase angle δ ≈ 8π/5.

(b) From this graph we estimate ω₀ = 1, A ≈ 2.3, and δ ≈ π/4. Thus, we have

x(t) = A cos(ω₀t − δ) = 2.3 cos(t − π/4) = 2.3[cos t cos(π/4) + sin t sin(π/4)]
     = 2.3(√2/2){cos t + sin t} ≈ 1.6(cos t + sin t).

Pulling a Weight

46. (a) The mass is m = 2 kg. Because a force of 8 N stretches the spring 0.5 meters, we find that k = 8/0.5 = 16 N/m. If we then release the weight, the IVP describing the motion of the weight is

2x'' + 16x = 0, or x'' + 8x = 0,  x(0) = 0.5, x'(0) = 0.

The solution of the differential equation is x(t) = A cos(√8 t − δ). Using the initial conditions, we get the simple oscillation

x(t) = 0.5 cos √8 t.


(b) Amplitude = 1/2 m; T = 2π/ω₀ = 2π/√8 sec; f = √8/(2π) cycles per second.

(c) Setting cos(√8 t) = 0, we find that the weight will pass through equilibrium at 1/4 of the period, or after t = π/(2√8) ≈ 0.56 seconds. At that time the velocity is

x'(0.56) = −0.5√8 sin(π/2) ≈ −1.414 m/sec,

moving away from the original displacement.
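The numbers in parts (b) and (c) can be reproduced with a short calculation (an addition of mine, not part of the manual):

```python
import math

# Problem 46: m = 2 kg, k = 16 N/m  =>  x'' + 8x = 0, omega0 = sqrt(8).
m, k = 2.0, 16.0
omega0 = math.sqrt(k / m)          # sqrt(8) ≈ 2.828 rad/s
T = 2 * math.pi / omega0           # period
t_cross = T / 4                    # first equilibrium crossing (quarter period)
v_cross = -0.5 * omega0 * math.sin(omega0 * t_cross)  # x'(t) at the crossing

print(round(t_cross, 2), round(v_cross, 3))   # ≈ 0.56 s and ≈ -1.414 m/s
```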

Finding the Differential Equation

47. (a) The mass is m = 500 gm, which means the force acting on the spring is 500 × 980 dynes. This stretches the spring 50 cm, so the spring constant is k = (500 × 980)/50 = 9800 dynes/cm. The mass is then pulled down 10 cm from its initial displacement, giving x(0) = 10 (as long as we measure downward to be the positive direction, which is typical in these problems). The initial velocity of the mass is assumed to be zero, so x'(0) = 0. Thus, the IVP for the mass is

500x'' + 9800x = 0, or 5x'' + 98x = 0,  x(0) = 10, x'(0) = 0.

(b) The solution of the differential equation found in part (a) is

x(t) = A cos(√(98/5) t − δ) = 10 cos(√(98/5) t).

(c) In part (b) the amplitude is 10 cm, the phase angle is 0, the period is T = 2π√(m/k) = 2π√(5/98) ≈ 1.4 sec, and the natural frequency is given by the reciprocal f = 1/1.4 ≈ 0.71 oscillations per second.


Initial-Value Problems

48. (a) The weight is 16 lb, so the mass is roughly 16/32 = 1/2 slug. (See Table 4.1.1 in the text.) This mass stretches the spring 1/2 foot, hence k = 16/(1/2) = 32 lb/ft. This yields the equation

(1/2)x'' + 32x = 0, or x'' + 64x = 0.

The initial conditions are that the mass is pulled down 4 inches (1/3 foot) from equilibrium and then given an upward velocity of 4 ft/sec. This gives the initial conditions of x(0) = 1/3 ft, x'(0) = −4 ft/sec, using the engineering convention that for x, down is positive.

(b) We have the same equation x'' + 64x = 0, but the initial conditions are x(0) = −1/6 ft, x'(0) = 1 ft/sec.

One More Weight

49. The mass is m = 12/32 = 3/8 slug. The spring is stretched 1/2 foot, so the spring constant is k = 12/(1/2) = 24 lb/ft. The initial position of the mass is 4 inches (1/3 ft) upward, so x(0) = −1/3. The initial motion is 2 ft/sec upward, and thus x'(0) = −2. Hence, the equation for the motion of the mass is

x'' + 64x = 0,  x(0) = −1/3, x'(0) = −2,

which has the solution

x(t) = −(1/3) cos 8t − (1/4) sin 8t.


Writing this in polar form, we have

A = √(c₁² + c₂²) = √((−1/3)² + (−1/4)²) = 5/12,
δ = tan⁻¹(c₂/c₁) = tan⁻¹(3/4) ≈ 3.78 radians (angle in the 3rd quadrant).

Hence, we have the solution in polar form

x(t) = (5/12) cos(8t − 3.78).

See figure.

[Figure: x(t) = (5/12) cos(8t − 3.78) for 0 ≤ t ≤ 1.2; spring oscillation]

Comparing Harmonic Motions

50. The period of simple harmonic motion is given by T = 2π/ω₀, where ω₀ = √(k/m). Notice that this does not depend at all on our initial conditions. The period is the same, and so is the frequency, but the amplitude will be twice that in the first case.

Testing Your Intuition

51. x'' + x + x³ = 0

Here we have a vibrating spring with no friction, but a nonlinear restoring force F = −x − x³ that is stronger than a purely linear force −x. For small displacement x the nonlinear F will not be much different (for small x, x³ is very small), but for larger x, the force F will be much stronger than in a linear spring; as F increases, the frequency of the vibration increases. This equation is called Duffing's (strong) equation, and the associated springs are called strong springs.

52. x'' + x − x³ = 0

Here we have a vibrating spring with no friction, and a nonlinear restoring force F = −x + x³. For small displacement x the nonlinear term x³ has little effect, but as x increases toward 1, the restoring force F diminishes (i.e., the spring weakens when it is stretched a lot, and the restoring force becomes zero when x = 1). The decreasing F causes decreasing frequency (and increasing period). This equation is called Duffing's (weak) equation, and the associated springs are called weak springs.


53. x'' − x = 0

This equation describes a spring with no friction and a negative restoring force. You may wonder if there are such physical systems. In the next two sections we will see that this equation describes the motion of an inverted pendulum (4.3 Problems 58, 59), and it has solutions sinh t and cosh t (4.2 Example 2), in contrast to x'' + x = 0, which has solutions sin t and cos t. The restoring force for the equation under discussion is always directed away from the equilibrium position; hence the solution always moves away from the equilibrium, which is unstable.

54. x'' + (1/t)x' + x = 0

This equation can be interpreted as describing the motion of a vibrating mass that has infinite friction (1/t)x' at t = 0, but the friction immediately begins to diminish and approaches zero as t becomes very large. You may simulate in your mind the motion of such a system. Do you think for large t that the oscillation might behave much like simple harmonic motion? (See 4.3 Problem 68.)

55. x'' + (x² − 1)x' + x = 0

This is called van der Pol's equation and describes oscillations (mostly electrical) where internal friction depends on the value of the dependent variable x. Note that when |x| < 1, we actually have negative friction, so for a small displacement x we would expect the system to move away from the zero solution (an unstable equilibrium) in the direction of |x| = 1. But when |x| > 1, we will have positive friction causing damping. We will see in 4.3 Problem 70 and in Chapter 7 that there is a periodic solution between small x and large x that attracts all these other solutions.

56. x'' + tx = 0

Here we have a vibrating spring with no friction, but the restoring force −tx gets stronger as time passes. Hence we expect to see no damping, but faster vibrations as t increases.

LR-Circuit

57. (a) Without having a capacitor to store energy, we do not expect the current in the circuit to oscillate. If there had been a constant voltage V₀ on in the past, we would expect the current to be (by Ohm's law) I = V₀/R. If we then shut off the voltage, we would expect the current to die off in the presence of a resistance.

(b) If a current I passes through a resistor with resistance R, then the voltage drop is RI; the voltage drop across an inductor of inductance L is LI'. We obtain the IVP:

LI' + RI = 0,  I(0) = V₀/R.


(c) The solution of the IVP is

I(t) = (V₀/R) e^{−(R/L)t}.

(d) If R = 40 ohms, L = 5 henries, and V₀ = 10 volts, then I(t) = (1/4) e^{−8t} amps.

LC-Circuit

58. (a) With a nonzero initial current and no resistance, we do not expect the current to damp to

zero. We would expect an oscillatory current due to the capacitor. Thus the charge on the

capacitor would oscillate indefinitely. The exact behavior depends on the initial

conditions and the values of the inductance and capacitance.

(b) Kirchhoff's voltage law states that the sum of the voltage drops around the circuit is equal to the impressed voltage source. Hence, we have

LI' + (1/C)∫I dt = 0,

or, in terms of the charge across the capacitor, we have the IVP

LQ'' + (1/C)Q = 0,  Q(0) = 0, Q'(0) = 5.

(c) The solution of the IVP is

Q(t) = 5√(LC) sin(t/√(LC)).

This agrees with the oscillatory behavior predicted in part (a).

(d) With values L = 10 henries, C = 10⁻³ farads, the charge on the capacitor is

Q(t) = (5/√100) sin(√100 t) = (1/2) sin 10t.
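A quick numeric spot-check of part (d) — my own addition, assuming the values L = 10 H and C = 10⁻³ F from the solution — verifies that Q(t) = 0.5 sin 10t satisfies LQ'' + (1/C)Q = 0 by finite differences:

```python
import math

L, C = 10.0, 1e-3
omega0 = 1.0 / math.sqrt(L * C)    # = 10 rad/s
A = 5.0 * math.sqrt(L * C)         # Q'(0) = 5 gives amplitude 5/omega0 = 0.5

def Q(t):
    return A * math.sin(omega0 * t)

# Estimate Q'' at a sample time and check the residual of L*Q'' + Q/C.
h, t0 = 1e-5, 0.3
Qpp = (Q(t0 + h) - 2*Q(t0) + Q(t0 - h)) / h**2
residual = L * Qpp + Q(t0) / C

print(omega0, A, abs(residual) < 1e-3)
```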

A Pendulum Experiment

59. The pendulum equation is

θ'' + (g/L) sin θ = 0.

For small θ, we can approximate sin θ ≈ θ, giving the differential equation

θ'' + (g/L)θ = 0.


This is the equation of simple harmonic motion with circular frequency ω₀ = √(g/L) and natural frequency f₀ = (1/2π)√(g/L). Hence, the period of motion is

T = 1/f₀ = 2π√(L/g).

T_earth/T_sun = √(g_sun/g_earth) = √400,000 = 100√40 ≈ 632.

Changing into Systems

60. 4x'' − 2x' + 3x = −17 cos t

x' = y
y' = (1/4)(−3x + 2y − 17 cos t)

61. Lq'' + Rq' + (1/C)q = V(t)

q' = I
I' = (1/L)(−(1/C)q − RI + V(t))

62. 5q'' + 15q' + (1/10)q = 5 cos 3t

q' = I
I' = −(1/50)q − 3I + cos 3t

63. t²x'' + 4tx' + (sin 2t)x = t², t > 0; dividing by t² gives x'' + (4/t)x' + (sin 2t/t²)x = 1.

x' = y
y' = −(sin 2t/t²)x − (4/t)y + 1

64. 4x'' + 16x = 4 sin t, or x'' + 4x = sin t

x' = y
y' = −4x + sin t

Circular Motion

65. Writing the motion in terms of polar coordinates r and θ and using the fact that the angular velocity is constant, we have θ' = ω₀ (a constant). We also know the particle moves along a circle of constant radius, which makes r a constant. We then have the relation x = r cos θ, and hence


x' = −r θ' sin θ
x'' = −r θ'' sin θ − r (θ')² cos θ.

Because θ'' = 0 and θ' = ω₀, we arrive at the differential equation

x'' + ω₀² x = 0.

Another Harmonic Motion

66. For simple harmonic motion the circular frequency ω₀ is

ω₀ = √(kR²/(mR² + I)),

so the natural frequency f₀ is

f₀ = (1/2π)√(kR²/(mR² + I)).

Motion of a Buoy

67. The buoy moves in simple harmonic motion, so the period is

T = 2π/ω₀ = 2π√(m/k) = 2.7.

We have one equation in two unknowns, but the buoyancy equation yields the second equation. If we push the buoy down 1 foot, the force upwards will be F = Vρ, where V is the submerged volume and ρ is the density of water. In this case, V = πr²h, r = 9 inches = 0.75 ft, h = 1 ft, and ρ = 62.5 lb/ft³, so the force required to push the buoy down 1 foot is (9/16)π(62.5) ≈ 110 lb. But k is the force divided by distance, so k = 110/1 = 110 lb/ft. Finally, solving for m in the equation for T, we get m = kT²/(4π²), and substituting in all of our numbers, we arrive at m ≈ 20.4 slugs (see Table 4.1.1 in the text). The buoy weighs mg = (20.4)(32.2) ≈ 657 lb.
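The arithmetic chain in Problem 67 is easy to get wrong by hand, so here is a quick reproduction (my own addition, using the same assumed values):

```python
import math

r, h, rho = 0.75, 1.0, 62.5          # ft, ft, lb/ft^3 (density of water)
F = math.pi * r**2 * h * rho         # buoyant force for 1 ft of submersion
k = F / 1.0                          # lb/ft, since k = force / distance
T = 2.7                              # observed period, sec
m = k * T**2 / (4 * math.pi**2)      # from T = 2*pi*sqrt(m/k)
weight = m * 32.2                    # lb

print(round(F), round(m, 1), round(weight))   # ≈ 110, 20.4, 657
```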

Los Angeles to Tokyo

68. (a) Along the tunnel,

mx'' = −kr cos θ = −kx.

x(0) = d if x is measured positive to the left of the center of the tunnel.
x'(0) = 0 means that the train starts from rest (as soon as a brake is released).


(b) The solution to the IVP in part (a) is

x(t) = c₁ cos ω₀t + c₂ sin ω₀t,  where ω₀ = √(k/m).

At the surface of the earth mg = kr, where r = R, so ω₀ = √(k/m) = √(g/R).

Letting x(0) = d yields c₁ = d, while letting x'(0) = 0 yields c₂ = 0. Hence we have

x(t) = d cos(√(g/R) t).

For the train to go from L.A. to Tokyo, x(t) goes from d to −d, and √(g/R) t goes from 0 to π. Hence,

t_f = π√(R/g) = π√((4000 mi × 5280 ft/mi)/(32 ft/sec²)) = 2552 sec ≈ 42.5 minutes.

(c) The solution t_f = π√(R/g) from part (b) does not depend on the location of the points on the earth's surface; π, R, and g are all constant.

Factoring Out Friction

69. (a) Letting x(t) = e^{−(b/2m)t} X(t), we have

x'(t) = e^{−(b/2m)t} X'(t) − (b/2m) e^{−(b/2m)t} X(t)
x''(t) = e^{−(b/2m)t} X''(t) − (b/m) e^{−(b/2m)t} X'(t) + (b²/4m²) e^{−(b/2m)t} X(t).


Substituting this into the original equation (1) and dividing through by e^{−(b/2m)t}, we arrive at

m[X'' − (b/m)X' + (b²/4m²)X] + b[X' − (b/2m)X] + kX = 0.

Rearranging terms gives

mX'' + [−b + b]X' + [b²/4m − b²/2m + k]X = 0

or

mX'' + (k − b²/4m)X = 0.

(b) If we assume k − b²/4m > 0, then dividing by m and letting ω₀ = (1/2m)√(4mk − b²), we find the solution of this DE in X is

X(t) = c₁ cos ω₀t + c₂ sin ω₀t = A cos(ω₀t − δ).

Thus, we have

x(t) = e^{−(b/2m)t} X(t) = A e^{−(b/2m)t} cos(ω₀t − δ).
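The damped-oscillation formula just derived can be confirmed numerically; the sketch below (my own addition) picks sample values m = 2, b = 1, k = 3 (so 4mk − b² > 0) and checks the residual of mx'' + bx' + kx by finite differences for the A = 1, δ = 0 case:

```python
import math

m, b, k = 2.0, 1.0, 3.0
omega0 = math.sqrt(4*m*k - b**2) / (2*m)

def x(t):
    # x(t) = e^{-bt/2m} cos(omega0 t), i.e. A = 1 and delta = 0
    return math.exp(-b*t/(2*m)) * math.cos(omega0 * t)

h, t0 = 1e-5, 0.8
xp = (x(t0 + h) - x(t0 - h)) / (2*h)            # x' estimate
xpp = (x(t0 + h) - 2*x(t0) + x(t0 - h)) / h**2  # x'' estimate
residual = m*xpp + b*xp + k*x(t0)

print(abs(residual) < 1e-4)
```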

Suggested Journal Entry

70. Student Project

SECTION 4.2 Real Characteristic Roots 317

4.2  Real Characteristic Roots

Real Characteristic Roots

1. y'' = 0

The characteristic equation is r² = 0, so there is a double root at r = 0. Thus, the general solution is y(t) = c₁e^{0t} + c₂te^{0t} = c₁ + c₂t.

2. y'' − y' = 0

The characteristic equation is r² − r = 0, which has roots 0, 1. Thus, the general solution is y(t) = c₁ + c₂e^t.

3. y'' − 9y = 0

The characteristic equation is r² − 9 = 0, which has roots 3, −3. Thus, the general solution is y(t) = c₁e^{3t} + c₂e^{−3t}.

4. y'' − y = 0

The characteristic equation is r² − 1 = 0, which has roots 1, −1. Thus, the general solution is y(t) = c₁e^t + c₂e^{−t}.

5. y'' − 3y' + 2y = 0

The characteristic equation is r² − 3r + 2 = 0, which factors into (r − 2)(r − 1) = 0, and hence has roots 1, 2. Thus, the general solution is y(t) = c₁e^t + c₂e^{2t}.

6. y'' − y' − 2y = 0

The characteristic equation is r² − r − 2 = 0, which factors into (r − 2)(r + 1) = 0, and hence has roots 2, −1. Thus, the general solution is y(t) = c₁e^{2t} + c₂e^{−t}.

7. y'' + 2y' + y = 0

The characteristic equation is r² + 2r + 1 = 0, which factors into (r + 1)(r + 1) = 0, and hence has the double root −1, −1. Thus, the general solution is y(t) = c₁e^{−t} + c₂te^{−t}.


8. 4y'' − 4y' + y = 0

The characteristic equation is 4r² − 4r + 1 = 0, which factors into (2r − 1)(2r − 1) = 0, and hence has the double root 1/2, 1/2. Thus, the general solution is y(t) = c₁e^{t/2} + c₂te^{t/2}.

9. 2y'' − 3y' + y = 0

The characteristic equation is 2r² − 3r + 1 = 0, which factors into (2r − 1)(r − 1) = 0, and hence has roots 1/2, 1. Thus, the general solution is y(t) = c₁e^{t/2} + c₂e^t.

10. y'' − 6y' + 9y = 0

The characteristic equation is r² − 6r + 9 = 0, which factors into (r − 3)(r − 3) = 0, and hence has the double root 3, 3. Thus, the general solution is y(t) = c₁e^{3t} + c₂te^{3t}.

11. y'' − 8y' + 16y = 0

The characteristic equation is r² − 8r + 16 = 0, which factors into (r − 4)(r − 4) = 0, and hence has the double root 4, 4. Thus, the general solution is y(t) = c₁e^{4t} + c₂te^{4t}.

12. y'' − y' − 6y = 0

The characteristic equation is r² − r − 6 = 0, which factors into (r + 2)(r − 3) = 0, and hence has roots −2, 3. Thus, the general solution is y(t) = c₁e^{−2t} + c₂e^{3t}.

13. y'' + 2y' − y = 0

The characteristic equation is r² + 2r − 1 = 0, which factors into (r + 1 − √2)(r + 1 + √2) = 0, and hence has roots −1 + √2, −1 − √2. Thus, the general solution is y(t) = e^{−t}(c₁e^{√2 t} + c₂e^{−√2 t}).

14. 9y'' + 6y' + y = 0

The characteristic equation is 9r² + 6r + 1 = 0, which factors into (3r + 1)² = 0, and hence has the double root −1/3, −1/3. Thus, the general solution is y(t) = c₁e^{−t/3} + c₂te^{−t/3}.
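For the repeated-root cases above, the less obvious basis function is te^{rt}; a quick numeric residual check (my own addition, not from the manual) confirms it solves each equation:

```python
import math

# (a, b, c, r): coefficients of a*y'' + b*y' + c*y = 0 with double root r,
# taken from Problems 7, 8, 10, 11, 14 above.
cases = [(1, 2, 1, -1.0), (4, -4, 1, 0.5), (1, -6, 9, 3.0),
         (1, -8, 16, 4.0), (9, 6, 1, -1/3)]

h, t0 = 1e-5, 0.4
all_ok = True
for a, b, c, r in cases:
    y = lambda t, r=r: t * math.exp(r * t)
    yp = (y(t0 + h) - y(t0 - h)) / (2 * h)
    ypp = (y(t0 + h) - 2 * y(t0) + y(t0 - h)) / h**2
    all_ok = all_ok and abs(a * ypp + b * yp + c * y(t0)) < 1e-3

print(all_ok)
```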


Initial Values Specified

15. y'' − 25y = 0,  y(0) = 1, y'(0) = 0

The characteristic equation of the differential equation is r² − 25 = 0, which factors into (r − 5)(r + 5) = 0, and thus has roots 5, −5. Hence,

y(t) = c₁e^{5t} + c₂e^{−5t}.

Substituting in the initial condition y(0) = 1 gives c₁ + c₂ = 1. Substituting in y'(0) = 0 gives 5c₁ − 5c₂ = 0. Solving for c₁, c₂ gives c₁ = c₂ = 1/2. Thus the solution is

y(t) = (1/2)e^{5t} + (1/2)e^{−5t}.

16. y'' + y' − 2y = 0,  y(0) = 1, y'(0) = 0

The characteristic equation of the differential equation is r² + r − 2 = 0, which factors into (r + 2)(r − 1) = 0, and thus has roots 1, −2. Thus, the general solution is

y(t) = c₁e^t + c₂e^{−2t}.

Substituting into y(0) = 1, y'(0) = 0 yields c₁ = 2/3, c₂ = 1/3, so

y(t) = (2/3)e^t + (1/3)e^{−2t}.

17. y'' + 2y' + y = 0,  y(0) = 0, y'(0) = 1

The characteristic equation is r² + 2r + 1 = 0, which factors into (r + 1)(r + 1) = 0, and hence has the double root −1, −1. Thus, the general solution is

y(t) = c₁e^{−t} + c₂te^{−t}.

Substituting into y(0) = 0, y'(0) = 1 yields c₁ = 0, c₂ = 1, so y(t) = te^{−t}.

18. y'' − 9y = 0,  y(0) = −1, y'(0) = 0

The characteristic equation is r² − 9 = 0, which factors into (r − 3)(r + 3) = 0, and hence has roots 3, −3. Thus, the general solution is

y(t) = c₁e^{3t} + c₂e^{−3t}.

Substituting into y(0) = −1, y'(0) = 0 yields c₁ = c₂ = −1/2, so

y(t) = −(1/2)e^{3t} − (1/2)e^{−3t}.


19. y'' − 6y' + 9y = 0,  y(0) = 0, y'(0) = −1

The characteristic equation is r² − 6r + 9 = 0, which factors into (r − 3)(r − 3) = 0, and hence has the double root 3, 3. Thus, the general solution is

y(t) = c₁e^{3t} + c₂te^{3t}.

Substituting into y(0) = 0, y'(0) = −1 yields c₁ = 0, c₂ = −1, so y(t) = −te^{3t}.

20. y'' + y' − 6y = 0,  y(0) = 1, y'(0) = 1

The characteristic equation is r² + r − 6 = 0, which factors into (r + 3)(r − 2) = 0, and hence has roots −3, 2. Thus, the general solution is

y(t) = c₁e^{−3t} + c₂e^{2t}.

Substituting into y(0) = 1, y'(0) = 1 yields c₁ = 1/5, c₂ = 4/5, so

y(t) = (1/5)e^{−3t} + (4/5)e^{2t}.

21. y'' − y' = 0,  y(0) = 2, y'(0) = −1

r² − r = 0 (characteristic equation)
r(r − 1) = 0, so r = 0, 1.

y = c₁ + c₂e^t ⇒ 2 = c₁ + c₂
y' = c₂e^t ⇒ −1 = c₂, c₁ = 3

y = 3 − e^t

22. y'' − 4y' − 12y = 0,  y(0) = 1, y'(0) = −1

r² − 4r − 12 = 0 (characteristic equation)
(r + 2)(r − 6) = 0, so r = −2, 6.

y = c₁e^{−2t} + c₂e^{6t}
y' = −2c₁e^{−2t} + 6c₂e^{6t}

y(0) = 1 ⇒ c₁ + c₂ = 1
y'(0) = −1 ⇒ −2c₁ + 6c₂ = −1
⇒ c₁ = 7/8, c₂ = 1/8

y = (7/8)e^{−2t} + (1/8)e^{6t}
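The 2×2 system in Problem 22 can be recomputed by Cramer's rule (a check of mine, not part of the manual):

```python
# c1 + c2 = 1  and  -2*c1 + 6*c2 = -1, from y(0) = 1, y'(0) = -1
a11, a12, b1 = 1.0, 1.0, 1.0
a21, a22, b2 = -2.0, 6.0, -1.0
det = a11 * a22 - a12 * a21          # = 8
c1 = (b1 * a22 - a12 * b2) / det     # = 7/8
c2 = (a11 * b2 - b1 * a21) / det     # = 1/8

print(c1, c2)
```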


Bases and Solution Spaces

23. y'' − 4y' = 0

r² − 4r = 0 (characteristic equation)
r(r − 4) = 0 ⇒ r = 0, 4

Basis: {1, e^{4t}}
Solution Space: {y | y = c₁ + c₂e^{4t}; c₁, c₂ ∈ ℝ}

24. y'' − 10y' + 25y = 0

r² − 10r + 25 = 0 (characteristic equation)
(r − 5)² = 0 ⇒ r = 5, 5

Basis: {e^{5t}, te^{5t}}
Solution Space: {y | y = c₁e^{5t} + c₂te^{5t}; c₁, c₂ ∈ ℝ}

25. 5y'' − 10y' − 15y = 0

5r² − 10r − 15 = 0 (characteristic equation)
r² − 2r − 3 = 0 ⇒ (r − 3)(r + 1) = 0 ⇒ r = 3, −1

Basis: {e^{3t}, e^{−t}}
Solution Space: {y | y = c₁e^{3t} + c₂e^{−t}; c₁, c₂ ∈ ℝ}

26. y'' + 2√2 y' + 2y = 0

r² + 2√2 r + 2 = 0 (characteristic equation)
(r + √2)(r + √2) = 0 ⇒ r = −√2, −√2

Basis: {e^{−√2 t}, te^{−√2 t}}
Solution Space: {y | y = c₁e^{−√2 t} + c₂te^{−√2 t}; c₁, c₂ ∈ ℝ}

Other Bases

27. y'' − 4y = 0

r² − 4 = 0 (characteristic equation)
r = ±2, so {e^{2t}, e^{−2t}} is a basis.

To show {cosh 2t, sinh 2t} is a basis, we need only show that cosh 2t and sinh 2t are linearly independent solutions:

W = det[cosh 2t, sinh 2t; 2 sinh 2t, 2 cosh 2t] = 2 cosh² 2t − 2 sinh² 2t.

Since

cosh² 2t = [(e^{2t} + e^{−2t})/2]² = (e^{4t} + 2 + e^{−4t})/4
sinh² 2t = [(e^{2t} − e^{−2t})/2]² = (e^{4t} − 2 + e^{−4t})/4,

we have cosh² 2t − sinh² 2t = 1, so W = 2 ≠ 0.

∴ cosh 2t, sinh 2t are linearly independent.

Substitute y = cosh 2t, y' = 2 sinh 2t, y'' = 4 cosh 2t. Then y'' − 4y = 4 cosh 2t − 4 cosh 2t = 0, so y = cosh 2t is a solution. In similar fashion, we can show that y = sinh 2t is also a solution.

To show that {e^{2t}, cosh 2t} is a basis, we use the facts that e^{2t} and cosh 2t are solutions. Then:

W = det[e^{2t}, cosh 2t; 2e^{2t}, 2 sinh 2t] = 2e^{2t} sinh 2t − 2e^{2t} cosh 2t = (e^{4t} − 1) − (e^{4t} + 1) = −2 ≠ 0.

∴ e^{2t} and cosh 2t are linearly independent.

28. y'' = 0

r² = 0 (characteristic equation), so that r = 0, 0.

Basis: {1, t}

To show {t + 1, t − 1} is also a basis: note that for both y = t + 1 and y = t − 1, y'' = 0, so both are solutions.

W = det[t + 1, t − 1; 1, 1] = (t + 1) − (t − 1) = 2

∴ t + 1, t − 1 are linearly independent.

To show {2t, 3t − 1} is another basis: note that for both y = 2t and y = 3t − 1, y'' = 0, so both are solutions.

W = det[2t, 3t − 1; 2, 3] = 6t − 2(3t − 1) = 2

∴ 2t, 3t − 1 are linearly independent.


The Wronskian Test

29. W = det of the 4 × 4 matrix

[t + 1   t − 1   t² + t   t³ ]
[  1       1     2t + 1   3t²]
[  0       0        2     6t ]
[  0       0        0      6 ]

Because the lower-left 2 × 2 block is zero, the determinant is the product of the two diagonal blocks:

W = (t + 1)(12) − 12(t − 1) = 24 ≠ 0.

Yes, {t + 1, t − 1, t² + t, t³} is a basis for the solution space of y⁽⁴⁾ = 0.

30. W = det[te^{5t}, e^{5t}, 2e^{5t} − 1;
            (5t + 1)e^{5t}, 5e^{5t}, 10e^{5t};
            (25t + 10)e^{5t}, 25e^{5t}, 50e^{5t}].

Replacing the third column by (column 3) − 2(column 2) turns it into (−1, 0, 0)^T, so

W = −det[(5t + 1)e^{5t}, 5e^{5t}; (25t + 10)e^{5t}, 25e^{5t}]
  = −e^{10t}[25(5t + 1) − 5(25t + 10)] = 25e^{10t} ≠ 0.

Yes, {te^{5t}, e^{5t}, 2e^{5t} − 1} is a basis for the solution space of y''' − 10y'' + 25y' = 0.
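The column-operation shortcut used in Problem 30 can be double-checked by brute force; this snippet (my own addition) evaluates the raw 3 × 3 determinant at a sample time and compares it with 25e^{10t}:

```python
import math

t = 0.2
f   = [t * math.exp(5*t),            math.exp(5*t),     2*math.exp(5*t) - 1]
fp  = [(5*t + 1) * math.exp(5*t),    5*math.exp(5*t),   10*math.exp(5*t)]
fpp = [(25*t + 10) * math.exp(5*t),  25*math.exp(5*t),  50*math.exp(5*t)]

def det3(m):
    # Cofactor expansion along the first row.
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

W = det3([f, fp, fpp])
print(W, 25 * math.exp(10 * t))   # the two values agree
```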

31. The given set has only three solutions, so it cannot be a basis. A basis for the solution space of y⁽⁴⁾ = 0 must have 4 linearly independent solutions.

Sorting Graphs

32.


Relating Graphs

For Problems 33−35, x'' + 5x' + 6x = 0 has (from Example 1) solutions

x(t) = c₁e^{−2t} + c₂e^{−3t}          (1)
x'(t) = −2c₁e^{−2t} − 3c₂e^{−3t}      (2)

33. (a), (b)

x(0) ≈ −10 and x'(0) ≈ 0 give

c₁ + c₂ = −10,  −2c₁ − 3c₂ = 0,

so c₁ = −30, c₂ = 20.

(c) From (1) in the box, x(t) = −30e^{−2t} + 20e^{−3t}.

For t > 0, each term diminishes as t increases; the result remains negative, below the t-axis.

For t < 0, each exponential increases as t decreases; the negative term cancels the positive term when 30e^{−2t} = 20e^{−3t}, or e^{−t} = 1.5, that is, when t = −ln 1.5 ≈ −0.405, which looks about right on the tx-graph.

(d) From (2), x'(t) = 60e^{−2t} − 60e^{−3t} = 60e^{−2t}(1 − e^{−t}), which is always positive for t > 0, decreasing as t increases.

x'(t) reaches a maximum when

x''(t) = −120e^{−2t} + 180e^{−3t} = 0, i.e., −2 + 3e^{−t} = 0, e^{−t} = 2/3,

so t = −ln(2/3) ≈ 0.405, which looks about right on the tx'-graph.
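The zero crossing found in part (c) is easy to confirm numerically (my addition, not part of the manual):

```python
import math

# x(t) = -30e^{-2t} + 20e^{-3t} crosses zero where 30e^{-2t} = 20e^{-3t},
# i.e. where e^{-t} = 1.5.
t_zero = -math.log(1.5)
x = lambda t: -30 * math.exp(-2 * t) + 20 * math.exp(-3 * t)

print(round(t_zero, 3), abs(x(t_zero)) < 1e-9)
```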


34. (a) [Figure]

(b) x(0) ≈ 5 and x'(0) ≈ 0 give

c₁ + c₂ = 5,  −2c₁ − 3c₂ = 0.

Because all problems for finding the cᵢ are of the type Ac = b, we solve for c = A⁻¹b. We have

A = [1, 1; −2, −3],  so A⁻¹ = −[−3, −1; 2, 1] = [3, 1; −2, −1],

and here b = [5, 0]^T, so

c = [3, 1; −2, −1][5, 0]^T = [15, −10]^T.

(c) From (1) in the box on the previous page, x(t) = 15e^{−2t} − 10e^{−3t}.

As t increases from zero, both exponentials decrease with their sum remaining positive, which agrees with the tx-graph.

(d) From (2), x'(t) = −30e^{−2t} + 30e^{−3t} = −30e^{−2t}(1 − e^{−t}).

For t > 0, this quantity is always negative, and as t increases, each term gets closer to zero, in agreement with the tx'-graph.


35. (a) x(0) ≈ 0 and x'(0) ≈ −8

(b) By the method of 34(b),

c = A⁻¹[0, −8]^T = −[−3, −1; 2, 1][0, −8]^T = −[8, −8]^T = [−8, 8]^T,

so from (1), x(t) = −8e^{−2t} + 8e^{−3t}.

(c) For t > 0, e^{−2t} > e^{−3t}, so the sum is always negative and approaches zero as t increases, in agreement with the tx-graph.

(d) From (2), x'(t) = 16e^{−2t} − 24e^{−3t} = 0 gives 2e^{−2t} − 3e^{−3t} = 0, which yields e^{−t} = 2/3, or t ≈ 0.405, which looks about right on the tx'-graph.

For t > 0.405, x'(t) > 0 and decreases toward zero as t increases.


For Problems 36−39, x'' − x' − 6x = 0 has (from Example 1) solutions

x(t) = c₁e^{−2t} + c₂e^{3t}          (1)
x'(t) = −2c₁e^{−2t} + 3c₂e^{3t}      (2)

36. (a) [Figure]

(b) From (1): c₁ + c₂ = 0. From (2): −2c₁ + 3c₂ = 2.

c₁ = −2/5, c₂ = 2/5

x(t) = −(2/5)e^{−2t} + (2/5)e^{3t}
x'(t) = (4/5)e^{−2t} + (6/5)e^{3t}

(c) For t > 0, e^{−2t} < e^{3t}, so x(t) is always positive, and as t increases, so does x(t). This result agrees with the tx-graph.

For t < 0, e^{3t} < e^{−2t}, so x(t) is always negative, and as t becomes more negative, x(t) becomes more negative.

(d) x'(t) is always positive. For t > 0, e^{−2t} < e^{3t}, so the second term dominates as t increases, and x'(t) increases as well. These facts are in agreement with the tx'-graph.

37. (a)


(b) From (1), c

1

+ c

2

= 2

From (2), −2c

1

+ 3c

2

= 0

c

1

=

2

6 4

;

5 5

c =

x(t) =

2 3

6 4

5 5

t t

e e

−

+

2 3

12 12

( )

5 5

t t

x t e e

−

= − +

(c) x(t) is always positive.

For t > 0, as t increases, the first term decreases toward 0 and the second term increases

ever more rapidly, in agreement with the tx-graph.

(d) For t > 0, e

3t

> e

−2t

, so ( ) x t is positive, and ( ) x t increases as t increases, as shown on the

tx graph.

For t < 0, the first term will dominate and ( ) x t will be negative, ever more so as t

becomes more negative, in agreement with the tx -graph.

38. (a) [Figure]

(b) From (1): c₁ + c₂ = −3. From (2): −2c₁ + 3c₂ = 0.

c₁ = −9/5, c₂ = −6/5

x(t) = −(9/5)e^{−2t} − (6/5)e^{3t}
x'(t) = (18/5)e^{−2t} − (18/5)e^{3t}

(c) x(t) is always negative, with a maximum at t = 0. (See part (d) and set x'(t) = 0.) These facts agree with the tx-graph.

(d) For t > 0, e^{3t} > e^{−2t}, so the negative term dominates in x'(t) and x'(t) is negative, ever more so as t increases.

For t < 0, e^{−2t} > e^{3t}, so the positive term dominates in x'(t) and x'(t) is positive, ever more so as t becomes more negative. These facts agree with the tx'-graph.


39. (a) [Figure]

(b) From (1): c₁ + c₂ = 0. From (2): −2c₁ + 3c₂ = −1.

c₁ = 1/5, c₂ = −1/5

x(t) = (1/5)e^{−2t} − (1/5)e^{3t}
x'(t) = −(2/5)e^{−2t} − (3/5)e^{3t}

(c) For t > 0 the second term dominates, so x(t) is negative, ever more so as t increases. For t < 0 the first term dominates, so x(t) is positive, ever more so as t becomes more negative. These facts agree with the tx-graph.

(d) x'(t) is always negative. Its maximum value occurs at t = 0, as shown on the tx'-graph.

Phase Portraits

Careful inspection shows:

40. (B) 41. (D) 42. (A) 43. (C)


Independent Solutions

44. Letting c₁e^{r₁t} + c₂e^{r₂t} = 0 for all t, then by setting t = 0 and t = 1 we have, respectively,

c₁ + c₂ = 0
c₁e^{r₁} + c₂e^{r₂} = 0.

When r₁ ≠ r₂, these equations have the unique solution c₁ = c₂ = 0, which shows the given functions e^{r₁t}, e^{r₂t} are linearly independent for r₁ ≠ r₂.

Second Solution

45. Substituting y = v(t)e^{−bt/2a} into

ay'' + by' + cy = 0

gives

y' = v'e^{−bt/2a} − (b/2a)ve^{−bt/2a}
y'' = v''e^{−bt/2a} − (b/a)v'e^{−bt/2a} + (b²/4a²)ve^{−bt/2a}.

Substituting y, y', y'' into the differential equation gives the new equation (after dividing by e^{−bt/2a})

a[v'' − (b/a)v' + (b²/4a²)v] + b[v' − (b/2a)v] + cv = 0.

Simplifying gives

av'' + (c − b²/4a)v = 0.

Because we have assumed b² = 4ac, we have the equation v'' = 0, which was the condition to be proven.

Independence Again

46. Setting

c₁e^{−bt/2a} + c₂te^{−bt/2a} = 0

for all t, we set in particular t = 0 and then t = 1. These yield, respectively, the equations

c₁ = 0
c₁e^{−b/2a} + c₂e^{−b/2a} = 0,

which have the unique solution c₁ = c₂ = 0. Hence, the given functions are linearly independent.


Repeated Roots, Long-Term Behavior

47. Because e^{−bt/2a} approaches 0 as t → ∞ (for a, b > 0), we know the first term tends toward zero. For the second term we need only verify that

te^{−bt/2a} = t / e^{bt/2a} → 0

does as well. To use l'Hôpital's rule, we compute the derivatives of both the numerator and denominator of the previous expression, getting

1 / ((b/2a)e^{bt/2a}),

which clearly approaches 0 as t → ∞. Then l'Hôpital's rule assures us that the given expression te^{−(b/2a)t} approaches 0 as well.

Negative Roots

48. We have r = (−b ± √(b² − 4mk))/2m, so in the overdamped case where b² − 4mk > 0, these characteristic roots are real. Because m and k are both nonnegative, b² − 4mk < b², causing

r₁ = (−b + √(b² − 4mk))/2m to be negative (a negative term plus a smaller positive term)

and

r₂ = (−b − √(b² − 4mk))/2m to be negative (a sum of two negative terms).

Circuits and Springs

49. (a) The LRC equation is LQ'' + RQ' + (1/C)Q = 0; hence the following discriminant conditions hold:

Δ = R² − 4L/C < 0 (underdamped)
Δ = R² − 4L/C = 0 (critically damped)
Δ = R² − 4L/C > 0 (overdamped).


(b) The conditions in part (a) can be written

R < 2√(L/C) (underdamped)
R = 2√(L/C) (critically damped)
R > 2√(L/C) (overdamped).

These correspond to the analogy that m, b, and k correspond respectively to L, R, and 1/C. (See Table 4.1.3 in the textbook.)

A Test of Your Intuition

50. Intuitively, a curve whose rate of increase is proportional to its height will increase very rapidly as the height increases. On the other hand, upward curvature doesn't necessarily imply that the function is increasing! (The curve e^{−t} has upward curvature, yet decreases to 0 as t → ∞.) In this case, the restriction that y′(0) = 0 will cause the second curve to increase, but probably not nearly as rapidly as the first curve. Solving the equations, the IVP y′ = y, y(0) = 1 has the solution y = e^t, whereas the second curve described by y″ = y, y(0) = 1, y′(0) = 0 has the solution

y(t) = (1/2)e^t + (1/2)e^{−t}.

The first curve is indeed above the second curve.
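The comparison is easy to confirm numerically; a small sketch using only the standard math module (note that (e^t + e^{−t})/2 = cosh t):

```python
import math

# Problem 50: y' = y,  y(0) = 1            gives  y1(t) = e^t
#             y'' = y, y(0) = 1, y'(0) = 0 gives  y2(t) = (e^t + e^{-t})/2 = cosh t
for t in (0.5, 1.0, 2.0, 5.0):
    assert math.exp(t) > math.cosh(t)   # the first curve stays above the second for t > 0
```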

An Overdamped Spring

51. (a) The solution of an overdamped equation has the form

x(t) = c₁e^{r₁t} + c₂e^{r₂t}.

Suppose that c₁e^{r₁t₁} + c₂e^{r₂t₁} = 0 for some t₁. Because e^{r₂t₁} is never zero, we can divide by e^{r₂t₁} to get

c₁e^{(r₁−r₂)t₁} + c₂ = 0.

Solving for t₁ gives

t₁ = [1/(r₁ − r₂)] ln(−c₂/c₁).

This unique number is the only value for which the curve may pass through 0. If the argument of the logarithm is negative or if the value of t₁ is negative, then the solution does not cross the equilibrium point.

(b) By a similar argument, we can show that the derivative x′(t) also has at most one zero.

SECTION 4.2 Real Characteristic Roots 333

A Critically Damped Spring

52. (a) Suppose

(c₁ + c₂t₁)e^{r₁t₁} = 0.

We can divide by the nonzero quantity e^{r₁t₁}, getting the equation c₁ + c₂t₁ = 0, which has the unique solution t₁ = −c₁/c₂. Hence, the solution of a critically damped equation can pass through the equilibrium at most once. If the value of t₁ is negative, then the solution does not cross the equilibrium point.

(b) By a similar argument, we can show that the derivative x′(t) has at most one zero.

Linking Graphs

After inspection, we have labeled the yt and y′t graphs as follows.

53. [Figure: graphs of y vs. t, y′ vs. t, and the yy′ phase plane, with the curves labeled 1–4 and the starting point t = 0 marked.]

54. [Figure: labeled graphs, as in Problem 53.]

55. [Figure: labeled graphs, as in Problem 53.]

Damped Vibration

56. The IVP problem is

x″ + 2x′ + x = 0, x(0) = 3 in = 1/4 ft, x′(0) = 0 ft/sec.

The solution is

x(t) = (1/4)e^{−t} + (1/4)te^{−t}.

This is zero only for t = −1, whereas the physical system does not start before t = 0.

Surge Functions

57. For mx″ + bx′ + kx = 0, let m = 1 and find b, k, and initial conditions for the solution x = Ate^{−rt}.

x″ + bx′ + kx = 0
λ² + bλ + k = 0 (characteristic equation)
λ = (−b ± √(b² − 4k))/2

Set b² − 4k = 0 to obtain the repeated root λ = −b/2, so that

x = c₁e^{−(b/2)t} + c₂te^{−(b/2)t}.

Matching this with x = Ate^{−rt} requires −r = −b/2, that is, b = 2r, and then k = b²/4 = r². From

x = c₁e^{−rt} + c₂te^{−rt}
x′ = −rc₁e^{−rt} + c₂(1 − rt)e^{−rt}

the initial conditions give c₁ = x(0) = 0 and c₂ = x′(0) = A.

Results: r and A are given, and

b = 2r
k = r²
x(0) = 0
x′(0) = A
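The result can be double-checked numerically. The following sketch (with arbitrarily chosen sample values r = 2, A = 3, which are not from the problem) verifies via central differences that x = Ate^{−rt} satisfies x″ + 2rx′ + r²x = 0:

```python
import math

A, r = 3.0, 2.0                                 # hypothetical sample values
x = lambda t: A * t * math.exp(-r * t)          # surge function

def residual(t, h=1e-5):
    # central-difference approximations of x' and x''
    xp  = (x(t + h) - x(t - h)) / (2 * h)
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    return xpp + 2 * r * xp + r**2 * x(t)       # should vanish: b = 2r, k = r^2

assert x(0) == 0
assert all(abs(residual(t)) < 1e-4 for t in (0.1, 0.5, 1.0, 2.0))
```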

LRC-Circuit I

58. (a) LQ″ + RQ′ + (1/C)Q = 2Q″ + 101Q′ + 50Q = 0, Q(0) = 99, Q′(0) = 0

(b) Q(t) = −e^{−50t} + 100e^{−t/2}

(c) I(t) = Q′(t) = 50e^{−50t} − 50e^{−t/2}

(d) As t → ∞, Q(t) → 0 and I(t) → 0

LRC-Circuit II

59. (a) LQ″ + RQ′ + (1/C)Q = Q″ + 15Q′ + 50Q = 0, Q(0) = 5, Q′(0) = 0

(b) Q(t) = 10e^{−5t} − 5e^{−10t}

(c) I(t) = Q′(t) = −50e^{−5t} + 50e^{−10t}

(d) As t → ∞, Q(t) → 0 and I(t) → 0
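Both circuit answers can be checked by direct substitution; this sketch verifies Problem 58 (the check for Problem 59 is entirely analogous):

```python
import math

# Problem 58: 2Q'' + 101Q' + 50Q = 0, Q(0) = 99, Q'(0) = 0
Q   = lambda t: -math.exp(-50*t) + 100*math.exp(-t/2)
Qp  = lambda t: 50*math.exp(-50*t) - 50*math.exp(-t/2)      # = I(t)
Qpp = lambda t: -2500*math.exp(-50*t) + 25*math.exp(-t/2)

assert Q(0) == 99 and Qp(0) == 0                            # initial conditions
for t in (0.0, 0.1, 1.0, 5.0):
    assert abs(2*Qpp(t) + 101*Qp(t) + 50*Q(t)) < 1e-7       # ODE satisfied
```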

The Euler-Cauchy Equation at²y″ + bty′ + cy = 0

60. Let y(t) = t^r, so

y′ = rt^{r−1}
y″ = r(r − 1)t^{r−2}.

Hence

at²y″ + bty′ + cy = ar(r − 1)t^r + brt^r + ct^r = 0.

Dividing by t^r yields the characteristic equation

ar(r − 1) + br + c = 0,

which can be written as

ar² + (b − a)r + c = 0.

If r₁ and r₂ are two distinct roots of this equation, we have solutions

y₁(t) = t^{r₁}, y₂(t) = t^{r₂}.

Because these two functions are clearly linearly independent (one not a constant multiple of the other) for r₁ ≠ r₂, we have

y(t) = c₁t^{r₁} + c₂t^{r₂}

for t > 0.
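The substitution y = t^r is easy to spot-check numerically with exact power-rule derivatives. This sketch uses the coefficients of Problem 61 below (a = 1, b = 2, c = −12, whose characteristic roots are 3 and −4):

```python
def euler_cauchy_residual(r, t, a=1.0, b=2.0, c=-12.0):
    # a t^2 y'' + b t y' + c y for y = t^r, using exact power-rule derivatives
    y, yp, ypp = t**r, r * t**(r - 1), r * (r - 1) * t**(r - 2)
    return a * t**2 * ypp + b * t * yp + c * y

# characteristic equation r(r-1) + 2r - 12 = (r+4)(r-3) = 0: roots 3 and -4
for root in (3, -4):
    for t in (0.5, 1.0, 2.0, 7.0):
        assert abs(euler_cauchy_residual(root, t)) < 1e-9
```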

The Euler-Cauchy Equation with Distinct Roots

For Problems 61–65, see Problem 60 for the form of the characteristic equation for the Euler-Cauchy DE.

61. t²y″ + 2ty′ − 12y = 0

In this case a = 1, b = 2, c = −12, so the characteristic equation is

r(r − 1) + 2r − 12 = r² + r − 12 = (r + 4)(r − 3) = 0.

Hence, we have roots r₁ = −4, r₂ = 3, and thus y(t) = c₁t³ + c₂t^{−4}.

62. 4t²y″ + 8ty′ − 3y = 0

In this case a = 4, b = 8, c = −3, so the characteristic equation is

4r(r − 1) + 8r − 3 = 4r² + 4r − 3 = (2r − 1)(2r + 3) = 0.

Hence, we have roots r₁ = 1/2, r₂ = −3/2, and thus y(t) = c₁t^{1/2} + c₂t^{−3/2}.

63. t²y″ + 4ty′ + 2y = 0

In this case a = 1, b = 4, c = 2, so the characteristic equation is

r(r − 1) + 4r + 2 = r² + 3r + 2 = (r + 1)(r + 2) = 0.

Hence, we have roots r₁ = −1, r₂ = −2, and thus y(t) = c₁t^{−1} + c₂t^{−2}.

64. 2t²y″ + 3ty′ − y = 0

In this case a = 2, b = 3, c = −1, so the characteristic equation is

2r(r − 1) + 3r − 1 = 2r² + r − 1 = (2r − 1)(r + 1) = 0.

Hence, we have roots r₁ = 1/2, r₂ = −1, and thus y(t) = c₁t^{1/2} + c₂t^{−1}.

Repeated Euler-Cauchy Roots

65. We are given that the characteristic equation

ar² + (b − a)r + c = 0

of the Euler-Cauchy equation

at²y″ + bty′ + cy = 0

has a double root r. Hence, we have one solution y₁ = t^r. To verify that t^r ln t is also a solution, we differentiate

y′ = rt^{r−1} ln t + t^{r−1},
y″ = r(r − 1)t^{r−2} ln t + rt^{r−2} + (r − 1)t^{r−2} = r(r − 1)t^{r−2} ln t + (2r − 1)t^{r−2}.

By direct substitution we have

at²y″ + bty′ + cy = at²[r(r − 1)t^{r−2} ln t + (2r − 1)t^{r−2}] + bt[rt^{r−1} ln t + t^{r−1}] + ct^r ln t
= [ar(r − 1) + br + c]t^r ln t + [a(2r − 1) + b]t^r.

We know that ar(r − 1) + br + c = 0, so this last expression becomes simply

at²y″ + bty′ + cy = [a(2r − 1) + b]t^r.

The double root of the characteristic equation is r = −(b − a)/(2a), which makes this expression zero.

To verify that t^r and t^r ln t are linearly independent (where r = −(b − a)/(2a) is the double root of the characteristic equation), we set

c₁t^r + c₂t^r ln t = 0

for specific values t = 1 and 2, which give, respectively, the equations

c₁ = 0
c₁2^r + c₂2^r ln 2 = 0

and yield the unique solution c₁ = c₂ = 0. Hence, t^r and t^r ln t are linearly independent solutions.


Solutions for Repeated Euler-Cauchy Roots

For Problems 66 and 67 use the result of Problem 60, y(t) = c₁t^r + c₂t^r ln t.

66. t²y″ + 5ty′ + 4y = 0

In this case, a = 1, b = 5, and c = 4, so our characteristic equation for r is r² + 4r + 4 = 0, with a double root at −2. The general solution is

y(t) = c₁t^{−2} + c₂t^{−2} ln t

for t > 0.

67. t²y″ − 3ty′ + 4y = 0

In this case, a = 1, b = −3, and c = 4, so our characteristic equation for r is r² − 4r + 4 = 0, with a double root at 2. The general solution is

y(t) = c₁t² + c₂t² ln t

for t > 0.

68. 9t²y″ + 3ty′ + y = 0  Euler-Cauchy method: y = t^m, t > 0

9m(m − 1) + 3m + 1 = 0 (characteristic equation)
9m² − 6m + 1 = 0
(3m − 1)² = 0, so m = 1/3

y(t) = c₁t^{1/3} + c₂t^{1/3} ln t

69. 4t²y″ + 8ty′ + y = 0  Euler-Cauchy method: y = t^m, t > 0

4m(m − 1) + 8m + 1 = 0
4m² + 4m + 1 = 0
(2m + 1)² = 0, so m = −1/2

y(t) = c₁t^{−1/2} + c₂t^{−1/2} ln t
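The repeated-root case can also be confirmed by substitution. This sketch checks, with exact derivative formulas, that the second solution t^{1/3} ln t of Problem 68 satisfies 9t²y″ + 3ty′ + y = 0:

```python
import math

def y(t):   return t**(1/3) * math.log(t)
def yp(t):  return (1/3) * t**(-2/3) * math.log(t) + t**(-2/3)
def ypp(t): return (-2/9) * t**(-5/3) * math.log(t) - (1/3) * t**(-5/3)

for t in (0.5, 1.0, 2.0, 10.0):
    assert abs(9 * t**2 * ypp(t) + 3 * t * yp(t) + y(t)) < 1e-10
```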

Computer: Phase-Plane Trajectories

70. (a) y(t) = 2e^{−t} + e^{−3t}

The roots of the characteristic equation are −1 and −3, so the characteristic equation is

(r + 1)(r + 3) = r² + 4r + 3 = 0.

y(t) satisfies the differential equation y″ + 4y′ + 3y = 0.

(b) To find the IC for the trajectory of y(t) in yy′ space we differentiate y(t), getting

y′(t) = −2e^{−t} − 3e^{−3t}.

The IC of the given trajectory of (y(t), y′(t)) in yy′ space is (y(0), y′(0)) = (3, −5).

(c) We plot the trajectory starting at (3, −5) along with a few other trajectories in yy′ space. [Figure: DE trajectories in yy′ space]
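The claimed ODE and phase-plane starting point for Problem 70 can be verified directly:

```python
import math

y   = lambda t: 2*math.exp(-t) + math.exp(-3*t)
yp  = lambda t: -2*math.exp(-t) - 3*math.exp(-3*t)
ypp = lambda t: 2*math.exp(-t) + 9*math.exp(-3*t)

assert (y(0), yp(0)) == (3.0, -5.0)                 # trajectory starts at (3, -5)
for t in (0.0, 0.5, 1.0, 3.0):
    assert abs(ypp(t) + 4*yp(t) + 3*y(t)) < 1e-12   # y'' + 4y' + 3y = 0
```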

71. y(t) = e^{−t} + e^{−8t}

(a) The roots of the characteristic equation are −1 and −8, so the characteristic equation is

(r + 1)(r + 8) = r² + 9r + 8 = 0.

y(t) satisfies the differential equation y″ + 9y′ + 8y = 0.

(b) The derivative is

y′(t) = −e^{−t} − 8e^{−8t}.

The IC for the given trajectory in yy′ space is (y(0), y′(0)) = (2, −9).

(c) We plot this trajectory in yy′ space. [Figure: DE trajectory in yy′ space]

72. y(t) = e^t + e^{−t}

(a) The roots of the characteristic equation are 1 and −1, so the characteristic equation is

(r − 1)(r + 1) = r² − 1 = 0.

y(t) satisfies the differential equation y″ − y = 0.

(b) The derivative is

y′(t) = e^t − e^{−t}.

The IC for the given trajectory in yy′ space is (y(0), y′(0)) = (2, 0).

(c) We plot this and a few other trajectories of this DE in yy′ space. [Figure: DE trajectories in yy′ space, with the trajectory through (2, 0) marked]

73. y(t) = e^{−t} + te^{−t}

(a) The characteristic equation has a double root at −1, so the characteristic equation is

(r + 1)² = r² + 2r + 1 = 0.

y(t) satisfies the differential equation y″ + 2y′ + y = 0.

(b) The derivative is

y′(t) = −te^{−t}.

The IC for the given trajectory in yy′ space is (y(0), y′(0)) = (1, 0).

(c) See the figure to the right. [Figure: DE trajectory in yy′ space through (1, 0)]

74. y(t) = 3 + 2e^{2t}

(a) The roots of the characteristic equation are 0 and 2, so the characteristic equation is

r(r − 2) = r² − 2r = 0.

y(t) satisfies the differential equation y″ − 2y′ = 0.

(b) The derivative is

y′(t) = 4e^{2t}.

The IC for the given trajectory in yy′ space is (y(0), y′(0)) = (5, 4).

(c) See the figure to the right. [Figure: DE trajectories in yy′ space]

Reduction of Order

75. (a) Let y₂ = vy₁, so that

y₂′ = v′y₁ + vy₁′
y₂″ = v″y₁ + 2v′y₁′ + vy₁″.

Then

y₂″ + p(x)y₂′ + q(x)y₂ = v″y₁ + 2v′y₁′ + pv′y₁ + vy₁″ + pvy₁′ + qvy₁ = 0.

Because y₁″ + py₁′ + qy₁ = 0, we cancel the terms involving v and arrive at the new equation

y₁v″ + (2y₁′ + p(x)y₁)v′ = 0.

(b) Setting v′ = w, we obtain

y₁w′ + (2y₁′ + p(x)y₁)w = 0
w′/w = −2y₁′/y₁ − p(x)
ln w = −2 ln y₁ − ∫p(x) dx
w = v′ = ± e^{−∫p(x) dx}/y₁²
v = ± ∫ [e^{−∫p(x) dx}/y₁²] dx.

By convention, the positive sign is chosen.

(c) If v is a constant function on I, then v′ ≡ 0 and w ≡ 0 because v′ = w. The condition w ≡ 0 contradicts our work in part (b), as ln w is undefined where w = 0. Because v is not constant on I, {y₁, y₂} is a linearly independent set on I.

Reduction of Order: Second Solution

76. y″ − 6y′ + 9y = 0, y₁ = e^{3t}

We identify p(t) = −6, so ∫p(t) dt = −6t.

Substituting in the formula developed in Problem 75, we have

y₂ = y₁ ∫ [e^{−∫p(t) dt}/y₁²] dt = e^{3t} ∫ (e^{6t}/e^{6t}) dt = e^{3t} ∫ dt = te^{3t}.
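The second solution found here is easy to confirm by substitution; a small numeric sketch:

```python
import math

# y2 = t e^{3t} should satisfy y'' - 6y' + 9y = 0
y2   = lambda t: t * math.exp(3*t)
y2p  = lambda t: (1 + 3*t) * math.exp(3*t)
y2pp = lambda t: (6 + 9*t) * math.exp(3*t)

for t in (-1.0, 0.0, 0.7, 2.0):
    assert abs(y2pp(t) - 6*y2p(t) + 9*y2(t)) < 1e-9
```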

77. y″ − 4y′ + 4y = 0, y₁ = e^{2t}

We won't use the formula this time. We simply redo the steps in Problem 75. We seek a second solution of the form y₂ = vy₁ = ve^{2t}. Differentiating, we have

y₂′ = v′e^{2t} + 2ve^{2t}
y₂″ = v″e^{2t} + 4v′e^{2t} + 4ve^{2t}.

Substituting into the equation we obtain

y₂″ − 4y₂′ + 4y₂ = v″e^{2t} = 0.

Dividing by e^{2t} gives v″ = 0, or

v(t) = c₁t + c₂.

Hence, we have found new solutions

y₂ = ve^{2t} = c₁te^{2t} + c₂e^{2t}.

Because y₁ = e^{2t}, we let c₁ = 1, c₂ = 0, yielding a second independent solution y₂ = te^{2t}.

78. t²y″ − ty′ + y = 0, y₁ = t

We won't use the formula this time. We simply redo the steps in Problem 75. We seek a second solution of the form y₂ = vy₁ = tv. Differentiating, we have

y₂′ = tv′ + v
y₂″ = tv″ + 2v′.

Substituting into the equation we obtain

t²y₂″ − ty₂′ + y₂ = t³v″ + t²v′ = 0.

Letting w = v′ and dividing by t³ yields

w′ + (1/t)w = 0.

We can solve by the integrating factor method, getting w = c₁t^{−1}. Integrating, we find

v = c₁ ln t + c₂,

so

y₂ = tv = c₁t ln t + c₂t.

Letting c₁ = 1, c₂ = 0, we get a second linearly independent solution y₂ = t ln t.

79. (1 + t²)y″ − 2ty′ + 2y = 0, y₁ = t

We won't use the formula this time. We simply redo the steps in Problem 75. We seek a second solution of the form y₂ = vy₁ = tv. Differentiating yields

y₂′ = tv′ + v
y₂″ = tv″ + 2v′.

Substituting into the equation we get

(1 + t²)y₂″ − 2ty₂′ + 2y₂ = t(1 + t²)v″ + 2v′ = 0.

Letting w = v′ and dividing by t(1 + t²), we can solve the new equation using the integrating factor method:

∫ [2/(t(1 + t²))] dt = 2 ln t − ln(1 + t²) = ln[t²/(1 + t²)],

so the integrating factor is t²/(1 + t²). We arrive at

w = c₁(1 + t²)/t² = c₁(1 + t^{−2}).

Integrating this, we get

v = c₁(t − t^{−1}) + c₂,

so

y₂ = tv = c₁(t² − 1) + c₂t.

Letting c₁ = 1, c₂ = 0 we get a second linearly independent solution y₂ = t² − 1.

Classical Equations

80. y″ − 2ty′ + 4y = 0, y₁(t) = 1 − 2t² (Hermite's Equation)

Letting y₂ = vy₁ = v(1 − 2t²), we have

y₂′ = (1 − 2t²)v′ − 4tv
y₂″ = (1 − 2t²)v″ − 8tv′ − 4v,

and substitution followed by division by 1 − 2t² yields the equation

v″ + [8t/(2t² − 1) − 2t]v′ = 0.

Letting w = v′ and solving the first-order equation in w, we get

w = c₁e^{t²}(2t² − 1)^{−2}.

To find y₂ we simply let c₁ = 1 and integrate to get

v = ∫ e^{t²}(2t² − 1)^{−2} dt.

Multiplying by (1 − 2t²) yields a final answer of

y₂(t) = (1 − 2t²) ∫ e^{t²}(2t² − 1)^{−2} dt.

81. (1 − t²)y″ − ty′ + y = 0, y₁(t) = t (Chebyshev's Equation)

Letting y₂ = vy₁ = vt, we have

y₂′ = tv′ + v, y₂″ = tv″ + 2v′,

hence we have the equation

(1 − t²)y₂″ − ty₂′ + y₂ = t(1 − t²)v″ + (2 − 3t²)v′ = 0.

Dividing by t(1 − t²) and letting w = v′,

w′ + [(2 − 3t²)/(t(1 − t²))]w = 0.

Using partial fractions yields

∫ [(2 − 3t²)/(t(1 − t²))] dt = 2 ln t + (1/2) ln(1 − t) + (1/2) ln(1 + t),

so our integrating factor is t²√(1 − t²) and

w = c₁/(t²√(1 − t²)).

Letting c₁ = 1 and multiplying by t yields a final answer of

y₂(t) = tv = t ∫ dt/(t²√(1 − t²)).

This is a perfect example of a formula that does not tell us much about how the solutions behave. Check out the IDE tool Chebyshev's Equation to see the value of graphical solutions.

82. ty″ + (1 − t)y′ + y = 0, y₁(t) = 1 − t (Laguerre's Equation)

Letting y₂ = vy₁ = v(1 − t), we have

y₂′ = (1 − t)v′ − v, y₂″ = (1 − t)v″ − 2v′,

hence we have the equation

ty₂″ + (1 − t)y₂′ + y₂ = t(1 − t)v″ + (t² − 4t + 1)v′ = 0.

Dividing by t(1 − t) and letting w = v′ yields

w′ + [(−t² + 4t − 1)/(t(t − 1))]w = 0.

Hence by use of partial fractions, our integrating factor is

μ = e^{∫[−1 + 1/t + 2/(t−1)] dt} = e^{−t}t(t − 1)²,

so that

w = c₁e^t/(t(t − 1)²).

Letting c₁ = 1 and multiplying by 1 − t yields a final answer of

y₂ = (1 − t)v = (1 − t) ∫ e^t/(t(t − 1)²) dt.


Lagrange's Adjoint Equation

83. (a)−(b) Differentiating the right side of

μ(t)[y″ + y′ + y] = d/dt[μ(t)y′ + g(t)y]

we obtain

μy″ + μy′ + μy = μy″ + μ′y′ + gy′ + g′y.

Setting the coefficients of y″, y′, y equal, we find

for y″: μ = μ (no information)
for y′: μ = μ′ + g
for y: μ = g′.

The last equation yields g = ∫μ dt, and substituting this into the second equation and differentiating gives a differential equation for the "integrating factor":

μ″ − μ′ + μ = 0.

(c) We perform the differentiation on the right-hand side of the given equation, yielding

μ(t)[y″ + p(t)y′ + q(t)y] = d/dt[μy′ + g(t)y] = μy″ + μ′y′ + g(t)y′ + g′(t)y.

Multiplying out the left-hand side and subtracting yields

[μp(t) − μ′ − g(t)]y′ + [μq(t) − g′(t)]y = 0.

Setting the first set of coefficients equal to 0 yields g = μp − μ′, hence g′ = μ′p + μp′ − μ″. The second set of coefficients yields μq − g′ = 0, so that g′ = μq. Setting these two expressions for g′ equal to each other yields

μ″ − pμ′ + (q − p′)μ = 0,

which was to be shown.

Suggested Journal Entry

84. Student Project

SECTION 4.3 Complex Characteristic Roots 347

4.3

Complex Characteristic Roots

Solutions in General

1. y″ + 9y = 0

The characteristic equation is r² + 9 = 0, which has roots 3i, −3i. The general solution is

y(t) = c₁ cos 3t + c₂ sin 3t.

2. y″ + y′ + y = 0

The characteristic equation is r² + r + 1 = 0, which has roots −1/2 ± (√3/2)i. The general solution is

y(t) = e^{−t/2}[c₁ cos(√3t/2) + c₂ sin(√3t/2)].

3. y″ − 4y′ + 5y = 0

The characteristic equation is r² − 4r + 5 = 0, which has roots 2 ± i. The general solution is

y(t) = e^{2t}(c₁ cos t + c₂ sin t).

4. y″ + 2y′ + 8y = 0

The characteristic equation is r² + 2r + 8 = 0, which has roots −1 ± √7 i. The general solution is

y(t) = e^{−t}(c₁ cos √7t + c₂ sin √7t).

5. y″ + 2y′ + 4y = 0

The characteristic equation is r² + 2r + 4 = 0, which has roots −1 ± √3 i. The general solution is

y(t) = e^{−t}(c₁ cos √3t + c₂ sin √3t).

6. y″ − 4y′ + 7y = 0

The characteristic equation is r² − 4r + 7 = 0, which has roots 2 ± √3 i. The general solution is

y(t) = e^{2t}(c₁ cos √3t + c₂ sin √3t).

7. y″ − 10y′ + 26y = 0

The characteristic equation is r² − 10r + 26 = 0, which has roots 5 ± i. The general solution is

y(t) = e^{5t}(c₁ cos t + c₂ sin t).


8. 3y″ + 4y′ + 9y = 0

The characteristic equation is 3r² + 4r + 9 = 0, which has roots −2/3 ± (√23/3)i. The general solution is

y(t) = e^{−2t/3}[c₁ cos(√23t/3) + c₂ sin(√23t/3)].

9. y″ − y′ + y = 0

The characteristic equation is r² − r + 1 = 0, which has roots 1/2 ± (√3/2)i. The general solution is

y(t) = e^{t/2}[c₁ cos(√3t/2) + c₂ sin(√3t/2)].

10. y″ + y′ + 2y = 0

The characteristic equation is r² + r + 2 = 0, which has roots −1/2 ± (√7/2)i. The general solution is

y(t) = e^{−t/2}[c₁ cos(√7t/2) + c₂ sin(√7t/2)].
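Each of these general solutions can be spot-checked by substitution. For instance, this sketch verifies the solution e^{2t} cos t from Problem 3 against y″ − 4y′ + 5y = 0:

```python
import math

y   = lambda t: math.exp(2*t) * math.cos(t)
yp  = lambda t: math.exp(2*t) * (2*math.cos(t) - math.sin(t))
ypp = lambda t: math.exp(2*t) * (3*math.cos(t) - 4*math.sin(t))

for t in (0.0, 0.5, 1.0, 2.0):
    assert abs(ypp(t) - 4*yp(t) + 5*y(t)) < 1e-9    # y'' - 4y' + 5y = 0
```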

Initial-Value Problems

11. y″ + 4y = 0, y(0) = 1, y′(0) = −1

The characteristic equation is r² + 4 = 0, which has roots ±2i. The general solution is

y(t) = c₁ cos 2t + c₂ sin 2t.

Substituting this into the initial conditions gives y(0) = c₁ = 1, y′(0) = 2c₂ = −1. Hence, the solution of the initial-value problem is

y(t) = cos 2t − (1/2) sin 2t.

12. y″ − 4y′ + 13y = 0, y(0) = 1, y′(0) = 0

The characteristic equation is r² − 4r + 13 = 0, which has roots 2 ± 3i. The general solution is

y(t) = e^{2t}(c₁ cos 3t + c₂ sin 3t).

Substituting this into the initial conditions yields y(0) = c₁ = 1, y′(0) = 2c₁ + 3c₂ = 0, resulting in c₁ = 1, c₂ = −2/3. Hence, the solution of the initial-value problem is

y(t) = e^{2t}[cos 3t − (2/3) sin 3t].

13. y″ + 2y′ + 2y = 0, y(0) = 1, y′(0) = 0

The characteristic equation is r² + 2r + 2 = 0, which has roots −1 ± i. Hence, the general solution is

y(t) = e^{−t}(c₁ cos t + c₂ sin t).

Substituting this into the initial conditions yields y(0) = c₁ = 1, y′(0) = −c₁ + c₂ = 0, resulting in c₁ = 1, c₂ = 1. Hence, the solution of the initial-value problem is

y(t) = e^{−t}(cos t + sin t).

14. y″ − y′ + y = 0, y(0) = 0, y′(0) = 1

From Problem 9,

y(t) = e^{t/2}[c₁ cos(√3t/2) + c₂ sin(√3t/2)].

Substituting this into the initial conditions y(0) = 0, y′(0) = 1 results in c₁ = 0, c₂ = 2√3/3. Hence, the solution of the initial-value problem is

y(t) = (2√3/3)e^{t/2} sin(√3t/2).

15. y″ − 4y′ + 7y = 0, y(0) = 0, y′(0) = −1

From Problem 6,

y(t) = e^{2t}(c₁ cos √3t + c₂ sin √3t).

Substituting this into the initial conditions y(0) = 0, y′(0) = −1 results in c₁ = 0, c₂ = −√3/3. Hence, the solution of the initial-value problem is

y(t) = −(√3/3)e^{2t} sin √3t.

16. y″ + 2y′ + 5y = 0, y(0) = 1, y′(0) = −1

The characteristic equation is r² + 2r + 5 = 0, which has roots −1 ± 2i. Hence, the general solution is

y(t) = e^{−t}(c₁ cos 2t + c₂ sin 2t).

Substituting this into the initial conditions y(0) = 1, y′(0) = −1 results in c₁ = 1, c₂ = 0. Hence, the solution of the initial-value problem is

y(t) = e^{−t} cos 2t.
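The Problem 13 answer, y = e^{−t}(cos t + sin t), can be checked against both the equation and the initial conditions; a small sketch:

```python
import math

y   = lambda t: math.exp(-t) * (math.cos(t) + math.sin(t))
yp  = lambda t: -2 * math.exp(-t) * math.sin(t)
ypp = lambda t: 2 * math.exp(-t) * (math.sin(t) - math.cos(t))

assert (y(0), yp(0)) == (1.0, 0.0)                  # y(0) = 1, y'(0) = 0
for t in (0.3, 1.0, 2.5):
    assert abs(ypp(t) + 2*yp(t) + 2*y(t)) < 1e-12   # y'' + 2y' + 2y = 0
```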

Working Backwards

17. (r − 1)³ = r³ − 3r² + 3r − 1 = 0

y‴ − 3y″ + 3y′ − y = 0

18. (r − 4)(r − (1 − i))(r − (1 + i)) = r³ − 6r² + 10r − 8 = 0

y‴ − 6y″ + 10y′ − 8y = 0

19. (r − 2)(r − (2 + i))(r − (2 − i)) = r³ − 6r² + 13r − 10 = 0

y‴ − 6y″ + 13y′ − 10y = 0

20. (r² − 4)(r − (2 + i))(r − (2 − i)) = r⁴ − 4r³ + r² + 16r − 20 = 0

y⁽⁴⁾ − 4y‴ + y″ + 16y′ − 20y = 0
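Expanding these factored characteristic polynomials is easy to mechanize. This sketch multiplies coefficient lists (lowest degree first) and checks the Problem 19 expansion:

```python
def pmul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Problem 19: (r - 2)(r^2 - 4r + 5), where r^2 - 4r + 5 has roots 2 +/- i
assert pmul([-2, 1], [5, -4, 1]) == [-10, 13, -6, 1]   # r^3 - 6r^2 + 13r - 10
```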

Matching Problems

21. y″ − y′ = 0 ⇒ r = 0, 1; y(t) = c₁ + c₂e^t. Graph D

22. y″ + y′ = 0 ⇒ r = 0, −1; y(t) = c₁ + c₂e^{−t}. Graph B

23. y″ + 3y′ + 2y = 0 ⇒ r = −2, −1; y(t) = c₁e^{−2t} + c₂e^{−t}. Graph A

24. y″ − 5y′ + 6y = 0 ⇒ r = 2, 3; y(t) = c₁e^{2t} + c₂e^{3t}. Graph C

25. y″ + y′ + y = 0 ⇒ r = −1/2 ± (√3/2)i; y(t) = e^{−t/2}[c₁ cos(√3t/2) + c₂ sin(√3t/2)]. Graph G

26. y″ + y = 0 ⇒ r = ±i; y(t) = c₁ cos t + c₂ sin t. Graph F

27. y″ + 4y′ + 4y = 0 ⇒ r = −2, −2; y(t) = (c₁ + c₂t)e^{−2t}. Graph E

28. y″ − y′ + y = 0 ⇒ r = (1 ± √3 i)/2; y(t) = e^{t/2}[c₁ cos(√3t/2) + c₂ sin(√3t/2)]. Graph H

Euler's Formula

29. (a) The Maclaurin series for e^x is

e^x = 1 + x + (1/2!)x² + (1/3!)x³ + ⋯ + (1/n!)xⁿ + ⋯

(b) e^{iθ} = 1 + iθ + (1/2!)(iθ)² + (1/3!)(iθ)³ + ⋯ + (1/n!)(iθ)ⁿ + ⋯

(c) Using the given identities for powers of i, we can write

e^{iθ} = 1 + iθ + (1/2!)(iθ)² + (1/3!)(iθ)³ + ⋯ + (1/n!)(iθ)ⁿ + ⋯
= [1 − (1/2!)θ² + (1/4!)θ⁴ − ⋯] + i[θ − (1/3!)θ³ + (1/5!)θ⁵ − ⋯] = cos θ + i sin θ

(d) Done in part (c). (e) Done in part (c).

Long-Term Behavior of Solutions

30. r₁ < 0, r₂ < 0. When r₁ ≠ r₂, the solution is

y(t) = c₁e^{r₁t} + c₂e^{r₂t}

and goes to 0 as t → ∞. When r₁ = r₂ = r < 0, the solution has the form

y(t) = c₁e^{rt} + c₂te^{rt}.

In this case, using l'Hôpital's rule we prove the second term te^{rt} goes to zero as t → ∞ when r < 0.

31. r₁ < 0, r₂ = 0. The solution

y(t) = c₁e^{r₁t} + c₂

approaches the constant c₂ as t → ∞ because r₁ < 0.

32. r = α ± βi, y(t) = e^{αt}(c₁ cos βt + c₂ sin βt). For β ≠ 0 the solution y(t) oscillates with decreasing amplitude when α < 0; oscillates with increasing amplitude when α > 0; oscillates with constant amplitude when α = 0.

33. r₁ = 0, r₂ = 0. The solution

y(t) = c₁ + c₂t

approaches ∞ as t → ∞ when c₂ > 0 and −∞ when c₂ < 0.

34. r₁ > 0, r₂ < 0. The solution

y(t) = c₁e^{r₁t} + c₂e^{r₂t}

approaches ∞ as t → ∞ when c₁ > 0 and −∞ when c₁ < 0.

35. r = ±βi. y(t) = c₁ cos βt + c₂ sin βt is a periodic function of period 2π/β and amplitude √(c₁² + c₂²).

Linear Independence

36. Suppose

c₁e^{αt} cos βt + c₂e^{αt} sin βt = 0

on an arbitrary interval. Dividing both sides by e^{αt}, then differentiating the new equation and dividing by β, yields

c₁ cos βt + c₂ sin βt = 0
c₂ cos βt − c₁ sin βt = 0.

Hence, c₁ = 0, c₂ = 0, and we have proven linear independence of the given functions.

Real Coefficients

37. Solution of the differential equation is

y(t) = k₁e^{αt}(cos βt + i sin βt) + k₂e^{αt}(cos βt − i sin βt)
= e^{αt}(k₁ + k₂) cos βt + ie^{αt}(k₁ − k₂) sin βt.

For the solution to be real, there must exist real numbers r and s such that

k₁ + k₂ = r
k₁ − k₂ = si.

Solving for k₁ and k₂, we get

k₁ = (1/2)r + (1/2)si
k₂ = (1/2)r − (1/2)si.


Solving dⁿy/dtⁿ = 0

38. (a) Integrating d⁴y/dt⁴ = 0 successively gives

d³y/dt³ = k₃
d²y/dt² = k₃t + k₂
dy/dt = (1/2)k₃t² + k₂t + k₁
y = (1/3!)k₃t³ + (1/2)k₂t² + k₁t + k₀.

(b) y⁽⁴⁾ = 0. The characteristic equation is r⁴ = 0, which has a fourth-order root at 0. Hence, the solution is

y(t) = c₀ + c₁t + c₂t² + c₃t³,

which is the same as in part (a).

(c) In general we have

y(t) = [1/(n−1)!]k_{n−1}t^{n−1} + [1/(n−2)!]k_{n−2}t^{n−2} + ⋯ + k₁t + k₀
= c_{n−1}t^{n−1} + c_{n−2}t^{n−2} + ⋯ + c₁t + c₀

because all of the constants are arbitrary.

Higher-Order DEs

39. d⁵y/dt⁵ − 4 d⁴y/dt⁴ + 4 d³y/dt³ = 0

The characteristic equation is

r⁵ − 4r⁴ + 4r³ = r³(r² − 4r + 4) = r³(r − 2)² = 0,

which has roots 0, 0, 0, 2, 2. Hence,

y(t) = c₁ + c₂t + c₃t² + c₄e^{2t} + c₅te^{2t}.

40. d³y/dt³ + 4 d²y/dt² − 7 dy/dt − 10y = 0

The characteristic equation is

r³ + 4r² − 7r − 10 = 0,

which has roots −1, 2, −5. Hence,

y(t) = c₁e^{−t} + c₂e^{2t} + c₃e^{−5t}.


41. d⁵y/dt⁵ − dy/dt = 0

The characteristic equation is

r⁵ − r = r(r⁴ − 1) = r(r² − 1)(r² + 1) = r(r − 1)(r + 1)(r² + 1) = 0,

which has roots 0, ±1, ±i. Hence

y(t) = c₁ + c₂e^t + c₃e^{−t} + c₄ cos t + c₅ sin t.

42. y‴ − 4y″ + 5y′ − 2y = 0

r³ − 4r² + 5r − 2 = 0 (characteristic equation)
f(1) = 1 − 4 + 5 − 2 = 0, so r = 1 is a root.

By long division, we obtain

(r − 1)(r² − 3r + 2) = 0
(r − 1)(r − 2)(r − 1) = 0, so r = 1, 1, 2.

y(t) = c₁e^t + c₂te^t + c₃e^{2t}

43. y‴ + 6y″ + 12y′ + 8y = 0

r³ + 6r² + 12r + 8 = 0 (characteristic equation)
f(−2) = −8 + 24 − 24 + 8 = 0, so r = −2 is a root.

By long division, we obtain

(r + 2)(r² + 4r + 4) = 0
(r + 2)³ = 0, so r = −2, −2, −2.

y(t) = c₁e^{−2t} + c₂te^{−2t} + c₃t²e^{−2t}

44. y⁽⁴⁾ − y = 0

r⁴ − 1 = 0 (characteristic equation)
(r² + 1)(r² − 1) = 0, so r = ±1, ±i.

y(t) = c₁ cos t + c₂ sin t + c₃e^t + c₄e^{−t}

Linking Graphs

45. [Figure: graphs of y vs. t, y′ vs. t, and the yy′ phase plane, with the curves labeled 1–3 and the starting point t = 0 marked.]

46. [Figure: graphs of y vs. t, y′ vs. t, and the yy′ phase plane, with the curves labeled 1–3 and the starting point t = 0 marked.]

Changing the Damping

47. The curves below show the solution of

x″ + bx′ + x = 0, x(0) = 4, x′(0) = 0

for damping b = 0, 0.5, 1, 2, 4. The larger the damping, the faster the curves approach 0. The curve that oscillates has no damping (b = 0).

[Figures: x(t) vs. t for b = 0, 0.5, 1, 2, 4, and the corresponding xx′ phase-plane trajectories.]

In Figure 4.3.12(b) in the text, the larger the damping b, the more directly the trajectory "heads" for the origin. The trajectory that forms a circle corresponds to zero damping. Note that every time a curve in (a) crosses the axis twice, the corresponding trajectory in (b) circles the origin.


Changing the Spring

48. (a) The solutions of

x″ + x′ + kx = 0, x(0) = 4, x′(0) = 0

are shown for k = 1/4, 1/2, 1, 2, 4. For larger k we have more oscillations.

[Figure: x(t) vs. t for k = 0.25, 0.5, 1, 2, 4.]

(b) For larger k, since there are more oscillations, the phase-plane trajectory spirals further around the origin.

[Figure: xx′ phase-plane trajectories for k = 0.25, 0.5, 1, 2, 4.]

Changing the Mass

49. (a) b = 0 and ω₀ = √(k/m), so that ω₀ is inversely proportional to √m.

(b) If m is doubled, ω₀ is decreased by a factor of 1/√2.

(c) If m is doubled, the damping b = 2√(mk) required for critical damping is increased by a factor of √2.

Finding the Maximum

50. (a) 2 3 0 x x x + + = , x(0) = 1, (0) 0 x =

r

2

+ 2r + 3 = 0 (characteristic equation)

r = 1 2i − ±

( )

( ) ( )

1 2

1 2 1 2

cos 2 sin 2

2 sin 2 2 cos 2 cos 2 sin 2

t

t t

x e c t c t

x e c t c t e c t c t

−

− −

⎫

= +

⎪

⇒

⎬

= − + − + ⎪

⎭

1 = c

1

0 =

2

2 1 c − so that c

2

=

1

2

x =

1

cos 2 sin 2

2

t

e t t

−

⎛ ⎞

+

⎜ ⎟

⎝ ⎠

SECTION 4.3 Complex Characteristic Roots 357

To find the maximum displacement, set x' = 0:

x' = −e^(−t)(cos √2 t + (1/√2) sin √2 t) + e^(−t)(−√2 sin √2 t + cos √2 t) = −e^(−t)(√2 + 1/√2) sin √2 t = 0

when √2 t = π, so that t = π/√2 sec.

Substituting for t: x_max = e^(−π/√2)(cos π + (1/√2) sin π) = −e^(−π/√2)

Max. amplitude |x_max| = e^(−π/√2)

(b) m = 1, b = 2, k = 10, x(0) = 0, x'(0) = 2

The DE is x'' + 2x' + 10x = 0, for which the characteristic equation r² + 2r + 10 = 0 gives r = −1 ± 3i.

x(t) = e^(−t)(c₁ cos 3t + c₂ sin 3t)
x'(t) = e^(−t)(−3c₁ sin 3t + 3c₂ cos 3t) − e^(−t)(c₁ cos 3t + c₂ sin 3t)

The initial conditions give c₁ = 0 and c₂ = 2/3, so

x(t) = (2/3)e^(−t) sin 3t is the solution.

To find the maximum displacement x_max, set x'(t) = 0 and solve for t:

0 = (2/3)(3e^(−t) cos 3t − e^(−t) sin 3t)

so that tan 3t = 3 and t = 0.416 radians, which gives x_max = 0.4172.

(c) m = 1, b = 4, k = 4, x(0) = 0, x'(0) = 2

The DE is x'' + 4x' + 4x = 0, for which the characteristic equation r² + 4r + 4 = 0 gives r = −2, −2.

x(t) = c₁e^(−2t) + c₂te^(−2t)
x'(t) = −2c₁e^(−2t) + c₂(e^(−2t) − 2te^(−2t))

⇒ c₁ = 0, c₂ = 2. The solution is x(t) = 2te^(−2t).

To find the maximum displacement x_max, we set x'(t) = 0 and solve for t:

0 = 2(e^(−2t) − 2te^(−2t))

so that t = 1/2, which yields x_max = e^(−1).
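As an added numeric sanity check of part (b) above (this sketch is ours, not part of the original text; the derivatives are hand-coded from the solution):

```python
import math

# x(t) = (2/3) e^{-t} sin 3t and its hand-computed derivatives (Problem 50b)
def x(t):   return (2/3) * math.exp(-t) * math.sin(3*t)
def xp(t):  return (2/3) * math.exp(-t) * (3*math.cos(3*t) - math.sin(3*t))
def xpp(t): return (2/3) * math.exp(-t) * (-6*math.cos(3*t) - 8*math.sin(3*t))

# Residual of x'' + 2x' + 10x should vanish for all t
residual = max(abs(xpp(t) + 2*xp(t) + 10*x(t)) for t in [0.1*k for k in range(1, 30)])

# Critical point: tan 3t = 3, so t = atan(3)/3 ~ 0.416, and x there ~ 0.4172
t_star = math.atan(3) / 3
x_max = x(t_star)
```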

358 CHAPTER 4 Higher-Order Linear Differential Equations

Oscillating Euler-Cauchy

51. We used the substitution y = t^r and obtained, for r₁ = α + βi and r₂ = α − βi, the solution

y(t) = k₁t^(α+iβ) + k₂t^(α−iβ) = k₁e^((α+iβ)ln t) + k₂e^((α−iβ)ln t) = t^α(k₁e^(iβ ln t) + k₂e^(−iβ ln t)) = t^α[c₁ cos(β ln t) + c₂ sin(β ln t)].

This is the same process as that used at the start of Case 3 in the text, utilizing Euler's Formula (4).

52. t²y'' + 2ty' + y = 0, r(r − 1) + 2r + 1 = 0, r² + r + 1 = 0, r = −1/2 ± (√3/2)i,

y(t) = t^(−1/2)[c₁ cos((√3/2) ln t) + c₂ sin((√3/2) ln t)]

53. t²y'' + 3ty' + 5y = 0

Letting y = t^r yields

t²r(r − 1)t^(r−2) + 3t·rt^(r−1) + 5t^r = 0, i.e., t^r{r(r − 1) + 3r + 5} = 0,

and gives r² + 2r + 5 = 0, which has roots −1 ± 2i. Hence, the solution is

y(t) = t^(−1)[c₁ cos(2 ln t) + c₂ sin(2 ln t)].
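A quick numerical verification of Problem 53 (an added sketch, not part of the original; the derivative formulas are hand-computed from y = cos(2 ln t)/t):

```python
import math

# Check that y(t) = cos(2 ln t)/t solves t^2 y'' + 3t y' + 5y = 0 (Problem 53)
def y(t):
    L = 2*math.log(t)
    return math.cos(L)/t
def yp(t):
    L = 2*math.log(t)
    return -(2*math.sin(L) + math.cos(L))/t**2
def ypp(t):
    L = 2*math.log(t)
    return (6*math.sin(L) - 2*math.cos(L))/t**3

residual = max(abs(t*t*ypp(t) + 3*t*yp(t) + 5*y(t))
               for t in [0.5 + 0.25*k for k in range(20)])
```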

54. t²y'' + 17ty' + 16y = 0. Euler-Cauchy: y = t^m, t > 0

m(m − 1) + 17m + 16 = 0 (characteristic equation)
m² + 16m + 16 = 0

m = (−16 ± √((16)² − 4(16)))/2 = −8 ± 4√3

Both roots are real, so the general solution is

y(t) = c₁t^(−8+4√3) + c₂t^(−8−4√3)


Third-Order Euler-Cauchy

55. The third-order Euler-Cauchy equation has the form at³y''' + bt²y'' + cty' + dy = 0. The derivatives of y = t^r (t > 0) are

y' = rt^(r−1)
y'' = r(r − 1)t^(r−2)
y''' = r(r − 1)(r − 2)t^(r−3)

Substitute these equations into the third-order Euler-Cauchy equation above to obtain:

at³r(r − 1)(r − 2)t^(r−3) + bt²r(r − 1)t^(r−2) + ct·rt^(r−1) + dt^r = 0
[ar(r − 1)(r − 2) + br(r − 1) + cr + d]t^r = 0

Dividing by t^r, we obtain the characteristic equation:

ar(r − 1)(r − 2) + br(r − 1) + cr + d = 0

Third-Order Euler-Cauchy Problems

56. t³y''' + t²y'' − 2ty' + 2y = 0 has Euler-Cauchy characteristic equation:

r(r − 1)(r − 2) + r(r − 1) − 2r + 2 = 0
r³ − 3r² + 2r + r² − r − 2r + 2 = 0
r³ − 2r² − r + 2 = 0

Note: r = 1 is a zero of the polynomial f(r) = r³ − 2r² − r + 2 because f(1) = 1 − 2 − 1 + 2 = 0. Therefore r − 1 is a factor of r³ − 2r² − r + 2, which enables us to find the other factors.

r³ − 2r² − r + 2 = (r − 1)(r + 1)(r − 2)

so r = 1, −1, 2. Hence, the general solution to this Euler-Cauchy DE is

y(t) = c₁t + c₂t^(−1) + c₃t²,

for t > 0.
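The root-finding in Problem 56 can be double-checked by evaluating the Euler-Cauchy characteristic polynomial at the claimed roots (an added sketch, not from the text):

```python
# Characteristic polynomial of t^3 y''' + t^2 y'' - 2t y' + 2y = 0 (Problem 56):
# a r(r-1)(r-2) + b r(r-1) + c r + d with a = 1, b = 1, c = -2, d = 2
def char(r):
    return r*(r - 1)*(r - 2) + r*(r - 1) - 2*r + 2

roots = [1, -1, 2]
residuals = [char(r) for r in roots]   # each should be exactly 0
```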

57. t³y''' + 3t²y'' + 5ty' = 0. Let y = t^m, t > 0.

m(m − 1)(m − 2) + 3m(m − 1) + 5m = 0 (characteristic equation)
m³ − 3m² + 2m + 3m² − 3m + 5m = 0
m³ + 4m = 0, so m = 0, ±2i

y(t) = c₁ + c₂ cos(2 ln t) + c₃ sin(2 ln t)


Inverted Pendulum

58. The differential equation x'' − x = 0 has the characteristic equation r² − 1 = 0 with roots ±1. Hence, the general solution is

x(t) = c₁e^t + c₂e^(−t).

(a) With initial conditions x(0) = 0, x'(0) = 1, we find c₁ = 1/2 and c₂ = −1/2. Hence, the solution of the IVP is

x(t) = (1/2)e^t − (1/2)e^(−t) = sinh t.

(b) As t → ∞, x(t) → 0 if c₁ = 0, and then x'(t) → 0 also. This will happen whenever x'(0) = −x(0).
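The sinh solution of Problem 58(a) can be checked directly, since (sinh t)'' = sinh t (an added sketch, not from the text):

```python
import math

# x(t) = sinh t solves x'' - x = 0 with x(0) = 0, x'(0) = cosh 0 = 1 (Problem 58a)
def x(t):   return math.sinh(t)
def xpp(t): return math.sinh(t)   # second derivative of sinh is sinh

residual = max(abs(xpp(t) - x(t)) for t in [0.3*k for k in range(10)])
ic_ok = (x(0.0) == 0.0 and math.cosh(0.0) == 1.0)
```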

Pendulum and Inverted Pendulum

59. (a) The inverted pendulum equation has characteristic equation r² − 1 = 0, which has roots ±1. Hence, the solution is

x(t) = c₁e^t + c₂e^(−t) = c₁(cosh t + sinh t) + c₂(cosh t − sinh t) = C₁ sinh t + C₂ cosh t,

where C₁ = c₁ − c₂, C₂ = c₁ + c₂.

(b) The characteristic equation of the pendulum equation is r² + 1 = 0, which has roots ±i. Hence, the solution is

x(t) = c₁ cos t + c₂ sin t.

(c) The reader may think something strange about this because one form (a) appears real and (b) complex, but they are really the same; the difference is taken up by how one chooses the coefficients c₁, c₂ in each case. The span of {e^(it), e^(−it)} is the same as the span of {sin t, cos t}.


Finding the Damped Oscillation

60. The initial conditions x(0) = 1, x'(0) = 1 give the constants c₁ = 1, c₂ = 2. Hence, we have

x(t) = e^(−t)(cos t + 2 sin t).

[Figure: graph of x(t) = e^(−t)(cos t + 2 sin t) in the tx-plane.]

Extremes of Damped Oscillations

61. The local maxima and minima of the curve

x(t) = e^(αt)(c₁ cos ωt + c₂ sin ωt)

occur where x'(t) = 0. Since

x'(t) = e^(αt)[(αc₁ + ωc₂) cos ωt + (αc₂ − ωc₁) sin ωt]

and the exponential factor e^(αt) never vanishes, the spacing of the extrema does not depend on it: the bracketed combination can be rewritten as A cos(ωt − δ), which has period T = 2π/ω. Hence, consecutive maxima and minima occur at equidistant values of t, the distance between them being one-half the period, or π/ω. (You can note in Problem 32 that the time between the first local maximum and the first local minimum is π/1 = π.)

Underdamped Mass-Spring System

62. We are given parameters and initial conditions

m = 0.25, b = 1, k = 4, x(0) = 1, x'(0) = 0.

Hence, the IVP is

0.25x'' + x' + 4x = 0, x(0) = 1, x'(0) = 0,

which has the solution

x(t) = e^(−2t)(cos 2√3 t + (√3/3) sin 2√3 t).


Damped Mass-Spring System

63. The IVP is

x'' + bx' + 64x = 0, x(0) = 1, x'(0) = 0.

(a) b = 10 (underdamped): x(t) = e^(−5t)(cos √39 t + (5/√39) sin √39 t)

(b) b = 16 (critically damped): x(t) = (1 + 8t)e^(−8t)

(c) b = 20 (overdamped): x(t) = (1/3)(4e^(−4t) − e^(−16t))
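The three cases of Problem 63 line up with the sign of the discriminant b² − 4mk, which is the standard classification test (an added sketch, not from the text):

```python
import math

# Classify damping for x'' + b x' + 64 x = 0 (m = 1, k = 64, Problem 63):
# underdamped if b^2 - 4mk < 0, critical if = 0, overdamped if > 0.
m, k = 1, 64
b_crit = 2*math.sqrt(m*k)                  # critical damping value, = 16
disc = {b: b*b - 4*m*k for b in (10, 16, 20)}
```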

LRC-Circuit I

64. (a) The IVP is

LQ'' + RQ' + (1/C)Q = Q'' + 8Q' + 25Q = 0, Q(0) = 1, Q'(0) = 0

(b) Q(t) = e^(−4t)(cos 3t + (4/3) sin 3t) = (5/3)e^(−4t) cos(3t − δ), where δ = tan^(−1)(4/3)

(c) I(t) = Q'(t) = −5e^(−4t) sin(3t − δ) − (20/3)e^(−4t) cos(3t − δ), where δ = tan^(−1)(4/3)

(d) The charge on the capacitor and the current in the circuit approach zero as t → +∞.
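The charge solution in part (b) can be verified against the circuit equation (an added sketch; the derivative formulas are hand-computed, not from the text):

```python
import math

# Q(t) = e^{-4t}(cos 3t + (4/3) sin 3t) should solve Q'' + 8Q' + 25Q = 0
# with Q(0) = 1, Q'(0) = 0 (Problem 64).
def Q(t):   return math.exp(-4*t)*(math.cos(3*t) + (4/3)*math.sin(3*t))
def Qp(t):  return math.exp(-4*t)*(-25/3)*math.sin(3*t)
def Qpp(t): return math.exp(-4*t)*((100/3)*math.sin(3*t) - 25*math.cos(3*t))

residual = max(abs(Qpp(t) + 8*Qp(t) + 25*Q(t)) for t in [0.1*k for k in range(20)])
```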

LRC-Circuit II

65. (a) The IVP is

LQ'' + RQ' + (1/C)Q = (1/4)Q'' + Q' + 4Q = 0, Q(0) = 1, Q'(0) = 0

(b) Q(t) = e^(−2t)(cos 2√3 t + (√3/3) sin 2√3 t) = (2√3/3)e^(−2t) cos(2√3 t − δ), tan δ = √3/3

(c) I(t) = Q'(t) = −(4√3/3)e^(−2t) cos(2√3 t − δ) − 4e^(−2t) sin(2√3 t − δ), tan δ = √3/3

(d) As t → ∞, both Q(t) → 0 and I(t) → 0.

Computer Lab: Damped Free Vibrations

66. IDE Lab


Effects of Nonconstant Coefficients

67. x'' + (1/t)x = 0

(a) This ODE describes (among other things) an undamped vibrating spring in which the restoring force is initially very large (when t is near zero), but eventually decays to zero, causing the frequency of vibration to decrease and the solution period to increase as t increases.

(b) We plotted the solution with IC x(0.1) = 2, x'(0.1) = 0 in the tx and xx′ planes.

[Figure: solution curve in the tx-plane and trajectory in the xx′ phase plane.]

(c) As we expected, the tx graph shows that the period of the oscillation increases with t. We see also that the amplitude increases in the absence of friction. The xx′ phase portrait shows that as time and amplitude increase, velocity decreases, which is consistent with the previous observations. A good question for further exploration would be whether the amplitude increases indefinitely or levels off.

68. x'' + (1/t)x' + x = 0

(a) This ODE describes a damped vibrating spring in which the damping starts very large when t is near zero, but decays to zero. We suspect that initially the amplitude of a solution will rapidly decay, but as time increases the motion could become almost like simple harmonic oscillation, as there will be almost no friction.

(b) We plotted the solution with IC x(0.1) = 2, x'(0.1) = 0 in the tx as well as the xx′ planes.

[Figure: solution curve in the tx-plane and trajectory in the xx′ phase plane.]

(c) As first expected, the tx graph shows that the solution is rapidly decaying. However, the xx′ phase portrait, constructed with a longer time interval, shows that our second expectation is not confirmed. As time increases the oscillations do not become harmonic—the amplitude of the oscillations continues to decrease, gradually and indefinitely.

69. tx'' + x = 0

(a) If you divide by t, you will see that this equation is the same as the equation in Problem 67.

70. x'' + (x² − 1)x' + x = 0

(a) This ODE has negative friction for |x| < 1 and positive damping for |x| > 1. For a small initial condition near x = 0, we might expect the solution to grow and then oscillate around x = 1.

(b) We plotted the solutions in the tx and xx′ planes at initial velocity x'(0) = 0 for three different initial displacements: x(0) = 0.5, x(0) = 2.0, x(0) = 4.0.

[Figure: solution curves and phase-plane trajectories for the three initial conditions.]

(c) As expected, the tx graph shows that initially the solution is growing for x(0) = 0.5 and decaying for x(0) = 4. We also see that all the solutions seem to become periodic with the same amplitude and period, but we note that the motion is not exactly sinusoidal and that the amplitude is about 2 rather than 1 as we suspected. The xx′ phase portrait confirms that the long-term trajectories are not circular as in simple harmonic motion, but distorted as we see in the tx graph. This equation is called van der Pol's equation and describes oscillations (mostly electrical) where internal friction depends on the value of the dependent variable x; further details will be explored in Chapter 7.

71. x'' + (sin t)x' + x = 0

(a) In this ODE damping changes periodically from negative to positive, so we can predict oscillation in amplitude as well as periodic vibratory motion.

(b) We plotted the solution with IC x(0) = 2, x'(0) = 0 in the tx and xx′ planes.

(c) The tx graph looks like a superposition of two periodic oscillations. The xx′ phase portrait for a longer time interval shows that continued oscillations almost repeat, but never exactly. This is called quasi-periodic motion.

72. x'' + (1/t)x' + tx = 0

(a) For this ODE damping is initially large, but vanishes as time increases; the restoring force, on the other hand, is initially small but increases with time. How will these effects combine?

(b) We plotted the solution with IC x(0.1) = 2, x'(0.1) = 0 in the tx and xx′ planes.

[Figure: solution curve in the tx-plane and trajectory in the xx′ phase plane.]

(c) As we expected, the tx graph shows initially large damping, which rapidly decreases the amplitude of the solution, and increasing frequency, due to the effect of the increasing spring "constant", which shortens the period. The center of the xx′ graph will continue to fill in, very slowly, if you give it a much longer time interval.

73. x'' + (sin 2t)x = 0

(a) In this ODE the restoring force changes periodically from positive to negative, with a frequency that is different from the natural frequency of the spring. We expect some complicated but periodic motion.

(b) We plotted the solution with IC x(0) = 2, x'(0) = 0 in the tx and xx′ planes.

[Figure: solution curve in the tx-plane for 0 ≤ t ≤ 100.]

(c) The tx graph to t = 100 indeed looks almost periodic, with period 50. However, the xx′ phase portrait over a longer time interval shows that continued motion almost repeats, but never exactly. This is another example of quasi-periodic motion, as in Problem 71. Extending the tx graph will be another good way to see that the long-term motion is indeed not perfectly repeating.


Boundary-Value Problems

74. y'' + y = 0, y(0) = 0, y(π/2) = 0

y(t) = c₁ cos t + c₂ sin t
y(0) = 0 = c₁; y(π/2) = 0 = c₂, so y(t) = 0 is the solution.

75. y'' + y = 0, y(0) = 0, y(π/2) = 1

y(t) = c₁ cos t + c₂ sin t
y(0) = 0 = c₁; y(π/2) = 1 = c₂, so y(t) = sin t is the solution.

76. y'' + y = 0, y(0) = 1, y(π) = 1

y(t) = c₁ cos t + c₂ sin t
y(0) = 1 = c₁; y(π) = 1 = −c₁

These conditions are inconsistent: no solutions.

77. y'' + y = 0, y(π/4) = 1, y(π/2) = 2

y(t) = c₁ cos t + c₂ sin t
1 = (√2/2)c₁ + (√2/2)c₂, so c₁ + c₂ = √2
2 = c₂, so c₁ = √2 − 2

y(t) = (√2 − 2)cos t + 2 sin t is the solution.
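The boundary values in Problem 77 can be confirmed by direct evaluation (an added sketch, not from the text):

```python
import math

# y(t) = (sqrt(2) - 2) cos t + 2 sin t should satisfy
# y(pi/4) = 1 and y(pi/2) = 2 (Problem 77).
c1, c2 = math.sqrt(2) - 2, 2
def y(t): return c1*math.cos(t) + c2*math.sin(t)

bc1 = y(math.pi/4)   # should be 1
bc2 = y(math.pi/2)   # should be 2
```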


Exact Second-Order Differential Equations

78. y'' + (1/t)y' − (1/t²)y = 0 is the same as

y'' + [(1/t)y]' = 0.

Integrating, we obtain the linear equation

y' + (1/t)y = c₁,

for which the integrating factor is μ = e^(∫dt/t) = e^(ln t) = t, so we have ty' + y = c₁t. Thus

(d/dt)(ty) = c₁t, so ty = (c₁/2)t² + c₂ and y(t) = (c₁/2)t + c₂/t.

Substituting back into the original equation confirms that both terms satisfy it, so y(t) = (c₁/2)t + c₂/t is the general solution.

79. y'' + (2/t)y' − (2/t²)y = 0, i.e., y'' + [(2/t)y]' = 0

Integrating and setting c₁ = 0, we obtain

y' + (2/t)y = 0, with integrating factor μ = e^(∫2dt/t) = e^(2 ln t) = t².

Then t²y' + 2ty = 0, i.e., (d/dt)(t²y) = 0, so t²y = c and

y(t) = c/t².

80. (t² − 2t)y'' + 4(t − 1)y' + 2y = 0, where t² − 2t ≠ 0

Find (gy)'':

(gy)' = gy' + g'y
(gy)'' = (gy' + g'y)' = gy'' + g'y' + g'y' + g''y = gy'' + 2g'y' + g''y

Let g = t² − 2t. Then g' = 2t − 2, g'' = 2, so

(gy)'' = (t² − 2t)y'' + 4(t − 1)y' + 2y.

Then (gy)'' = 0, so (gy)' = c₁ and gy = c₁t + c₂, giving

y(t) = (c₁t + c₂)/(t² − 2t).
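The closed form in Problem 80 can be spot-checked numerically with finite differences (an added sketch, not from the text; `residual` and the step size h are our choices):

```python
# Verify y(t) = (c1 t + c2)/(t^2 - 2t) solves
# (t^2 - 2t) y'' + 4(t - 1) y' + 2y = 0 (Problem 80), using central differences.
def residual(c1, c2, t, h=1e-4):
    def y(s): return (c1*s + c2)/(s*s - 2*s)
    yp  = (y(t + h) - y(t - h))/(2*h)
    ypp = (y(t + h) - 2*y(t) + y(t - h))/h**2
    return (t*t - 2*t)*ypp + 4*(t - 1)*yp + 2*y(t)

worst = max(abs(residual(1.0, 2.0, t)) for t in [3.0, 4.0, 5.0])
```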

Suggested Journal Entry

81. Student Project


4.4

Undetermined Coefficients

Inspection First

1. y'' − y = t ⇒ y_p(t) = −t
2. y'' + y' = 2 ⇒ y_p(t) = 2t
3. y'' = 2 ⇒ y_p(t) = t²
4. ty'' + y' = 4t ⇒ y_p(t) = t²
5. y'' − 2y' + 2y = 4 ⇒ y_p(t) = 2
6. y'' − y = −2 cos t ⇒ y_p(t) = cos t
7. y'' − y' + y = e^t ⇒ y_p(t) = e^t
8. y''' + 2y' + 2y = 2 + 2t ⇒ y_p(t) = t

Educated Prediction

The homogeneous equation y'' + 2y' + 5y = 0 has characteristic equation r² + 2r + 5 = 0, which has complex roots −1 ± 2i. Hence,

y_h(t) = c₁e^(−t) sin 2t + c₂e^(−t) cos 2t,

so for the right-hand sides f(t), we try the following:

9. f(t) = t³ − 3t ⇒ y_p(t) = At³ + Bt² + Ct + D
10. f(t) = te^t ⇒ y_p(t) = (At + B)e^t
11. f(t) = 2 sin t ⇒ y_p(t) = A sin t + B cos t
12. f(t) = 2e^(−t) sin t ⇒ y_p(t) = e^(−t)(A cos t + B sin t)

Guess Again

The homogeneous equation y'' − 6y' + 9y = 0 has characteristic equation r² − 6r + 9 = 0, which has a double root 3, 3. Hence,

y_h(t) = c₁e^(3t) + c₂te^(3t).

We try particular solutions of the form:

13. f(t) = t cos 2t ⇒ y_p(t) = (At + B)sin 2t + (Ct + D)cos 2t

14. f(t) = te^(3t) ⇒ y_p(t) = (At³ + Bt²)e^(3t)

(We can't have any terms here dependent on terms in the homogeneous solution.)

15. f(t) = e^(−t) + sin t ⇒ y_p(t) = Ae^(−t) + B sin t + C cos t

16. f(t) = t⁴ − t² + 1 ⇒ y_p(t) = At⁴ + Bt³ + Ct² + Dt + E


Determining the Undetermined

17. y' = 1. The homogeneous solution is y_h(t) = c, where c is any constant. By simple inspection we observe that y_p(t) = t is a solution of the nonhomogeneous equation. Hence, the general solution is

y(t) = t + c.

18. y' + y = 1. The homogeneous solution is y_h(t) = ce^(−t), where c is any constant. By simple inspection we observe that y_p(t) = 1 is a solution of the nonhomogeneous equation. Hence, the general solution is

y(t) = ce^(−t) + 1.

19. y' + y = t. Here y_h(t) = ce^(−t); try y_p = At + B, y_p' = A. Substituting into the DE gives A + (At + B) = t. Coefficient of t: A = 1. Coefficient of 1: A + B = 0. Hence, A = 1, B = −1, so y_p = t − 1 and

y = ce^(−t) + t − 1.

20. y'' = 1. The homogeneous solution of the equation is

y_h(t) = c₁t + c₂,

where c₁, c₂ are arbitrary constants. By inspection, we note that y_p = (1/2)t² is a particular solution. Hence, the solution of the nonhomogeneous equation is

y(t) = (1/2)t² + c₁t + c₂.

If you could not find a particular solution by inspection, you could try a solution of the form y_p(t) = At² + Bt + C.

21. y'' + 4y' = 1. The characteristic equation is r² + 4r = 0, which has roots 0, −4. Hence, the homogeneous solution is

y_h(t) = c₁ + c₂e^(−4t).

The constant on the right-hand side of the differential equation indicates we seek a particular solution of the form y_p(t) = A, except that the homogeneous solution has a constant solution; thus we seek a solution of the form y_p(t) = At. Substituting this expression into the differential equation yields 4A = 1, or A = 1/4. Hence, we have a particular solution

y_p(t) = (1/4)t,

so the general solution is

y(t) = c₁ + c₂e^(−4t) + (1/4)t.

22. y'' + 4y = 1. The characteristic equation is r² + 4 = 0, which has roots ±2i. Hence, the homogeneous solution is

y_h(t) = c₁ cos 2t + c₂ sin 2t.

The constant on the right-hand side of the differential equation indicates we seek a particular solution of the form y_p(t) = A. Substituting this expression into the differential equation yields 4A = 1, or A = 1/4. We have a particular solution y_p(t) = 1/4, so the general solution is

y(t) = c₁ cos 2t + c₂ sin 2t + 1/4.

23. y'' + 4y' = t. The characteristic equation is r² + 4r = 0, which has roots 0, −4. Hence, the homogeneous solution is

y_h(t) = c₁ + c₂e^(−4t).

The term on the right-hand side of the differential equation indicates we seek a particular solution of the form

y_p(t) = At + B.

However, the homogeneous solution has a constant term, so we seek a solution of the form

y_p(t) = At² + Bt.

Substituting this expression into the differential equation yields

y'' + 4y' = 2A + 8At + 4B = t.

Setting the coefficients of t and 1 equal on both sides yields A = 1/8, B = −1/16. Thus, the solution is

y(t) = c₁ + c₂e^(−4t) + (1/8)t² − (1/16)t.

24. y'' + y' − 2y = 3 − 6t. The characteristic equation is r² + r − 2 = 0, which has roots −2 and 1. Hence, the homogeneous solution is

y_h(t) = c₁e^(−2t) + c₂e^t.

The linear polynomial on the right-hand side of the equation indicates we seek a particular solution of the form

y_p(t) = At + B.

(Note that we don't have any matches with the homogeneous solution.) Substituting this expression into the differential equation yields

y'' + y' − 2y = A − 2At − 2B = 3 − 6t,

so A = 3, B = 0. Hence, we have the general solution

y(t) = c₁e^(−2t) + c₂e^t + 3t.
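The particular solution of Problem 24 can be verified by plugging it back into the left-hand side (an added sketch, not from the text):

```python
# Check y_p = 3t for y'' + y' - 2y = 3 - 6t (Problem 24):
# y_p'' = 0 and y_p' = 3, so the left side reduces to 3 - 6t.
def lhs(t):
    yp_val, yp_prime, yp_dprime = 3*t, 3, 0
    return yp_dprime + yp_prime - 2*yp_val

checks = [lhs(t) == 3 - 6*t for t in (0, 1, 2.5, -4)]
```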

25. y'' + y = 3 + e^t. The characteristic equation is r² + 1 = 0, which has roots ±i. Hence, the homogeneous solution is

y_h(t) = c₁ cos t + c₂ sin t.

The terms on the right-hand side of the differential equation indicate we seek a particular solution of the form

y_p(t) = Ae^t + B.

Substituting this expression into the differential equation yields

y'' + y = 2Ae^t + B = 3 + e^t.

Setting the coefficients of e^t and 1 equal on both sides gives A = 1/2, B = 3. Hence, we have the general solution

y(t) = c₁ cos t + c₂ sin t + (1/2)e^t + 3.

26. y'' − y' − 2y = 6e^t. The characteristic equation is r² − r − 2 = 0, which has roots −1 and 2. Hence, the homogeneous solution is

y_h(t) = c₁e^(−t) + c₂e^(2t).

The exponential term on the right-hand side of the differential equation indicates we seek a particular solution of the form

y_p(t) = Ae^t.

(Note this is not linearly dependent on any of the exponential terms in the homogeneous solution.) Substituting this expression into the differential equation we get

y'' − y' − 2y = −2Ae^t = 6e^t.

Hence, A = −3, and we have a particular solution

y_p(t) = −3e^t,

and hence

y(t) = c₁e^(−t) + c₂e^(2t) − 3e^t.

27. y'' + y' = 6 sin 2t. The characteristic equation is r² + r = 0, which has roots 0 and −1. Hence, the homogeneous solution is

y_h(t) = c₁ + c₂e^(−t).

The sine term on the right-hand side of the differential equation indicates we seek a particular solution of the form

y_p(t) = A cos 2t + B sin 2t.

Substituting into the differential equation yields

y'' + y' = (−4A + 2B)cos 2t + (−4B − 2A)sin 2t = 6 sin 2t.

Comparing coefficients yields the equations

−4A + 2B = 0
−4B − 2A = 6,

which have the solution A = −3/5, B = −6/5. Hence, we have

y_p(t) = −(3/5)cos 2t − (6/5)sin 2t,

and the general solution is

y(t) = c₁ + c₂e^(−t) − (3/5)cos 2t − (6/5)sin 2t.

28. y'' + 4y' + 5y = 2e^t. The characteristic equation of the differential equation is r² + 4r + 5 = 0, which has roots −2 ± i. Hence, the homogeneous solution is

y_h(t) = e^(−2t)(c₁ cos t + c₂ sin t).

The exponential on the right-hand side of the differential equation indicates we seek a particular solution of the form

y_p(t) = Ae^t.

Substituting this expression into the differential equation yields

y'' + 4y' + 5y = 10Ae^t = 2e^t,

which yields A = 1/5. Hence, we have a particular solution y_p(t) = (1/5)e^t, and the general solution is given by

y(t) = e^(−2t)(c₁ cos t + c₂ sin t) + (1/5)e^t.

29. y'' + 4y' + 4y = te^(−t). The characteristic equation is given by r² + 4r + 4 = 0, which has a double root of −2, so the homogeneous solution is

y_h(t) = c₁e^(−2t) + c₂te^(−2t).

The term on the right-hand side of the differential equation indicates we seek a particular solution of the form

y_p(t) = Ate^(−t) + Be^(−t).

Substituting this expression into the differential equation yields

y'' + 4y' + 4y = Ate^(−t) + (2A + B)e^(−t) = te^(−t).

Comparing coefficients yields equations, which we solve, getting A = 1, B = −2. Hence, the general solution is

y(t) = c₁e^(−2t) + c₂te^(−2t) + te^(−t) − 2e^(−t).

30. y'' − y = t sin t. The characteristic equation is r² − 1 = 0, which has roots ±1. Hence, the homogeneous solution is

y_h(t) = c₁e^t + c₂e^(−t).

The term on the right-hand side of the differential equation indicates we seek a particular solution

y_p(t) = (At + B)cos t + (Ct + D)sin t.

Differentiating this expression two times and substituting it into the differential equation yields the algebraic equation

y'' − y = −2Ct sin t − 2At cos t + (−2A − 2D)sin t + (2C − 2B)cos t = t sin t.

Comparing terms in sin t, cos t, t sin t, t cos t, we get equations that yield

A = 0, B = −1/2, C = −1/2, D = 0.

Hence,

y(t) = c₁e^t + c₂e^(−t) − (1/2)(t sin t + cos t).


31. y'' + y = 12cos² t. The characteristic equation is r² + 1 = 0, which has roots ±i. Hence, the homogeneous solution is

y_h(t) = c₁ cos t + c₂ sin t.

Using the trigonometric identity

cos² t = (1/2)(1 + cos 2t),

the term on the right-hand side of the differential equation becomes

12cos² t = 6(1 + cos 2t).

Hence, we seek a particular solution of the form

y_p(t) = A cos 2t + B sin 2t + C.

Substituting this into the differential equation yields

y'' + y = −3A cos 2t − 3B sin 2t + C = 6 + 6cos 2t.

Comparing coefficients, we get A = −2, B = 0, C = 6, so the general solution is

y(t) = c₁ cos t + c₂ sin t − 2cos 2t + 6.
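Both the double-angle identity and the particular solution of Problem 31 can be checked numerically (an added sketch, not from the text):

```python
import math

# Identity used above: 12 cos^2 t = 6 + 6 cos 2t
ident_err = max(abs(12*math.cos(t)**2 - (6 + 6*math.cos(2*t)))
                for t in [0.2*k for k in range(30)])

# y_p = -2 cos 2t + 6 in y'' + y = 12 cos^2 t; y_p'' = 8 cos 2t
def yp(t):  return -2*math.cos(2*t) + 6
def ypp(t): return 8*math.cos(2*t)

resid = max(abs(ypp(t) + yp(t) - 12*math.cos(t)**2)
            for t in [0.2*k for k in range(30)])
```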

32. y'' − y = 8te^t. The characteristic equation is r² − 1 = 0, which has roots ±1. Hence, the homogeneous solution is

y_h(t) = c₁e^t + c₂e^(−t).

The term on the right-hand side of the differential equation suggests a particular solution

y_p(t) = Ate^t + Be^t,

but one term in the homogeneous solution is linearly dependent on this term, so we seek

y_p(t) = e^t(At² + Bt).

Substituting this expression into the differential equation yields

y'' − y = 4Ate^t + (2A + 2B)e^t = 8te^t,

which gives the two equations 4A = 8 and 2A + 2B = 0, so A = 2, B = −2. Hence, the general solution is

y(t) = c₁e^t + c₂e^(−t) + 2te^t(t − 1).

33. y'' − 4y' + 4y = te^(2t). The characteristic equation of the differential equation is r² − 4r + 4 = 0, which has a double root of 2. Hence, the homogeneous solution is

y_h(t) = c₁e^(2t) + c₂te^(2t).

The term on the right-hand side of the differential equation suggests a particular solution of the form

y_p(t) = Ate^(2t) + Be^(2t),

but both terms are linearly dependent with terms in the homogeneous solution, so we choose

y_p(t) = At³e^(2t) + Bt²e^(2t).

Differentiating and substituting this expression into the differential equation yields the algebraic equation

y'' − 4y' + 4y = (6At + 2B)e^(2t) = te^(2t).

Comparing coefficients, we get A = 1/6, B = 0. Hence, the general solution is

y(t) = c₁e^(2t) + c₂te^(2t) + (1/6)t³e^(2t).

34. y'' − 4y' + 3y = 20cos t. The characteristic equation of the differential equation is r² − 4r + 3 = 0, which has roots 1, 3. Hence,

y_h(t) = c₁e^t + c₂e^(3t).

The term on the right-hand side of the differential equation indicates we seek

y_p(t) = A cos t + B sin t.

Substituting this expression into the differential equation yields

y'' − 4y' + 3y = (4A + 2B)sin t + (2A − 4B)cos t = 20cos t.

Comparing coefficients yields A = 2, B = −4. Hence,

y(t) = c₁e^t + c₂e^(3t) + 2cos t − 4sin t.

35. y'' − 3y' + 2y = e^t sin t. The characteristic equation of the differential equation is r² − 3r + 2 = 0, which has roots 1, 2. Hence,

y_h(t) = c₁e^t + c₂e^(2t).

The term on the right-hand side of the equation indicates we seek a particular solution

y_p(t) = Ae^t cos t + Be^t sin t.

Differentiating and substituting this expression into the equation yields

y'' − 3y' + 2y = (−A − B)e^t cos t + (A − B)e^t sin t = e^t sin t.

Comparing coefficients, we find A = 1/2, B = −1/2, yielding the general solution

y(t) = c₁e^t + c₂e^(2t) + (1/2)e^t(cos t − sin t).

36. y'' + 3y' = sin t + 2cos t. The characteristic equation is r² + 3r = 0, which has roots 0, −3. Hence, the homogeneous solution is

y_h(t) = c₁ + c₂e^(−3t).

The sine and cosine terms on the right-hand side of the equation indicate we seek a particular solution of the form

y_p(t) = A cos t + B sin t.

Substituting this into the equation yields

y'' + 3y' = (−A + 3B)cos t + (−B − 3A)sin t = sin t + 2cos t.

Comparing terms, we arrive at −A + 3B = 2 and −B − 3A = 1, yielding A = −1/2, B = 1/2. From this, the general solution is

y(t) = c₁ + c₂e^(−3t) + (1/2)(sin t − cos t).

37. y''' − 4y'' = 6t

(1) Find y_h: r³ − 4r² = 0 ⇒ r²(r − 4) = 0 ⇒ r = 0, 0, 4

∴ y_h = c₁ + c₂t + c₃e^(4t)

(2) Find y_p: y_p = t²(At + B) = At³ + Bt²

y_p' = 3At² + 2Bt
y_p'' = 6At + 2B
y_p''' = 6A

y_p''' − 4y_p'' = 6A − 4(6At + 2B) = −24At + 6A − 8B = 6t

Coefficient of t: −24A = 6; coefficient of 1: 6A − 8B = 0,

so A = −1/4 and 8B = 6(−1/4) = −3/2, so that B = −3/16.

Hence y_p = −(1/4)t³ − (3/16)t²

(3) y(t) = y_h + y_p = c₁ + c₂t + c₃e^(4t) − (1/4)t³ − (3/16)t²

38. y''' − 3y'' + 3y' − y = e^t

(1) Find y_h: r³ − 3r² + 3r − 1 = 0 (characteristic equation)

f(r) = r³ − 3r² + 3r − 1; f(1) = 1 − 3 + 3 − 1 = 0, so r = 1 is a root.

By long division, we obtain

r³ − 3r² + 3r − 1 = (r − 1)(r² − 2r + 1) = (r − 1)³

Triple root r = 1, 1, 1:

y_h = c₁e^t + c₂te^t + c₃t²e^t

(2) Find y_p: y_p = t³(Ae^t) = At³e^t

y_p' = (At³ + 3At²)e^t
y_p'' = (At³ + 6At² + 6At)e^t
y_p''' = (At³ + 9At² + 18At + 6A)e^t

Substituting into the DE, the t³, t², and t terms all cancel, leaving

y_p''' − 3y_p'' + 3y_p' − y_p = 6Ae^t = e^t, so that A = 1/6.

Thus y_p = (1/6)t³e^t.

(3) y(t) = y_h + y_p = c₁e^t + c₂te^t + c₃t²e^t + (1/6)t³e^t
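The cancellation claimed in Problem 38 can be confirmed numerically using the hand-computed derivatives above (an added sketch, not from the text):

```python
import math

# y_p = (1/6) t^3 e^t should solve y''' - 3y'' + 3y' - y = e^t (Problem 38).
def y(t):  return (t**3/6)*math.exp(t)
def y1(t): return (t**3/6 + t**2/2)*math.exp(t)
def y2(t): return (t**3/6 + t**2 + t)*math.exp(t)
def y3(t): return (t**3/6 + 1.5*t**2 + 3*t + 1)*math.exp(t)

worst = max(abs(y3(t) - 3*y2(t) + 3*y1(t) - y(t) - math.exp(t))
            for t in [0.0, 0.5, 1.0, 2.0])
```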


39. y^(4) − y = 10

(1) Find y_h: r⁴ − 1 = 0

(r² + 1)(r² − 1) = (r² + 1)(r + 1)(r − 1), so r = ±i, ±1

y_h = c₁ cos t + c₂ sin t + c₃e^t + c₄e^(−t)

(2) Find y_p: y_p = A, so that y_p' = y_p'' = y_p''' = y_p^(4) = 0

⇒ y_p^(4) − y_p = 0 − A = 10 ⇒ A = −10 ⇒ y_p = −10

(3) y(t) = y_h + y_p = c₁ cos t + c₂ sin t + c₃e^t + c₄e^(−t) − 10

40. y''' = y'' ⇒ y''' − y'' = 0

(1) Find y_h: r³ − r² = 0 ⇒ r²(r − 1) = 0, so r = 0, 0, 1

(2) There is no y_p because the DE is homogeneous.

(3) y(t) = c₁ + c₂t + c₃e^t

Initial-Value Problems

41. y'' + y' − 2y = 3 − 6t, y(0) = −1, y'(0) = 0

(1) Find y_h: r² + r − 2 = 0 ⇒ (r − 1)(r + 2) = 0 ⇒ r = 1, −2

∴ y_h = c₁e^t + c₂e^(−2t)

(2) Find y_p: y_p = At + B, y_p' = A, y_p'' = 0 ⇒ y_p'' + y_p' − 2y_p = A − 2(At + B) = 3 − 6t

Coefficient of t: −2A = −6; coefficient of 1: A − 2B = 3 ⇒ A = 3, B = 0

∴ y_p = 3t

(3) y(t) = y_h + y_p = c₁e^t + c₂e^(−2t) + 3t; y' = c₁e^t − 2c₂e^(−2t) + 3

y(0) = −1 ⇒ c₁ + c₂ = −1; y'(0) = 0 ⇒ c₁ − 2c₂ + 3 = 0

Solving c₁ + c₂ = −1 and c₁ − 2c₂ = −3 gives c₁ = −5/3, c₂ = 2/3.

∴ y = −(5/3)e^t + (2/3)e^(−2t) + 3t
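The constants found in Problem 41 can be checked against the initial conditions (an added sketch, not from the text):

```python
import math

# y(t) = -(5/3)e^t + (2/3)e^{-2t} + 3t (Problem 41) should satisfy
# y(0) = -1 and y'(0) = 0.
def y(t):  return -(5/3)*math.exp(t) + (2/3)*math.exp(-2*t) + 3*t
def yp(t): return -(5/3)*math.exp(t) - (4/3)*math.exp(-2*t) + 3

ic1 = y(0.0)    # -5/3 + 2/3 = -1
ic2 = yp(0.0)   # -5/3 - 4/3 + 3 = 0
```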

42. y'' + 4y' + 4y = te^(−t), y(0) = −1, y'(0) = 1

(1) Find y_h: r² + 4r + 4 = 0 ⇒ (r + 2)² = 0 ⇒ r = −2, −2

∴ y_h = c₁e^(−2t) + c₂te^(−2t) = (c₁ + c₂t)e^(−2t)

(2) Find y_p: y_p = e^(−t)(At + B)

y_p' = −e^(−t)(At + B) + Ae^(−t)
y_p'' = e^(−t)(At + B) − 2Ae^(−t)

So y_p'' + 4y_p' + 4y_p = e^(−t)(At + B) − 2Ae^(−t) + 4[−e^(−t)(At + B) + Ae^(−t)] + 4e^(−t)(At + B) = e^(−t)(At + 2A + B)

This gives A = 1, 2A + B = 0, and so A = 1 and B = −2.

Therefore y_p = e^(−t)(t − 2).

(3) y = y_h + y_p = c₁e^(−2t) + c₂te^(−2t) + e^(−t)(t − 2)

y' = −2c₁e^(−2t) + c₂e^(−2t) − 2c₂te^(−2t) − e^(−t)(t − 2) + e^(−t)

y(0) = −1 ⇒ c₁ − 2 = −1 ⇒ c₁ = 1
y'(0) = 1 ⇒ −2c₁ + c₂ + 2 + 1 = 1 ⇒ c₂ = 0

∴ y(t) = e^(−2t) + e^(−t)(t − 2)

43. y'' + 4y = t,  y(0) = 1, y'(0) = -1

(1) Find y_h: r^2 + 4 = 0 ⇒ r = ±2i ⇒ y_h = c1 cos 2t + c2 sin 2t

(2) Find y_p: y_p = At + B, y_p' = A, y_p'' = 0

∴ y_p'' + 4y_p = 4(At + B) = 4At + 4B = t

coefficient of t: 4A = 1, coefficient of 1: 4B = 0 ⇒ A = 1/4, B = 0

∴ y_p = (1/4)t

(3) y = y_h + y_p = c1 cos 2t + c2 sin 2t + (1/4)t,  y' = -2c1 sin 2t + 2c2 cos 2t + 1/4

y(0) = 1 ⇒ c1 = 1;  y'(0) = -1 ⇒ 2c2 + 1/4 = -1 ⇒ 2c2 = -5/4 ⇒ c2 = -5/8

∴ y(t) = cos 2t - (5/8) sin 2t + (1/4)t

SECTION 4.4 Undetermined Coefficients 381

44. y'' + 2y' + y = 6 cos t,  y(0) = 1, y'(0) = -1

(1) Find y_h: r^2 + 2r + 1 = 0 ⇒ (r + 1)^2 = 0 ⇒ r = -1, -1

y_h = c1 e^{-t} + c2 te^{-t}

(2) Find y_p: y_p = A cos t + B sin t, y_p' = -A sin t + B cos t, y_p'' = -A cos t - B sin t

⇒ y_p'' + 2y_p' + y_p = (-A cos t - B sin t) + 2(B cos t - A sin t) + (A cos t + B sin t)
                      = 2B cos t - 2A sin t = 6 cos t

coefficient of cos t: 2B = 6, coefficient of sin t: -2A = 0 ⇒ A = 0, B = 3

∴ y_p = 3 sin t

(3) y = y_h + y_p = c1 e^{-t} + c2 te^{-t} + 3 sin t,  y' = -c1 e^{-t} + c2 e^{-t} - c2 te^{-t} + 3 cos t

y(0) = 1 ⇒ c1 = 1;  y'(0) = -1 ⇒ -c1 + c2 + 3 = -1 ⇒ c2 = -3

∴ y(t) = e^{-t} - 3te^{-t} + 3 sin t

45. 4y'' + y = cos 2t,  y(0) = 1, y'(0) = 0

(1) Find y_h: 4r^2 + 1 = 0 ⇒ r^2 = -1/4 ⇒ r = ±(1/2)i ⇒ y_h = c1 cos(t/2) + c2 sin(t/2)

(2) Find y_p: y_p = A cos 2t + B sin 2t, y_p' = -2A sin 2t + 2B cos 2t, y_p'' = -4A cos 2t - 4B sin 2t

⇒ 4y_p'' + y_p = -16A cos 2t - 16B sin 2t + A cos 2t + B sin 2t
              = -15A cos 2t - 15B sin 2t = cos 2t

coefficient of cos 2t: -15A = 1, coefficient of sin 2t: -15B = 0 ⇒ A = -1/15, B = 0

∴ y_p = -(1/15) cos 2t

(3) y = y_h + y_p = c1 cos(t/2) + c2 sin(t/2) - (1/15) cos 2t

y' = -(1/2)c1 sin(t/2) + (1/2)c2 cos(t/2) + (2/15) sin 2t

y(0) = 1 ⇒ c1 - 1/15 = 1 ⇒ c1 = 16/15;  y'(0) = 0 ⇒ c2 = 0

∴ y(t) = (16/15) cos(t/2) - (1/15) cos 2t
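A quick sympy substitution (added here as a sketch; the tool choice is ours, not the manual's) confirms the Problem 45 answer:

```python
# Verification sketch (not part of the original text): check the Problem 45 answer
# y = (16/15) cos(t/2) - (1/15) cos 2t against 4y'' + y = cos 2t, y(0)=1, y'(0)=0.
import sympy as sp

t = sp.symbols('t')
y = sp.Rational(16, 15)*sp.cos(t/2) - sp.Rational(1, 15)*sp.cos(2*t)

residual = sp.simplify(4*y.diff(t, 2) + y - sp.cos(2*t))
ic_y = y.subs(t, 0)             # expected 1
ic_dy = y.diff(t).subs(t, 0)    # expected 0
```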

46. y'' + 9y = cos 3t,  y(0) = 1, y'(0) = -1

(1) Find y_h: r^2 + 9 = 0 ⇒ r = ±3i

y_h = c1 cos 3t + c2 sin 3t

(2) Find y_p: y_p = t(A cos 3t + B sin 3t)

y_p' = t(-3A sin 3t + 3B cos 3t) + (A cos 3t + B sin 3t)

y_p'' = t(-9A cos 3t - 9B sin 3t) + 2(-3A sin 3t + 3B cos 3t)

y_p'' + 9y_p = 6B cos 3t - 6A sin 3t = cos 3t

coefficient of cos 3t: 6B = 1, coefficient of sin 3t: -6A = 0 ⇒ A = 0, B = 1/6

so that y_p = (t/6) sin 3t

(3) y = y_h + y_p = c1 cos 3t + c2 sin 3t + (t/6) sin 3t,

y' = -3c1 sin 3t + 3c2 cos 3t + (1/6) sin 3t + (t/2) cos 3t

y(0) = 1 ⇒ c1 = 1;  y'(0) = -1 ⇒ 3c2 = -1 ⇒ c2 = -1/3

∴ y(t) = cos 3t - (1/3) sin 3t + (t/6) sin 3t
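Problem 46 is the resonant case (the forcing frequency matches a characteristic root), which is why the trial solution carries a factor of t. A verification sketch, again with sympy (our choice, not the manual's):

```python
# Verification sketch (not part of the original text): the resonant term (t/6) sin 3t
# is what produces the cos 3t forcing; check the full Problem 46 answer.
import sympy as sp

t = sp.symbols('t')
y = sp.cos(3*t) - sp.Rational(1, 3)*sp.sin(3*t) + (t/6)*sp.sin(3*t)

residual = sp.simplify(y.diff(t, 2) + 9*y - sp.cos(3*t))
ic_y = y.subs(t, 0)             # expected 1
ic_dy = y.diff(t).subs(t, 0)    # expected -1
```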


47. y'' - 3y' + 2y = 4e^{-t},  y(0) = 1, y'(0) = 0

(1) Find y_h: r^2 - 3r + 2 = 0 ⇒ (r - 1)(r - 2) = 0 ⇒ r = 1, r = 2, so that y_h = c1 e^t + c2 e^{2t}

(2) Find y_p: y_p = Ae^{-t}, y_p' = -Ae^{-t}, y_p'' = Ae^{-t}

y_p'' - 3y_p' + 2y_p = Ae^{-t} + 3Ae^{-t} + 2Ae^{-t} = 6Ae^{-t} = 4e^{-t} ⇒ A = 2/3, so that y_p = (2/3)e^{-t}

(3) y = y_h + y_p = c1 e^t + c2 e^{2t} + (2/3)e^{-t},  y' = c1 e^t + 2c2 e^{2t} - (2/3)e^{-t}

y(0) = 1 ⇒ c1 + c2 + 2/3 = 1 ⇒ c1 + c2 = 1/3
y'(0) = 0 ⇒ c1 + 2c2 - 2/3 = 0 ⇒ c1 + 2c2 = 2/3

⇒ c1 = 0, c2 = 1/3

Thus y(t) = (1/3)e^{2t} + (2/3)e^{-t}.

48. y'' - 4y' + 3y = e^{-t} + t,  y(0) = 0, y'(0) = 0

(1) Find y_h: r^2 - 4r + 3 = 0 ⇒ (r - 1)(r - 3) = 0, so that y_h = c1 e^t + c2 e^{3t}

(2) Find y_p: y_p = Ae^{-t} + Bt + C, y_p' = -Ae^{-t} + B, y_p'' = Ae^{-t}

Thus y_p'' - 4y_p' + 3y_p = Ae^{-t} - 4(-Ae^{-t} + B) + 3(Ae^{-t} + Bt + C)
                          = 8Ae^{-t} + 3Bt - 4B + 3C = e^{-t} + t

⇒ 8A = 1, 3B = 1, -4B + 3C = 0 ⇒ A = 1/8, B = 1/3, C = (4/3)B = 4/9.

Thus y_p = (1/8)e^{-t} + (1/3)t + 4/9.

(3) Find y: y = y_h + y_p = c1 e^t + c2 e^{3t} + (1/8)e^{-t} + (1/3)t + 4/9,

y' = c1 e^t + 3c2 e^{3t} - (1/8)e^{-t} + 1/3.

y(0) = 0 ⇒ c1 + c2 + 1/8 + 4/9 = 0 ⇒ c1 + c2 = -41/72
y'(0) = 0 ⇒ c1 + 3c2 - 1/8 + 1/3 = 0 ⇒ c1 + 3c2 = -5/24

⇒ c1 = -3/4, c2 = 13/72

Therefore, y(t) = -(3/4)e^t + (13/72)e^{3t} + (1/8)e^{-t} + (1/3)t + 4/9.
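Problem 48 combines two forcing terms, so the fractions are easy to slip on. A substitution sketch (sympy is our addition, not the manual's):

```python
# Verification sketch (not part of the original text): check the Problem 48 answer
# against y'' - 4y' + 3y = e^{-t} + t with zero initial conditions.
import sympy as sp

t = sp.symbols('t')
y = (-sp.Rational(3, 4)*sp.exp(t) + sp.Rational(13, 72)*sp.exp(3*t)
     + sp.Rational(1, 8)*sp.exp(-t) + t/3 + sp.Rational(4, 9))

residual = sp.simplify(y.diff(t, 2) - 4*y.diff(t) + 3*y - (sp.exp(-t) + t))
ic_y = y.subs(t, 0)             # expected 0
ic_dy = y.diff(t).subs(t, 0)    # expected 0
```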

49. y'' - y' - 2y = 4 cos 2t,  y(0) = 0, y'(0) = 0

(1) Find y_h: r^2 - r - 2 = 0 ⇒ (r + 1)(r - 2) = 0, so r = 2, -1

y_h = c1 e^{-t} + c2 e^{2t}

(2) Find y_p: y_p = A cos 2t + B sin 2t, y_p' = -2A sin 2t + 2B cos 2t, y_p'' = -4A cos 2t - 4B sin 2t

y_p'' - y_p' - 2y_p = -4A cos 2t - 4B sin 2t + 2A sin 2t - 2B cos 2t - 2A cos 2t - 2B sin 2t
                    = (-6A - 2B) cos 2t + (2A - 6B) sin 2t = 4 cos 2t

∴ coefficient of cos 2t: -6A - 2B = 4, coefficient of sin 2t: 2A - 6B = 0

so A = 3B and B = -1/5 ⇒ A = -3/5

Thus y_p = -(3/5) cos 2t - (1/5) sin 2t.

(3) y = y_h + y_p = c1 e^{-t} + c2 e^{2t} - (3/5) cos 2t - (1/5) sin 2t

y' = -c1 e^{-t} + 2c2 e^{2t} + (6/5) sin 2t - (2/5) cos 2t

y(0) = 0 ⇒ c1 + c2 - 3/5 = 0 ⇒ c1 + c2 = 3/5
y'(0) = 0 ⇒ -c1 + 2c2 - 2/5 = 0 ⇒ -c1 + 2c2 = 2/5

⇒ c1 = 4/15, c2 = 1/3

∴ y(t) = (4/15)e^{-t} + (1/3)e^{2t} - (3/5) cos 2t - (1/5) sin 2t

50. y''' - 4y'' + 3y' = t^2,  y(0) = 1, y'(0) = 0, y''(0) = 0

(1) Find y_h: r^3 - 4r^2 + 3r = 0 ⇒ r(r^2 - 4r + 3) = 0

r(r - 3)(r - 1) = 0, so r = 0, 1, 3

y_h = c1 + c2 e^t + c3 e^{3t}

(2) Find y_p: y_p = t(At^2 + Bt + C) = At^3 + Bt^2 + Ct

y_p' = 3At^2 + 2Bt + C,  y_p'' = 6At + 2B,  y_p''' = 6A

y_p''' - 4y_p'' + 3y_p' = 6A - 24At - 8B + 9At^2 + 6Bt + 3C = t^2

coefficient of t^2: 9A = 1, coefficient of t: -24A + 6B = 0, coefficient of 1: 6A - 8B + 3C = 0

A = 1/9, B = 4/9, C = 26/27

∴ y_p = (1/9)t^3 + (4/9)t^2 + (26/27)t

(3) y = y_h + y_p = c1 + c2 e^t + c3 e^{3t} + (1/9)t^3 + (4/9)t^2 + (26/27)t

Using this general solution and the initial conditions, we obtain c1 = 161/81, c2 = -1, c3 = 1/81:

y(t) = 161/81 - e^t + (1/81)e^{3t} + (1/9)t^3 + (4/9)t^2 + (26/27)t

51. y^(4) - y = e^{2t},  y(0) = y'(0) = y''(0) = y'''(0) = 0

(1) Find y_h: r^4 - 1 = 0 ⇒ (r^2 + 1)(r^2 - 1) = 0 ⇒ r = ±i, ±1

y_h = c1 cos t + c2 sin t + c3 e^t + c4 e^{-t}

(2) Find y_p: y_p = Ae^{2t}, y_p' = 2Ae^{2t}, y_p'' = 4Ae^{2t}, y_p''' = 8Ae^{2t}, y_p^(4) = 16Ae^{2t}

Thus y_p^(4) - y_p = 16Ae^{2t} - Ae^{2t} = e^{2t} ⇒ 15Ae^{2t} = e^{2t} ⇒ A = 1/15, so that y_p = (1/15)e^{2t}.

(3) y = y_h + y_p = c1 cos t + c2 sin t + c3 e^t + c4 e^{-t} + (1/15)e^{2t}

y' = -c1 sin t + c2 cos t + c3 e^t - c4 e^{-t} + (2/15)e^{2t}
y'' = -c1 cos t - c2 sin t + c3 e^t + c4 e^{-t} + (4/15)e^{2t}
y''' = c1 sin t - c2 cos t + c3 e^t - c4 e^{-t} + (8/15)e^{2t}

y(0) = 0 ⇒ c1 + c3 + c4 + 1/15 = 0
y'(0) = 0 ⇒ c2 + c3 - c4 + 2/15 = 0
y''(0) = 0 ⇒ -c1 + c3 + c4 + 4/15 = 0
y'''(0) = 0 ⇒ -c2 + c3 - c4 + 8/15 = 0

From these 4 equations in 4 unknowns, we obtain (by the methods of Chapter 3)

c1 = 1/10, c2 = 1/5, c3 = -1/4, and c4 = 1/12

∴ y(t) = (1/10) cos t + (1/5) sin t - (1/4)e^t + (1/12)e^{-t} + (1/15)e^{2t}
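The 4×4 linear system in Problem 51 is the most error-prone step, so a substitution sketch is worthwhile (sympy is our addition, not the manual's):

```python
# Verification sketch (not part of the original text): fourth-order check for Problem 51.
import sympy as sp

t = sp.symbols('t')
y = (sp.Rational(1, 10)*sp.cos(t) + sp.Rational(1, 5)*sp.sin(t)
     - sp.Rational(1, 4)*sp.exp(t) + sp.Rational(1, 12)*sp.exp(-t)
     + sp.Rational(1, 15)*sp.exp(2*t))

residual = sp.simplify(y.diff(t, 4) - y - sp.exp(2*t))
# y(0), y'(0), y''(0), y'''(0) should all be zero.
ics = [y.diff(t, k).subs(t, 0) for k in range(4)]
```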

52. y^(4) = e^t,  y(0) = 1, y'(0) = 0, y''(0) = 0, y'''(0) = 0

(1) Find y_h: r^4 = 0 ⇒ r = 0 (multiplicity 4)

y_h = c1 + c2 t + c3 t^2 + c4 t^3

(2) Find y_p: y_p = Ae^t, so y_p' = y_p'' = y_p''' = y_p^(4) = Ae^t

y_p^(4) = Ae^t = e^t ⇒ A = 1, so that y_p = e^t

(3) y = y_h + y_p = c1 + c2 t + c3 t^2 + c4 t^3 + e^t

y' = c2 + 2c3 t + 3c4 t^2 + e^t
y'' = 2c3 + 6c4 t + e^t
y''' = 6c4 + e^t

y(0) = 1 ⇒ c1 + 1 = 1 ⇒ c1 = 0
y'(0) = 0 ⇒ c2 + 1 = 0 ⇒ c2 = -1
y''(0) = 0 ⇒ 2c3 + 1 = 0 ⇒ c3 = -1/2
y'''(0) = 0 ⇒ 6c4 + 1 = 0 ⇒ c4 = -1/6

∴ y(t) = -t - (1/2)t^2 - (1/6)t^3 + e^t

53. 4y'' + y = t - cos(t/2)

Find y_h: y_h = c1 cos(t/2) + c2 sin(t/2)

Find y_p: y_p1 = At + B,  y_p2 = t(C cos(t/2) + D sin(t/2))

∴ y_p(t) = y_p1 + y_p2 = At + B + Ct cos(t/2) + Dt sin(t/2)

54. y''' - y'' = t^2 + e^t

Find y_h: r^3 - r^2 = 0 ⇒ r^2(r - 1) = 0 ⇒ r = 0, 0, 1

∴ y_h = c1 + c2 t + c3 e^t

Find y_p: y_p1 = t^2(At^2 + Bt + C),  y_p2 = t(De^t)

∴ y_p(t) = y_p1 + y_p2 = At^4 + Bt^3 + Ct^2 + Dte^t

55. y'' - 5y' + 6y = cos t - te^t

Find y_h: r^2 - 5r + 6 = 0 ⇒ (r - 2)(r - 3) = 0 ⇒ r = 2, 3

∴ y_h = c1 e^{2t} + c2 e^{3t}

Find y_p: y_p1 = A cos t + B sin t,  y_p2 = e^t(Ct + D)

∴ y_p(t) = y_p1 + y_p2 = A cos t + B sin t + e^t(Ct + D)

56. y^(4) - y = te^t + sin t

r^4 - 1 = 0 ⇒ (r^2 + 1)(r^2 - 1) = 0 ⇒ r = ±i, ±1

y_h = c1 cos t + c2 sin t + c3 e^t + c4 e^{-t}

y_p1 = te^t(At + B),  y_p2 = t(C cos t + D sin t)

∴ y_p(t) = e^t(At^2 + Bt) + Ct cos t + Dt sin t
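Problems 53–56 choose their trial forms by comparing each forcing term with the characteristic roots: when a forcing exponent coincides with a root, the trial is multiplied by t once per unit of the root's multiplicity. That bookkeeping can be sketched mechanically; here for Problem 54 (sympy and the variable names are our assumptions, not the manual's):

```python
# Sketch (not part of the original text): read trial-form t-factors off the
# characteristic roots of y''' - y'' = 0 (Problem 54, forcing t^2 + e^t).
import sympy as sp

r = sp.symbols('r')
char_roots = sp.roots(r**3 - r**2, r)   # root -> multiplicity

# Polynomial forcing t^2 corresponds to root r = 0; e^t corresponds to r = 1.
mult_zero = char_roots.get(0, 0)   # 2 -> multiply the polynomial trial by t^2
mult_one = char_roots.get(1, 0)    # 1 -> multiply the e^t trial by t
```

These multiplicities reproduce exactly the factors t^2 and t used in the Problem 54 trial above.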

Judicious Superposition

57. (a) The characteristic equation r^2 - r - 6 = 0 has roots r = 3, -2, so the general solution is

y_h(t) = c1 e^{3t} + c2 e^{-2t}.

(b) (i) Substituting y_p(t) = Ae^t yields Ae^t - Ae^t - 6Ae^t = e^t, which yields A = -1/6.
Hence, y_p(t) = -(1/6)e^t.

(ii) Substituting y_p(t) = Ae^{-t} yields Ae^{-t} + Ae^{-t} - 6Ae^{-t} = e^{-t}, or A = -1/4.
Hence, y_p(t) = -(1/4)e^{-t}.

(c) Calling L(y) = y'' - y' - 6y, we found in part (b) that

L(-(1/6)e^t) = e^t  and  L(-(1/4)e^{-t}) = e^{-t}.

Multiplying each equation by 1/2 and using basic properties of derivatives yields

L(-(1/12)e^t) = (1/2)e^t  and  L(-(1/8)e^{-t}) = (1/2)e^{-t},

and therefore

L(-(1/12)e^t - (1/8)e^{-t}) = (1/2)(e^t + e^{-t}) = cosh t.

Hence, a solution of y'' - y' - 6y = cosh t is

y_p(t) = -(1/12)e^t - (1/8)e^{-t}.
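The superposition argument of part (c) can be confirmed by one substitution (sympy is our addition, not the manual's):

```python
# Verification sketch (not part of the original text): substitute
# y_p = -(1/12)e^t - (1/8)e^{-t} into L(y) = y'' - y' - 6y and compare with cosh t.
import sympy as sp

t = sp.symbols('t')
yp = -sp.Rational(1, 12)*sp.exp(t) - sp.Rational(1, 8)*sp.exp(-t)

lhs = yp.diff(t, 2) - yp.diff(t) - 6*yp
# Rewrite cosh t in exponentials so the comparison is pure exp arithmetic.
residual = sp.simplify(lhs - sp.cosh(t).rewrite(sp.exp))
```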

Wholesale Superposition

58. We first solve the equation

y' + y = t^n/n!,

first getting the homogeneous solution

y_h(t) = ce^{-t}.

To find a particular solution, we try

y_p(t) = A_n t^n + A_{n-1} t^{n-1} + … + A_1 t + A_0.

Substituting this into the equation yields

[nA_n t^{n-1} + (n-1)A_{n-1} t^{n-2} + … + A_1] + [A_n t^n + A_{n-1} t^{n-1} + … + A_1 t + A_0] = t^n/n!.

Comparing coefficients, we have

A_n = 1/n!,  A_{n-1} = -1/(n-1)!,  A_{n-2} = 1/(n-2)!,  A_{n-3} = -1/(n-3)!,

and so on. Hence, we have

y_p(t) = t^n/n! - t^{n-1}/(n-1)! + t^{n-2}/(n-2)! - ….

Further, writing y_p^(n) for the particular solution belonging to forcing t^n/n!, we have

y_p^(0)(t) = 1
y_p^(1)(t) = t - 1
y_p^(2)(t) = t^2/2! - t + 1
y_p^(3)(t) = t^3/3! - t^2/2! + t - 1
y_p^(4)(t) = t^4/4! - t^3/3! + t^2/2! - t + 1
…
y_p^(n)(t) = t^n/n! - t^{n-1}/(n-1)! + t^{n-2}/(n-2)! - ….

By superposition, the sum of these solutions is a solution of y' + y = e^t. (We agree our discussion
is formal in the sense that we have proven superposition only for finite sums.) There is a slight
problem in adding the preceding functions because the sum changes form depending on whether
we add an even or odd number of terms. We have

S_{2n} = y_p^(0)(t) + y_p^(1)(t) + … + y_p^(2n)(t) = 1 + t^2/2! + t^4/4! + … + t^{2n}/(2n)!
S_{2n+1} = y_p^(0)(t) + y_p^(1)(t) + … + y_p^(2n+1)(t) = t + t^3/3! + t^5/5! + … + t^{2n+1}/(2n+1)!

However, because the sequence S_n converges, it converges to the average of the nth and
(n+1)st terms. That is,

(1/2)(S_{2n} + S_{2n+1}) = (1/2)(1 + t + t^2/2! + t^3/3! + …) = (1/2)e^t.

Hence, we have found y_p(t) = (1/2)e^t.
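Two of the claims above are easy to spot-check by substitution: each listed y_p^(n) solves y' + y = t^n/n!, and the formal sum (1/2)e^t solves y' + y = e^t. A sketch for n = 3 and for the sum (sympy is our addition, not the manual's):

```python
# Verification sketch (not part of the original text) for Problem 58.
import sympy as sp

t = sp.symbols('t')

# y_p^(3) should solve y' + y = t^3/3!.
yp3 = t**3/sp.factorial(3) - t**2/sp.factorial(2) + t - 1
res3 = sp.simplify(yp3.diff(t) + yp3 - t**3/sp.factorial(3))

# The formal sum (1/2)e^t should solve y' + y = e^t.
ysum = sp.exp(t)/2
res_sum = sp.simplify(ysum.diff(t) + ysum - sp.exp(t))
```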

Discontinuous Forcing Functions

59. y'' + y' = f(t),  f(t) = { 2, 0 ≤ t < 4;  1, t ≥ 4 },  y(0) = y'(0) = 0

Part 1: y1'' + y1' = 2,  y1(0) = 0, y1'(0) = 0

Find (y1)_h: r^2 + r = 0, r(r + 1) = 0, r = 0, -1 ⇒ (y1)_h = c1 + c2 e^{-t}

Find (y1)_p: (y1)_p = 2t by inspection, so

y1 = c1 + c2 e^{-t} + 2t,  0 = c1 + c2
y1' = -c2 e^{-t} + 2,      0 = -c2 + 2

⇒ c2 = 2, c1 = -2

∴ y1 = -2 + 2e^{-t} + 2t

Part 2: y2'' + y2' = 1

y2 = c1 + c2 e^{-t} + t,   y2(4) = y1(4) = -2 + 2e^{-4} + 8 = 6 + 2e^{-4}
y2' = -c2 e^{-t} + 1,      y2'(4) = y1'(4) = -2e^{-4} + 2

Thus, when t = 4,

c1 + c2 e^{-4} + 4 = 6 + 2e^{-4}
-c2 e^{-4} + 1 = -2e^{-4} + 2

-2e^{-4} + 1 = -c2 e^{-4} ⇒ -2 + e^4 = -c2 ⇒ c2 = 2 - e^4
c1 + (2 - e^4)e^{-4} + 4 = 6 + 2e^{-4} ⇒ c1 + 3 = 6 ⇒ c1 = 3

∴ y2 = 3 + (2 - e^4)e^{-t} + t

and y(t) = { -2 + 2e^{-t} + 2t,        0 ≤ t < 4
           { 3 + (2 - e^4)e^{-t} + t,  t ≥ 4
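For a discontinuous forcing function the things to confirm are that each piece solves its own ODE and that value and slope match at the break point t = 4. A sketch (sympy is our addition, not the manual's):

```python
# Verification sketch (not part of the original text) for Problem 59.
import sympy as sp

t = sp.symbols('t')
y1 = -2 + 2*sp.exp(-t) + 2*t                 # piece for 0 <= t < 4
y2 = 3 + (2 - sp.exp(4))*sp.exp(-t) + t      # piece for t >= 4

res1 = sp.simplify(y1.diff(t, 2) + y1.diff(t) - 2)
res2 = sp.simplify(y2.diff(t, 2) + y2.diff(t) - 1)

# Matching conditions at t = 4: continuity of y and of y'.
jump_value = sp.simplify((y1 - y2).subs(t, 4))
jump_slope = sp.simplify((y1 - y2).diff(t).subs(t, 4))
```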


60. y'' + 16y = f(t),  f(t) = { cos t, 0 ≤ t ≤ π;  0, t > π },  y(0) = 1, y'(0) = 0

Part 1: y1'' + 16y1 = cos t,  0 ≤ t ≤ π,  y1(0) = 1, y1'(0) = 0

Find (y1)_h: (y1)_h = c1 cos 4t + c2 sin 4t

Find (y1)_p: (y1)_p = A cos t + B sin t, (y1)_p'' = -A cos t - B sin t,

so (y1)_p'' + 16(y1)_p = 15A cos t + 15B sin t = cos t ⇒ A = 1/15, B = 0

Therefore, y1 = c1 cos 4t + c2 sin 4t + (1/15) cos t

y1' = -4c1 sin 4t + 4c2 cos 4t - (1/15) sin t

y1(0) = 1 = c1 + 1/15 ⇒ c1 = 14/15;  y1'(0) = 0 = 4c2 ⇒ c2 = 0

∴ y1(t) = (14/15) cos 4t + (1/15) cos t

Part 2: y2'' + 16y2 = 0,  t > π

y2 = c1 cos 4t + c2 sin 4t,  y2' = -4c1 sin 4t + 4c2 cos 4t

y2(π) = y1(π) = (14/15) cos 4π + (1/15) cos π = 14/15 - 1/15 = 13/15 ⇒ c1 = 13/15
y2'(π) = y1'(π) = 0 ⇒ 4c2 = 0 ⇒ c2 = 0

Thus y2(t) = (13/15) cos 4t

and y(t) = { (14/15) cos 4t + (1/15) cos t,  0 ≤ t ≤ π
           { (13/15) cos 4t,                 t > π


Solutions of Differential Equations Using Complex Functions

61. y'' - 2y' + y = 2 sin t

The homogeneous solution is y_h = c1 e^t + c2 te^t.

For the particular solution we use y'' - 2y' + y = 2e^{it} and seek the imaginary part of the
particular solution.

We let y_p = Ae^{it}. Then y_p' = iAe^{it} and y_p'' = -Ae^{it}.

By substitution, we obtain

-Ae^{it} - 2iAe^{it} + Ae^{it} = 2e^{it} ⇒ -2iA = 2 ⇒ A = -1/i = i

y_p = ie^{it} = i(cos t + i sin t) = i cos t - sin t

Im(y_p) = cos t

∴ y(t) = y_h + Im(y_p) = c1 e^t + c2 te^t + cos t

62. y'' + 25y = 6 sin t.  We will use y'' + 25y = 6e^{it}.

The homogeneous solution is y_h = c1 cos 5t + c2 sin 5t.

For the particular solution we want Im(y_p), where y_p = Ae^{it}; then y_p' = iAe^{it} and
y_p'' = -Ae^{it}.

By substitution, we obtain

-Ae^{it} + 25Ae^{it} = 6e^{it} ⇒ 24A = 6, so that A = 1/4

Im(y_p) = Im((1/4)e^{it}) = (1/4) sin t

∴ y(t) = c1 cos 5t + c2 sin 5t + (1/4) sin t

63. y'' + 25y = 20 sin 5t.  We will use y'' + 25y = 20e^{5it}.

The homogeneous solution is y_h = c1 cos 5t + c2 sin 5t.

For the particular solution we note that e^{5it} is included in y_h, so we must use an extra
factor of t in y_p. We want Im(y_p) where y_p = Ate^{5it}, so

y_p' = A(5it + 1)e^{5it},  y_p'' = A(5i)(5it + 1)e^{5it} + 5iAe^{5it} = A(10i e^{5it} - 25te^{5it}).

By substitution, we obtain

A(10i e^{5it} - 25te^{5it}) + 25Ate^{5it} = 20e^{5it}, so that 10Ai = 20. Thus A = -2i and
y_p = -2ite^{5it}.

Im(y_p) = Im(-2it(cos 5t + i sin 5t)) = -2t cos 5t

y = c1 cos 5t + c2 sin 5t - 2t cos 5t
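The real check on the complex-exponential shortcut is that the extracted imaginary part really does produce the resonant forcing. A sketch (sympy is our addition, not the manual's):

```python
# Verification sketch (not part of the original text): y_p = -2t cos 5t should
# satisfy y'' + 25y = 20 sin 5t (Problem 63).
import sympy as sp

t = sp.symbols('t')
yp = -2*t*sp.cos(5*t)
residual = sp.simplify(yp.diff(t, 2) + 25*yp - 20*sp.sin(5*t))
```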

Complex Exponents

64. y'' - 3y' + 2y = 3e^{2it}

The homogeneous solution is y_h = c1 e^t + c2 e^{2t}.

We seek a particular solution of the form y_p = Ae^{2it}.

Then y_p' = 2iAe^{2it} and y_p'' = -4Ae^{2it}.

By substitution, we obtain

-4Ae^{2it} - 3(2iAe^{2it}) + 2Ae^{2it} = 3e^{2it}
(-2A - 6Ai)e^{2it} = 3e^{2it}
(-2 - 6i)A = 3

A = 3/(-2 - 6i) = 3(-2 + 6i)/[(-2 - 6i)(-2 + 6i)] = (-6 + 18i)/40 = -3/20 + (9/20)i

y_p = (-3/20 + (9/20)i)e^{2it} = (-3/20 + (9/20)i)(cos 2t + i sin 2t)
    = [-(3/20) cos 2t - (9/20) sin 2t] + i[(9/20) cos 2t - (3/20) sin 2t]

It can be verified directly by substitution that

Re(y_p) = -(3/20) cos 2t - (9/20) sin 2t satisfies y'' - 3y' + 2y = 3 cos 2t

and that Im(y_p) = (9/20) cos 2t - (3/20) sin 2t satisfies y'' - 3y' + 2y = 3 sin 2t.

Suggested Journal Entry

65. Student Project


4.5

Variation of Parameters

Straight Stuff

1. y'' + y' = 4t

The homogeneous solutions to the equation are y1(t) = 1 and y2(t) = e^{-t}.

To find a particular solution of the form y_p(t) = v1 + v2 e^{-t}, we solve the equations

v1' + e^{-t} v2' = 0
     -e^{-t} v2' = 4t

for v1', v2'. This gives v1' = 4t and v2' = -4te^t.

Integrating yields v1(t) = 2t^2 and v2(t) = 4e^t(1 - t).

Hence, we have a particular solution

y_p(t) = y1 v1 + y2 v2 = 2t^2 + e^{-t} · 4e^t(1 - t) = 2t^2 - 4t + 4.

Combining the constant term with the homogeneous solution, we write the general solution as

y(t) = c1 + c2 e^{-t} + 2t^2 - 4t.

2. y'' - y' = e^{-t}

The homogeneous solutions to the equation are y1(t) = 1 and y2(t) = e^t.

To find a particular solution of the form y_p(t) = v1 y1 + v2 y2 = v1 + v2 e^t, we solve

v1' + e^t v2' = 0
     e^t v2' = e^{-t}.

This gives v1' = -e^{-t} and v2' = e^{-2t}.

Integrating yields v1(t) = e^{-t} and v2(t) = -(1/2)e^{-2t}.

Hence, we have a particular solution

y_p(t) = y1 v1 + y2 v2 = e^{-t} - (1/2)e^{-2t} e^t = e^{-t} - (1/2)e^{-t} = (1/2)e^{-t}.

The general solution is y(t) = c1 + c2 e^t + (1/2)e^{-t}.

3. y'' - 2y' + y = e^t/t,  (t > 0)

The two linearly independent solutions y1 and y2 of the homogeneous equation are
y1(t) = e^t and y2(t) = te^t.

Using the method of variation of parameters, we seek the particular solution

y_p(t) = v1(t) e^t + v2(t) te^t.

In order for y_p(t) to satisfy the differential equation, v1 and v2 must satisfy

y1 v1' + y2 v2' = e^t v1' + te^t v2' = 0
y1' v1' + y2' v2' = e^t v1' + e^t(t + 1) v2' = e^t/t.

Solving algebraically for v1' and v2', we obtain v1'(t) = -1 and v2' = 1/t.

Integrating gives the values v1(t) = -t and v2 = ln t.

Substituting these values into y_p yields the particular solution

y_p(t) = -te^t + te^t ln t.

Hence, the general solution is y(t) = c1 e^t + c2 te^t + te^t ln t.

4. y'' + y = csc t

The two linearly independent solutions y1 and y2 of the homogeneous equation are
y1(t) = cos t and y2(t) = sin t.

Using the method of variation of parameters, we seek the particular solution

y_p(t) = v1(t) cos t + v2(t) sin t.

In order for y_p(t) to satisfy the differential equation, v1 and v2 must satisfy

(cos t) v1' + (sin t) v2' = 0
(-sin t) v1' + (cos t) v2' = csc t.

Solving algebraically for v1' and v2', we obtain v1'(t) = -1 and v2' = cot t.

Integrating gives the values v1(t) = -t and v2(t) = ln|sin t|.

Substituting these values into y_p yields the particular solution

y_p(t) = -t cos t + sin t ln|sin t|.

Hence, the general solution is y(t) = c1 cos t + c2 sin t - t cos t + sin t ln|sin t|.
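The particular solution in Problem 4 mixes a secular term with a logarithm, so it is worth substituting back. A sketch (sympy is our addition, not the manual's; we take sin t > 0 so that ln|sin t| = ln sin t):

```python
# Verification sketch (not part of the original text): substitute the Problem 4
# particular solution into y'' + y = csc t on an interval where sin t > 0.
import sympy as sp

t = sp.symbols('t', positive=True)
yp = -t*sp.cos(t) + sp.sin(t)*sp.log(sp.sin(t))
residual = sp.simplify(yp.diff(t, 2) + yp - 1/sp.sin(t))
```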


5. y'' + y = sec t tan t

The homogeneous solutions are y1(t) = cos t and y2(t) = sin t.

We seek the solution y_p = (cos t) v1 + (sin t) v2.

We form the system

(cos t) v1' + (sin t) v2' = 0
(-sin t) v1' + (cos t) v2' = sec t tan t.

Solving algebraically for v1' and v2' yields v1'(t) = -tan^2 t and v2' = tan t.

Integrating gives the values v1 = t - tan t and v2 = ln|sec t|.

The particular solution is

y_p = (t - tan t) cos t + sin t ln|sec t| = t cos t - sin t + sin t ln|sec t|.

Thus, the general solution is y(t) = c1 cos t + c2 sin t + t cos t - sin t + sin t ln|sec t|.

6. y'' - 2y' + 2y = e^t sin t

The homogeneous solutions are y1(t) = e^t cos t and y2(t) = e^t sin t.

To find a particular solution of the form y_p(t) = v1 e^t cos t + v2 e^t sin t, we solve the equations

e^t cos t v1' + e^t sin t v2' = 0
(e^t cos t - e^t sin t) v1' + (e^t sin t + e^t cos t) v2' = e^t sin t

for v1' and v2'. This yields v1' = -sin^2 t and v2' = sin t cos t.

Integrating yields the functions

v1(t) = (1/2)(-t + cos t sin t)  and  v2(t) = (1/2) sin^2 t.

Hence, a particular solution is

y_p(t) = y1 v1 + y2 v2 = e^t cos t · (1/2)(cos t sin t - t) + e^t sin t · (1/2) sin^2 t
       = (1/2)e^t sin t - (1/2)te^t cos t

and the general solution is

y(t) = c1 e^t cos t + c2 e^t sin t - (1/2)te^t cos t.


7. y'' - 3y' + 2y = 1/(1 + e^{-t})

The homogeneous solutions are y1(t) = e^t and y2(t) = e^{2t}.

Hence we seek the particular solution y_p = e^t v1 + e^{2t} v2 and form the system

e^t v1' + e^{2t} v2' = 0
e^t v1' + 2e^{2t} v2' = 1/(1 + e^{-t}).

Solving algebraically for v1' and v2' yields

v1' = -e^{-t}/(1 + e^{-t})  and  v2' = e^{-2t}/(1 + e^{-t}).

The first integral is trivial: v1 = ln(1 + e^{-t}).

The second one is more difficult. However, if we perform some algebra, we can write

v2' = e^{-2t}/(1 + e^{-t}) = e^{-t}[(1 + e^{-t}) - 1]/(1 + e^{-t}) = e^{-t} - e^{-t}/(1 + e^{-t}),

which integrates to give v2 = -e^{-t} + ln(1 + e^{-t}).

With v1 and v2 we have the particular solution

y_p = e^t ln(1 + e^{-t}) - e^t + e^{2t} ln(1 + e^{-t})

and the general solution is

y(t) = c1 e^t + c2 e^{2t} + (e^t + e^{2t}) ln(1 + e^{-t}).

(The term -e^t in y_p was absorbed in the homogeneous solution, giving a better form for the
solution.)

8. y'' + 2y' + y = e^{-t} ln t,  (t > 0)

The homogeneous solutions are y1(t) = e^{-t} and y2(t) = te^{-t}.

We seek a particular solution y_p = e^{-t} v1 + te^{-t} v2 and form the system

e^{-t} v1' + te^{-t} v2' = 0
-e^{-t} v1' + (e^{-t} - te^{-t}) v2' = e^{-t} ln t.

Solving algebraically for v1' and v2' yields v1' = -t ln t and v2' = ln t.

Integrating yields

v1 = -(1/2)t^2 ln t + (1/4)t^2  and  v2 = t ln t - t.

Hence, we have a particular solution

y_p = e^{-t}[-(1/2)t^2 ln t + (1/4)t^2] + te^{-t}(t ln t - t) = (1/2)t^2 e^{-t} ln t - (3/4)t^2 e^{-t}.

Thus the general solution is y(t) = c1 e^{-t} + c2 te^{-t} + (1/4)t^2 e^{-t}(2 ln t - 3).


9. y'' + 4y = tan 2t

y_h = c1 cos 2t + c2 sin 2t,  with y1 = cos 2t, y2 = sin 2t (Wronskian W = 2).

v1 = ∫ -(tan 2t)(sin 2t)/2 dt = -(1/2) ∫ sin^2 2t/cos 2t dt = -(1/2) ∫ (1 - cos^2 2t)/cos 2t dt
   = -(1/2) ∫ (sec 2t - cos 2t) dt = -(1/4) ln|sec 2t + tan 2t| + (1/4) sin 2t

v2 = ∫ (tan 2t)(cos 2t)/2 dt = (1/2) ∫ sin 2t dt = -(1/4) cos 2t

So y_p = y1 v1 + y2 v2 = cos 2t [-(1/4) ln|sec 2t + tan 2t| + (1/4) sin 2t] - (1/4) sin 2t cos 2t.

General solution: y(t) = c1 cos 2t + c2 sin 2t + y_p.

10. y'' + 5y' + 6y = cos(e^t)

y_h = c1 e^{-2t} + c2 e^{-3t},  with y1 = e^{-2t}, y2 = e^{-3t} (Wronskian W = -e^{-5t}).

v1 = ∫ -e^{-3t} cos(e^t)/(-e^{-5t}) dt = ∫ e^{2t} cos(e^t) dt = cos(e^t) + e^t sin(e^t)

v2 = ∫ e^{-2t} cos(e^t)/(-e^{-5t}) dt = -∫ e^{3t} cos(e^t) dt = 2 sin(e^t) - e^{2t} sin(e^t) - 2e^t cos(e^t)

So y_p = y1 v1 + y2 v2
       = e^{-2t}[cos(e^t) + e^t sin(e^t)] + e^{-3t}[2 sin(e^t) - e^{2t} sin(e^t) - 2e^t cos(e^t)]
       = 2e^{-3t} sin(e^t) - e^{-2t} cos(e^t)

General solution: y(t) = c1 e^{-2t} + c2 e^{-3t} + y_p.
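The integrals in Problem 10 (substituting u = e^t) are easy to get wrong, so the collapsed form of y_p is worth checking by substitution (sympy is our addition, not the manual's):

```python
# Verification sketch (not part of the original text): substitute the Problem 10
# particular solution into y'' + 5y' + 6y = cos(e^t).
import sympy as sp

t = sp.symbols('t')
yp = 2*sp.exp(-3*t)*sp.sin(sp.exp(t)) - sp.exp(-2*t)*sp.cos(sp.exp(t))
residual = sp.simplify(yp.diff(t, 2) + 5*yp.diff(t) + 6*yp - sp.cos(sp.exp(t)))
```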

11. y'' + y = sec^2 t

y_h = c1 cos t + c2 sin t,  with y1 = cos t, y2 = sin t.

v1 = ∫ (-sec^2 t) sin t dt = -∫ sec t tan t dt = -sec t

v2 = ∫ sec^2 t cos t dt = ∫ sec t dt = ln|sec t + tan t|

So y_p = y1 v1 + y2 v2 = -cos t sec t + sin t ln|sec t + tan t| = -1 + sin t ln|sec t + tan t|.

General solution: y(t) = c1 cos t + c2 sin t - 1 + sin t ln|sec t + tan t|


12. y'' - y = e^t/t

y_h = c1 e^t + c2 e^{-t},  with y1 = e^t, y2 = e^{-t} (Wronskian W = -2).

v1 = (1/2) ∫ e^{-t}(e^t/t) dt = (1/2) ∫ (1/t) dt = (1/2) ln|t|

v2 = -(1/2) ∫ e^t(e^t/t) dt = -(1/2) ∫ (e^{2t}/t) dt = -(1/2) ∫_{t0}^{t} (e^{2s}/s) ds

(the last integral is not elementary, so it is left in integral form). So

y_p = v1 y1 + v2 y2 = (1/2) e^t ln|t| - (1/2) e^{-t} ∫_{t0}^{t} (e^{2s}/s) ds

General solution: y(t) = c1 e^t + c2 e^{-t} + y_p

Variable Coefficients

13. t^2 y'' - 2ty' + 2y = t^3 sin t,  y1(t) = t, y2(t) = t^2

We begin by dividing the equation by t^2, to get the proper form for using variation of parameters:

y'' - (2/t) y' + (2/t^2) y = t sin t.

Substitution verifies that y1 and y2 form a fundamental set of solutions to the associated
homogeneous equation, so y_h = c1 t + c2 t^2.

We seek a particular solution y_p = v1 t + v2 t^2, where v1 and v2 satisfy the conditions

t v1' + t^2 v2' = 0
  v1' + 2t v2' = t sin t.

Solving algebraically for v1' and v2' yields v1' = -t sin t and v2' = sin t.

Integrating yields v1 = t cos t - sin t and v2 = -cos t.

Thus, y_p(t) = t(t cos t - sin t) - t^2 cos t = -t sin t.

Hence, the general solution of this equation is y(t) = c1 t + c2 t^2 - t sin t.

14. t^2 y'' + ty' - 4y = t^2(1 + t^2),  y1(t) = t^2, y2(t) = t^{-2}

We begin by dividing the equation by t^2, to get the proper form for using variation of parameters:

y'' + (1/t) y' - (4/t^2) y = 1 + t^2.

Substitution verifies that y1 and y2 form a fundamental set of solutions to the associated
homogeneous equation, so y_h = c1 t^2 + c2 t^{-2}.

We seek a particular solution y_p = v1 t^2 + v2 t^{-2}, where v1 and v2 satisfy the conditions

t^2 v1' + t^{-2} v2' = 0
2t v1' - 2t^{-3} v2' = 1 + t^2.

Solving algebraically for v1' and v2' yields

v1' = (1 + t^2)/(4t)  and  v2' = -t^3(1 + t^2)/4.

Integrating yields

v1 = (1/4) ln t + (1/8) t^2  and  v2 = -(1/16) t^4 - (1/24) t^6.

Thus, y_p(t) = (1/4) t^2 ln t + (1/8) t^4 - (1/16) t^2 - (1/24) t^4.

Hence, the general solution of this equation is

y(t) = c1 t^2 + c2 t^{-2} + (1/4) t^2 ln t + (1/12) t^4.

(Notice that the term -(1/16)t^2 in y_p can be absorbed in the homogeneous solution.)

15. (1 - t) y'' + ty' - y = 2(t - 1)^2 e^{-t},  y1(t) = t, y2(t) = e^t

We begin by dividing the equation by (1 - t), to get the proper form for variation of parameters:

y'' + t/(1 - t) y' - 1/(1 - t) y = -2(t - 1) e^{-t}.

Substitution verifies that y1 and y2 form a fundamental set of solutions to the associated
homogeneous equation, so y_h = c1 t + c2 e^t.

We seek a particular solution y_p = v1 t + v2 e^t, where v1 and v2 satisfy the conditions

t v1' + e^t v2' = 0
  v1' + e^t v2' = -2(t - 1) e^{-t}.

Solving algebraically for v1' and v2' yields v1' = 2e^{-t} and v2' = -2te^{-2t}.

Integrating yields v1 = -2e^{-t} and v2 = e^{-2t}(t + 1/2).

Thus, y_p(t) = -2te^{-t} + e^t e^{-2t}(t + 1/2) = e^{-t}(1/2 - t).

Hence, the general solution of this equation is y(t) = c1 t + c2 e^t + e^{-t}(1/2 - t).


16. y'' + (1/t) y' + (1 - 1/(4t^2)) y = t^{-1/2},  y1(t) = t^{-1/2} sin t, y2(t) = t^{-1/2} cos t

Substitution verifies that y1 and y2 form a fundamental set of solutions to the associated
homogeneous equation, so y_h = c1 t^{-1/2} sin t + c2 t^{-1/2} cos t.

We seek a particular solution y_p = v1 t^{-1/2} sin t + v2 t^{-1/2} cos t, where v1 and v2 satisfy

t^{-1/2} sin t v1' + t^{-1/2} cos t v2' = 0
[t^{-1/2} cos t - (1/2) t^{-3/2} sin t] v1' + [-t^{-1/2} sin t - (1/2) t^{-3/2} cos t] v2' = t^{-1/2}.

Multiplying through by t^{1/2} and then solving for v1' and v2' gives

v1' = cos t  and  v2' = -sin t,

so v1 = sin t and v2 = cos t.

Thus, y_p(t) = t^{-1/2}(sin^2 t + cos^2 t) = t^{-1/2}, and the general solution of this equation is

y(t) = y_h + y_p = c1 t^{-1/2} sin t + c2 t^{-1/2} cos t + t^{-1/2}.

Third-Order Theory

17. L(y) = y''' + p(t)y'' + q(t)y' + r(t)y = f(t)

Given y_h(t) = c1 y1 + c2 y2 + c3 y3, we seek y_p(t) = v1 y1 + v2 y2 + v3 y3.

Differentiating yields

    y_p' = v1 y1' + v1' y1 + v2 y2' + v2' y2 + v3 y3' + v3' y3
         = v1 y1' + v2 y2' + v3 y3'   (if we set y1 v1' + y2 v2' + y3 v3' = 0).

Differentiating again,

    y_p'' = v1 y1'' + v1' y1' + v2 y2'' + v2' y2' + v3 y3'' + v3' y3'
          = v1 y1'' + v2 y2'' + v3 y3''   (if now we set y1' v1' + y2' v2' + y3' v3' = 0).

Differentiating yet again:

    y_p''' = v1 y1''' + v1' y1'' + v2 y2''' + v2' y2'' + v3 y3''' + v3' y3''.

Substituting y_p, y_p', y_p'' and y_p''' into L(y) = f(t), then regrouping all terms in v1, v2, and v3, we see that the coefficient of each is 0 because each y_i is a solution of L(y_i) = 0. Thus we are left with

    y1'' v1' + y2'' v2' + y3'' v3' = f.

This last equation, together with the two assumptions (in parentheses) that we made while differentiating, gives a system to solve for v1', v2', v3':

    y1 v1' + y2 v2' + y3 v3' = 0
    y1' v1' + y2' v2' + y3' v3' = 0
    y1'' v1' + y2'' v2' + y3'' v3' = f.

SECTION 4.5 Variation of Parameters 403

We use Cramer's Rule to solve the system, then integrate to find v1, v2, v3 and hence obtain a particular solution

    y_p(t) = v1 y1 + v2 y2 + v3 y3.

Third-Order DEs

18. y''' - 2y'' - y' + 2y = e^t

The characteristic equation is (λ - 1)(λ + 1)(λ - 2) = 0, with roots 1, -1, and 2. The fundamental set is y1 = e^t, y2 = e^(-t), and y3 = e^(2t). Hence,

    y_h = c1 e^t + c2 e^(-t) + c3 e^(2t).

By variation of parameters, we seek y_p(t) = v1 e^t + v2 e^(-t) + v3 e^(2t), as in Problem 17. Hence the system to solve is

    e^t v1' + e^(-t) v2' + e^(2t) v3' = 0
    e^t v1' - e^(-t) v2' + 2e^(2t) v3' = 0
    e^t v1' + e^(-t) v2' + 4e^(2t) v3' = e^t.

Using Cramer's rule and computing the determinants yields:

    W = det[e^t, e^(-t), e^(2t); e^t, -e^(-t), 2e^(2t); e^t, e^(-t), 4e^(2t)] = -6e^(2t);

    v1' = 3e^(2t) / (-6e^(2t)) = -1/2,
    v2' = -e^(4t) / (-6e^(2t)) = (1/6) e^(2t),
    v3' = -2e^t / (-6e^(2t)) = (1/3) e^(-t).

Hence,

    v1 = -t/2,  v2 = (1/12) e^(2t),  v3 = -(1/3) e^(-t).

We get a particular solution of

    y_p(t) = v1 e^t + v2 e^(-t) + v3 e^(2t) = -(1/2) t e^t + (1/12) e^t - (1/3) e^t = -(1/2) t e^t - (1/4) e^t

and the general solution is

    y(t) = c1 e^t + c2 e^(-t) + c3 e^(2t) - (1/2) t e^t - (1/4) e^t.
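As a cross-check (not part of the printed solution), the particular solution found above can be verified symbolically:

```python
import sympy as sp

t = sp.symbols('t')

# Check of Problem 18: the particular solution from variation of parameters
# should satisfy  y''' - 2y'' - y' + 2y = e^t.
yp = -sp.Rational(1, 2)*t*sp.exp(t) - sp.Rational(1, 4)*sp.exp(t)
residual = sp.simplify(
    sp.diff(yp, t, 3) - 2*sp.diff(yp, t, 2) - sp.diff(yp, t)
    + 2*yp - sp.exp(t))
print(residual)  # 0
```

The residual is identically zero, so y_p satisfies the forced equation.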

404 CHAPTER 4 Higher-Order Linear Differential Equations

19. y''' + y' = sec t

Find y_h: r^3 + r = 0, r(r^2 + 1) = 0, r = 0, ±i

    y_h = c1 + c2 cos t + c3 sin t

    y_p = v1 + v2 cos t + v3 sin t

    W = det[1, cos t, sin t; 0, -sin t, cos t; 0, -cos t, -sin t] = sin^2 t + cos^2 t = 1

    v1' = det[0, cos t, sin t; 0, -sin t, cos t; sec t, -cos t, -sin t] / 1 = sec t (cos^2 t + sin^2 t) = sec t
    v1 = ln|sec t + tan t|

    v2' = det[1, 0, sin t; 0, 0, cos t; 0, sec t, -sin t] / 1 = -cos t sec t = -1
    v2 = -t

    v3' = det[1, cos t, 0; 0, -sin t, 0; 0, -cos t, sec t] / 1 = -sin t sec t = -tan t
    v3 = ln|cos t|

    y(t) = c1 + c2 cos t + c3 sin t + ln|sec t + tan t| - t cos t + sin t ln|cos t|
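A numerical spot check (not part of the printed solution) of the particular part of this answer:

```python
import sympy as sp

t = sp.symbols('t')

# Spot check of Problem 19: the particular part of the solution to
# y''' + y' = sec t, evaluated numerically at a few sample points
# (all inside (0, pi/2), where sec and log(cos) are well defined).
yp = (sp.log(sp.sec(t) + sp.tan(t)) - t*sp.cos(t)
      + sp.sin(t)*sp.log(sp.cos(t)))
residual = sp.diff(yp, t, 3) + sp.diff(yp, t) - sp.sec(t)
for val in (0.3, 0.7, 1.2):
    print(float(residual.subs(t, val)))  # each ~0 up to roundoff
```

The residual is numerically zero at each sample point, consistent with y_p solving the equation on that interval.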

20. y''' + 9y' = tan 3t

Find y_h: r^3 + 9r = 0, r(r^2 + 9) = 0, r = 0, ±3i

    y_h = c1 + c2 cos 3t + c3 sin 3t

    y_p = v1 + v2 cos 3t + v3 sin 3t

    W = det[1, cos 3t, sin 3t; 0, -3 sin 3t, 3 cos 3t; 0, -9 cos 3t, -9 sin 3t] = 27 sin^2 3t + 27 cos^2 3t = 27

    v1' = det[0, cos 3t, sin 3t; 0, -3 sin 3t, 3 cos 3t; tan 3t, -9 cos 3t, -9 sin 3t] / 27
        = (tan 3t)(3 cos^2 3t + 3 sin^2 3t)/27 = (tan 3t)/9
    v1 = -(1/27) ln|cos 3t|

    v2' = det[1, 0, sin 3t; 0, 0, 3 cos 3t; 0, tan 3t, -9 sin 3t] / 27 = -(3 cos 3t tan 3t)/27 = -(1/9) sin 3t
    v2 = (1/27) cos 3t

    v3' = det[1, cos 3t, 0; 0, -3 sin 3t, 0; 0, -9 cos 3t, tan 3t] / 27 = -(3 sin 3t tan 3t)/27 = -(1/9) sin^2 3t / cos 3t
    v3 = -(1/9) ∫ (1 - cos^2 3t)/(cos 3t) dt = -(1/27)(ln|sec 3t + tan 3t| - sin 3t)

    y = c1 + c2 cos 3t + c3 sin 3t - (1/27) ln|cos 3t| + (1/27) cos^2 3t - (sin 3t/27)(ln|sec 3t + tan 3t| - sin 3t)

Method Choice

21. y''' - y' = f(t)

We first find the homogeneous solution. The characteristic equation λ(λ^2 - 1) = 0 has roots 0, ±1, so the homogeneous solution is

    y_h = c1 + c2 e^t + c3 e^(-t).

(a) y''' - y' = 2e^(-t). Because e^(-t) is in y_h, we must try y_p = a t e^(-t). The method of undetermined coefficients is straightforward and gives a = 1, so y_p = t e^(-t) and the general solution can be written

    y(t) = c1 + c2 e^t + c3 e^(-t) + t e^(-t).

(b) y''' - y' = sin^2 t. We cannot use undetermined coefficients on sin^2 t, so we use variation of parameters to seek a particular solution of the form y_p(t) = v1 + v2 e^t + v3 e^(-t), with the derivatives of v1, v2, and v3 determined from the equations

    1·v1' + e^t v2' + e^(-t) v3' = 0
    0·v1' + e^t v2' - e^(-t) v3' = 0
    0·v1' + e^t v2' + e^(-t) v3' = sin^2 t.

Using Cramer's rule (as outlined in Problem 18), we obtain

    v1' = -sin^2 t,  v2' = (1/2) e^(-t) sin^2 t,  v3' = (1/2) e^t sin^2 t.

The antiderivative of v1' is easy to find; the other two must be left as integrals:

    v1 = (1/2)(sin t cos t - t),  v2 = (1/2) ∫ e^(-t) sin^2 t dt,  v3 = (1/2) ∫ e^t sin^2 t dt.

Hence, the general solution is

    y(t) = c1 + c2 e^t + c3 e^(-t) + (1/2)(sin t cos t - t) + (1/2) e^t ∫ e^(-t) sin^2 t dt + (1/2) e^(-t) ∫ e^t sin^2 t dt.

(c) y''' - y' = tan t. As in part (b), we must use variation of parameters to find y_p, with

    1·v1' + e^t v2' + e^(-t) v3' = 0
    0·v1' + e^t v2' - e^(-t) v3' = 0
    0·v1' + e^t v2' + e^(-t) v3' = tan t.

Using Cramer's rule (as outlined in Problem 18) to solve these equations we find

    v1' = -tan t,  v2' = (1/2) e^(-t) tan t,  v3' = (1/2) e^t tan t.

The antiderivative of v1' is easy to find; the other two must be left as integrals:

    v1 = ln|cos t|,  v2 = (1/2) ∫ e^(-t) tan t dt,  v3 = (1/2) ∫ e^t tan t dt.

Hence, the general solution is

    y(t) = c1 + c2 e^t + c3 e^(-t) + ln|cos t| + (1/2) e^t ∫ e^(-t) tan t dt + (1/2) e^(-t) ∫ e^t tan t dt.

Parts (b) and (c) demonstrate the power of graphical methods because the algebraic expressions for y(t) are pretty meaningless. It is easier and more informative to use DE software to approximate solutions of this equation in ty space than it is to pursue the analytical formula for the solution. The figures show curves for several initial conditions to show the variety that can occur. For any IVP there would be only one solution.

Note: We used a 3D graphic DE solver with the following equations for y''' - y' = f(t):

    y' = x
    y'' = x' = z,  relisted as  x' = z
    y''' = z' = x + f(t),  relisted as  z' = x + f(t)

(b) f(t) = sin^2 t. The expression for y(t) on the previous page can be further evaluated using the identity

    sin^2 t = (1/2)(1 - cos 2t),

but solution behavior is more easily seen on a graph of y(t).

(c) f(t) = tan t. The expression for y(t) on the previous page is even more complicated than that for part (b); again, solution behavior is more readily understood with a graph of y(t).

Green's Function Representation

22. y'' + y = f(t)

We know that y1 = cos t and y2 = sin t are the solutions of the corresponding homogeneous equation. Their Wronskian is

    W(y1, y2)(t) = det[cos t, sin t; -sin t, cos t] = 1,

which makes it easy to use the suggested variation of parameters formulas

    v1'(t) = -y2(t) f(t) / W(y1, y2)(t) = -sin(t) f(t),
    v2'(t) = y1(t) f(t) / W(y1, y2)(t) = cos(t) f(t).

Integrating yields

    v1 = -∫_0^t sin(s) f(s) ds,  v2 = ∫_0^t cos(s) f(s) ds.

Hence,

    y_p(t) = y1 v1 + y2 v2
           = -cos(t) ∫_0^t sin(s) f(s) ds + sin(t) ∫_0^t cos(s) f(s) ds
           = ∫_0^t [cos(s) sin(t) - sin(s) cos(t)] f(s) ds
           = ∫_0^t sin(t - s) f(s) ds.
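The kernel formula can be illustrated with a concrete sample forcing (a check added here, not part of the printed solution); the choice f(t) = t is arbitrary:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Illustration of the Green's function formula from Problem 22: for the
# sample forcing f(t) = t, the representation y(t) = ∫_0^t sin(t-s) f(s) ds
# should produce a particular solution of y'' + y = f(t).
f = s  # sample forcing, f(s) = s
y = sp.integrate(sp.sin(t - s)*f, (s, 0, t))
print(sp.simplify(y))                      # t - sin(t)
print(sp.simplify(sp.diff(y, t, 2) + y))   # t, i.e. f(t)
```

The integral evaluates to t - sin t, and substituting back gives y'' + y = t, as the formula promises.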

Green Variation

23. The homogeneous solutions are y1 = e^t and y2 = e^(-t). We seek a particular solution of the form y_p = v1 y1 + v2 y2, where v1' and v2' satisfy

    e^t v1' + e^(-t) v2' = 0
    e^t v1' - e^(-t) v2' = f(t).

Adding and subtracting the equations and solving yields

    v1' = (1/2) e^(-t) f(t),  v2' = -(1/2) e^t f(t).

Integrating gives

    v1 = (1/2) ∫_0^t e^(-s) f(s) ds,  v2 = -(1/2) ∫_0^t e^s f(s) ds.

Hence,

    y_p = v1 y1 + v2 y2
        = (1/2) e^t ∫_0^t e^(-s) f(s) ds - (1/2) e^(-t) ∫_0^t e^s f(s) ds
        = ∫_0^t [(e^(t-s) - e^(-(t-s)))/2] f(s) ds
        = ∫_0^t sinh(t - s) f(s) ds.
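The sinh kernel can be checked the same way (a check added here, not part of the printed solution), using the simplest forcing f(t) = 1:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Illustration of Problem 23: with constant forcing f(t) = 1, the kernel
# formula y(t) = ∫_0^t sinh(t-s) f(s) ds should satisfy y'' - y = 1.
y = sp.integrate(sp.sinh(t - s), (s, 0, t))
print(sp.simplify(y))                     # cosh(t) - 1
print(sp.simplify(sp.diff(y, t, 2) - y))  # 1
```

Here y = cosh t - 1, and indeed y'' - y = cosh t - (cosh t - 1) = 1.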

Green's Follow-Up

24. From the Leibniz Rule in multivariable calculus we have the following result: for a continuous function g(t, s),

    d/dt ∫_0^t g(t, s) ds = lim_{r→t} g(r, r) + ∫_0^t ∂g/∂t (t, s) ds.

In Problem 22, the solution of the equation y'' + y = f(t) is

    y(t) = ∫_0^t sin(t - s) f(s) ds.

Differentiating yields

    y'(t) = sin(t - t) f(t) + ∫_0^t cos(t - s) f(s) ds = ∫_0^t cos(t - s) f(s) ds

and

    y''(t) = cos(t - t) f(t) - ∫_0^t sin(t - s) f(s) ds = f(t) - ∫_0^t sin(t - s) f(s) ds.

Hence,

    y'' + y = f(t) - ∫_0^t sin(t - s) f(s) ds + ∫_0^t sin(t - s) f(s) ds = f(t).

Suggested Journal Entry I

25. Student Project

Suggested Journal Entry II

26. Student Project

SECTION 4.6 Forced Oscillations 409

4.6
Forced Oscillations

Mass-Spring Problems

1. x'' + 2x' + x = 6 cos t

Find x_h: r^2 + 2r + 1 = 0 ⇒ (r + 1)^2 = 0 ⇒ r = -1

    x_h = c1 e^(-t) + c2 t e^(-t).

Find x_p: x_p = A cos t + B sin t, x_p' = -A sin t + B cos t, x_p'' = -A cos t - B sin t

    x_p'' + 2x_p' + x_p = (-A cos t - B sin t) + 2(B cos t - A sin t) + (A cos t + B sin t)
                        = 2B cos t - 2A sin t = 6 cos t

⇒ 2B = 6, -2A = 0 ⇒ A = 0, B = 3

    x_p = 3 sin t

    x(t) = x_h + x_p = c1 e^(-t) + c2 t e^(-t) + 3 sin t

    x_ss = 3 sin t = 3 cos(t - π/2)

Amplitude C = 3; phase shift δ/β = π/2 radians

2. x'' + 2x' + 3x = cos 3t

Find x_h: r^2 + 2r + 3 = 0 ⇒ r = -1 ± √2 i

    x_h = e^(-t)(c1 cos √2 t + c2 sin √2 t)

Find x_p: x_p = A cos 3t + B sin 3t, x_p' = -3A sin 3t + 3B cos 3t, x_p'' = -9A cos 3t - 9B sin 3t

    x_p'' + 2x_p' + 3x_p = (-9A cos 3t - 9B sin 3t) + 2(3B cos 3t - 3A sin 3t) + 3(A cos 3t + B sin 3t)
                         = (-6A + 6B) cos 3t + (-6A - 6B) sin 3t = cos 3t

⇒ -6A + 6B = 1, -6A - 6B = 0

⇒ A = -1/12, B = 1/12, so

    x_p = -(1/12) cos 3t + (1/12) sin 3t

    x(t) = x_h + x_p = e^(-t)(c1 cos √2 t + c2 sin √2 t) - (1/12) cos 3t + (1/12) sin 3t

    x_ss = -(1/12) cos 3t + (1/12) sin 3t = (√2/12) cos(3t - 3π/4)

Amplitude C = √2/12; phase shift δ/β = π/4 radians

3. 2x'' + 3x = 4 cos 8t

Find x_h: 2r^2 + 3 = 0 ⇒ r^2 = -3/2 ⇒ r = ±√(3/2) i

⇒ x_h = c1 cos √(3/2) t + c2 sin √(3/2) t

Find x_p: x_p = A cos 8t, x_p' = -8A sin 8t, x_p'' = -64A cos 8t

    2x_p'' + 3x_p = 2(-64A cos 8t) + 3(A cos 8t) = -125A cos 8t = 4 cos 8t

⇒ -125A = 4

⇒ A = -4/125

⇒ x_p = -(4/125) cos 8t

    x(t) = x_h + x_p = c1 cos √(3/2) t + c2 sin √(3/2) t - (4/125) cos 8t

    x_ss = -(4/125) cos 8t = (4/125) cos(8t - π)

Amplitude C = 4/125; phase shift δ/β = π/8 radians

4. 2x'' + 2x' + (1/2)x = (5/2) cos t

Find x_h: 2r^2 + 2r + 1/2 = 0 ⇒ (2r + 1)^2 = 0 ⇒ r = -1/2

    x_h = c1 e^(-t/2) + c2 t e^(-t/2)

Find x_p: x_p = A cos t + B sin t, x_p' = -A sin t + B cos t, x_p'' = -A cos t - B sin t

    2x_p'' + 2x_p' + (1/2)x_p = 2(-A cos t - B sin t) + 2(B cos t - A sin t) + (1/2)(A cos t + B sin t)
                              = (-(3/2)A + 2B) cos t + (-2A - (3/2)B) sin t = (5/2) cos t

⇒ -(3/2)A + 2B = 5/2, -2A - (3/2)B = 0

⇒ A = -3/5, B = 4/5

    x_p = -(3/5) cos t + (4/5) sin t

    x(t) = x_h + x_p = c1 e^(-t/2) + c2 t e^(-t/2) - (3/5) cos t + (4/5) sin t

    x_ss = -(3/5) cos t + (4/5) sin t = cos(t - 2.2)

Amplitude C = 1; phase shift δ/β ≈ 2.2 radians

5. x'' + 2x' + 2x = 2 cos t

Find x_h: r^2 + 2r + 2 = 0 ⇒ r = -1 ± i

    x_h = e^(-t)(c1 cos t + c2 sin t)

Find x_p: x_p = A cos t + B sin t, x_p' = -A sin t + B cos t, x_p'' = -A cos t - B sin t

    x_p'' + 2x_p' + 2x_p = (-A cos t - B sin t) + 2(B cos t - A sin t) + 2(A cos t + B sin t)
                         = (A + 2B) cos t + (-2A + B) sin t = 2 cos t

⇒ A + 2B = 2, -2A + B = 0

⇒ A = 2/5, B = 4/5

    x_p = (2/5) cos t + (4/5) sin t

    x(t) = x_h + x_p = e^(-t)(c1 cos t + c2 sin t) + (2/5) cos t + (4/5) sin t

    x_ss = (2/5) cos t + (4/5) sin t = (2/√5) cos(t - 1.1)

Amplitude C = 2/√5; phase shift δ/β ≈ 1.1 radians

6. x'' + 4x' + 5x = 2 cos 2t

Find x_h: r^2 + 4r + 5 = 0 ⇒ r = -2 ± i

    x_h = e^(-2t)(c1 cos t + c2 sin t)

Find x_p: x_p = A cos 2t + B sin 2t, x_p' = -2A sin 2t + 2B cos 2t, x_p'' = -4A cos 2t - 4B sin 2t

    x_p'' + 4x_p' + 5x_p = (-4A cos 2t - 4B sin 2t) + 4(2B cos 2t - 2A sin 2t) + 5(A cos 2t + B sin 2t)
                         = (A + 8B) cos 2t + (-8A + B) sin 2t = 2 cos 2t

⇒ A + 8B = 2, -8A + B = 0

⇒ A = 2/65, B = 16/65

    x_p = (2/65) cos 2t + (16/65) sin 2t

    x(t) = x_h + x_p = e^(-2t)(c1 cos t + c2 sin t) + (2/65) cos 2t + (16/65) sin 2t

    x_ss = (2/65) cos 2t + (16/65) sin 2t = (2/√65) cos(2t - 1.4)

Amplitude C = 2/√65; phase shift δ/β ≈ 0.73 radians

Pushing Up

7. m = 1/4, b = 2.5, k = 6

    (1/4)x'' + 2.5x' + 6x = 2 cos 2t

The IVP is x'' + 10x' + 24x = 8 cos 2t, x(0) = -2, x'(0) = 0.

Find x_h: r^2 + 10r + 24 = 0

    (r + 4)(r + 6) = 0, r = -4, -6

    x_h = c1 e^(-4t) + c2 e^(-6t)

Find x_p: x_p = A cos 2t + B sin 2t

    x_p' = -2A sin 2t + 2B cos 2t
    x_p'' = -4A cos 2t - 4B sin 2t

    x_p'' + 10x_p' + 24x_p = -4A cos 2t - 4B sin 2t + 10(-2A sin 2t + 2B cos 2t) + 24(A cos 2t + B sin 2t)

coeff. of cos 2t: -4A + 24A + 20B = 8 ⇒ 20A + 20B = 8
coeff. of sin 2t: -4B + 24B - 20A = 0 ⇒ -20A + 20B = 0

∴ A = B = 1/5

    x_p = (1/5) cos 2t + (1/5) sin 2t

Therefore

    x(t) = x_h + x_p = c1 e^(-4t) + c2 e^(-6t) + (1/5) cos 2t + (1/5) sin 2t
    x'(t) = -4c1 e^(-4t) - 6c2 e^(-6t) - (2/5) sin 2t + (2/5) cos 2t

Substituting initial conditions gives

    x(0) = -2 = c1 + c2 + 1/5
    x'(0) = 0 = -4c1 - 6c2 + 2/5

⇒ c1 = -34/5, c2 = 23/5

Thus x(t) = -(34/5) e^(-4t) + (23/5) e^(-6t) + (1/5) cos 2t + (1/5) sin 2t.
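As a cross-check (not part of the printed solution), the claimed IVP solution for Problem 7 can be verified symbolically:

```python
import sympy as sp

t = sp.symbols('t')

# Check of Problem 7: the claimed solution of
#   x'' + 10x' + 24x = 8 cos 2t,  x(0) = -2, x'(0) = 0.
x = (-sp.Rational(34, 5)*sp.exp(-4*t) + sp.Rational(23, 5)*sp.exp(-6*t)
     + sp.Rational(1, 5)*sp.cos(2*t) + sp.Rational(1, 5)*sp.sin(2*t))
ode_residual = sp.simplify(
    sp.diff(x, t, 2) + 10*sp.diff(x, t) + 24*x - 8*sp.cos(2*t))
print(ode_residual)              # 0
print(x.subs(t, 0))              # -2
print(sp.diff(x, t).subs(t, 0))  # 0
```

The residual vanishes and both initial conditions check out.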

Pulling Down

8. m = 16/32 = 1/2, b = 6, k = 16

    (1/2)x'' + 6x' + 16x = 4 cos 4t

The IVP is x'' + 12x' + 32x = 8 cos 4t, x(0) = 1, x'(0) = 0.

Find x_h: r^2 + 12r + 32 = 0

    (r + 4)(r + 8) = 0, r = -4, -8

    x_h = c1 e^(-4t) + c2 e^(-8t)

Find x_p: x_p = A cos 4t + B sin 4t

    x_p' = -4A sin 4t + 4B cos 4t
    x_p'' = -16A cos 4t - 16B sin 4t

    x_p'' + 12x_p' + 32x_p = -16A cos 4t - 16B sin 4t + 12(-4A sin 4t + 4B cos 4t) + 32(A cos 4t + B sin 4t)
                           = 8 cos 4t

Coeff. of cos 4t: -16A + 32A + 48B = 8 ⇒ 16A + 48B = 8
Coeff. of sin 4t: -16B + 32B - 48A = 0 ⇒ -48A + 16B = 0

∴ A = 1/20, B = 3/20

    x_p = (1/20) cos 4t + (3/20) sin 4t

Therefore

    x(t) = x_h + x_p = c1 e^(-4t) + c2 e^(-8t) + (1/20) cos 4t + (3/20) sin 4t
    x'(t) = -4c1 e^(-4t) - 8c2 e^(-8t) - (1/5) sin 4t + (3/5) cos 4t

Substituting initial conditions,

    1 = c1 + c2 + 1/20
    0 = -4c1 - 8c2 + 3/5

that is, c1 + c2 = 19/20 and c1 + 2c2 = 3/20, so

⇒ c1 = 7/4, c2 = -4/5

Thus x(t) = (7/4) e^(-4t) - (4/5) e^(-8t) + (1/20) cos 4t + (3/20) sin 4t.

Mass-Spring Again

9. (a) The mass is m = 100 kg; the gravitational force (weight) acting on the spring is mg = 100(9.8) = 980 newtons. Because the weight stretches the spring by 20 cm = 0.2 m, we have

    k = 980/0.20 = 4900 nt/m.

(b) The initial-value problem for this mass is

    x'' + 49x = 0, x(0) = 0.40, x'(0) = 0.

Solving, we write the solution in polar form

    x(t) = C cos(ω₀ t - δ) = C cos(7t - δ),

where the circular frequency is ω₀ = 7 radians per second. Using the initial conditions gives

    x(0) = C cos δ = 0.4
    x'(0) = -7C sin δ = 0

or δ = 0, C = 0.4. Hence,

    x(t) = 0.4 cos 7t.

(c) Amplitude: C = 0.4 meter; period: T = 2π/7 seconds.

(d) If b = 500, then

    b^2 - 4mk = 250,000 - 4(100)(4900) < 0.

The system is underdamped.

(e) 100x'' + 500x' + 4900x = 0 has characteristic equation

    r^2 + 5r + 49 = 0,

which has roots r_{1,2} = -5/2 ± (1/2)√171 i. Hence, the general solution is

    x(t) = e^(-5t/2) [c1 cos(√171 t/2) + c2 sin(√171 t/2)].

Using the initial conditions x(0) = 0.4, x'(0) = 0 gives

    x(0) = c1 = 0.4
    x'(0) = (√171/2) c2 - (5/2) c1 = 0,

which implies c1 = 0.4, c2 = 2√171/171. Hence, the solution is

    x(t) = e^(-5t/2) [0.4 cos(√171 t/2) + (2√171/171) sin(√171 t/2)].

Adding Forcing

10. We change the unforced equation in Problem 9 to the forced equation

    100x'' + 500x' + 4900x = 100 cos ω_f t.

(a) b^2 - 4mk = (500)^2 - 4(100)(4900) < 0, so the system is underdamped. From Equation (21) in the text, the amplitude is a maximum when the forcing frequency is

    ω_f = √(k/m - b^2/(2m^2)) = √(4900/100 - 500^2/(2·100^2)) ≈ 6.04 rad/sec.

(b) Given ω_f = 7, we have seen that ω₀ = 7, m = 100, b = 500, and hence tan δ becomes infinite, so that δ = π/2 radians. Hence, by Equation (17),

    x_ss(t) = F₀ / √(m^2(ω₀^2 - ω_f^2)^2 + b^2 ω_f^2) · cos(ω_f t - δ)
            = 100 / √(100^2(49 - 49)^2 + 500^2·7^2) · cos(7t - π/2) ≈ 0.029 cos(7t - π/2).

See the graph for the solution of the IVP problem. You can see that the steady state appears to be

    x_ss(t) ≈ 0.029 cos(7t - π/2).

(c) The undamped equation is

    100x'' + 4900x = 100 cos 7t.

The particular solution now has the form

    x_ss(t) = Ct cos(7t - δ)   or   x_ss(t) = t(A cos 7t + B sin 7t).

Electric Analog

11. From Problem 10 we found the differential equation for the mechanical system to be

    100x'' + 500x' + 4900x = 100 cos ω_f t

or

    x'' + 5x' + 49x = cos ω_f t.

So if R = 4 ohms, then the equivalent electrical equation (making one equation a constant multiple of the other equation)

    LQ'' + RQ' + (1/C)Q = V₀ cos ω_f t

would be:

    (4/5)Q'' + 4Q' + (4/5)(49)Q = (4/5) cos ω_f t.

This means we have

    L = 0.80 henries
    1/C = (4/5)(49) = 39.2 ⇒ C = 1/39.2 ≈ 0.025 farads
    V(t) = 0.80 cos ω_f t.

Damped Forced Motion I

12. x'' + 8x' + 36x = 72 cos 6t

The characteristic equation has roots -4 ± 2√5 i, hence in the long run the homogeneous solution always decays to zero. We are only interested in a particular solution, and in this case that solution is

    x = A cos 6t + B sin 6t.

Differentiating and substituting into the differential equation gives

    A = 0, B = 3/2.

Hence, the steady-state solution is given by

    x_ss(t) = (3/2) sin 6t.

The graph of the steady-state solution is shown.

Damped Forced Motion II

13. The initial-value problem

    x'' + 4x' + 20x = 20 cos 2t, x(0) = x'(0) = 0,

has

    x_h(t) = e^(-2t)(c1 cos 4t + c2 sin 4t)

and

    x_p(t) = A cos 2t + B sin 2t.

Substituting x_p into the differential equation we find A = 1, B = 1/2, so

    x_p = cos 2t + (1/2) sin 2t.

Substituting the general solution into the initial conditions x(0) = x'(0) = 0, we find

    x(t) = e^(-2t)(-cos 4t - (3/4) sin 4t) + cos 2t + (1/2) sin 2t.

The steady-state portion of the solution is shown. (See figure.)

Calculating Charge

14. 4Q'' + 100Q = 10 cos 4t, Q(0) = 0, Q'(0) = 0

Find Q_h: 4Q'' + 100Q = 0 ⇒ Q'' + 25Q = 0 ⇒ Q_h = c1 cos 5t + c2 sin 5t

Find Q_p: Q_p has the form A cos 4t + B sin 4t.

Substitution in 4Q_p'' + 100Q_p = 10 cos 4t leads to A = 5/18, B = 0 and so Q_p = (5/18) cos 4t.

Thus Q = Q_h + Q_p = c1 cos 5t + c2 sin 5t + (5/18) cos 4t.

The initial conditions Q(0) = 0 and Q'(0) = 0 give us c1 = -5/18, c2 = 0,

and the solution of the IVP is Q(t) = -(5/18) cos 5t + (5/18) cos 4t.
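A symbolic cross-check of this IVP solution (added here, not part of the printed solution):

```python
import sympy as sp

t = sp.symbols('t')

# Check of Problem 14: Q(t) should satisfy 4Q'' + 100Q = 10 cos 4t
# together with Q(0) = 0 and Q'(0) = 0.
Q = -sp.Rational(5, 18)*sp.cos(5*t) + sp.Rational(5, 18)*sp.cos(4*t)
ode_residual = sp.simplify(4*sp.diff(Q, t, 2) + 100*Q - 10*sp.cos(4*t))
print(ode_residual)                            # 0
print(Q.subs(t, 0), sp.diff(Q, t).subs(t, 0))  # 0 0
```

Both the equation and the initial conditions are satisfied exactly.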

Charge and Current

15. Q'' + 12Q' + 100Q = 12 cos 10t, Q(0) = 0, Q'(0) = 0

(a) Find Q_h: r^2 + 12r + 100 = 0 ⇒ r = -6 ± 8i ⇒ Q_h = e^(-6t)(c1 cos 8t + c2 sin 8t).

Find Q_p: Q_p has the form A cos 10t + B sin 10t.

Substitution in Q_p'' + 12Q_p' + 100Q_p = 12 cos 10t leads to A = 0, B = 1/10 and so Q_p = (1/10) sin 10t.

Thus Q = Q_h + Q_p = e^(-6t)(c1 cos 8t + c2 sin 8t) + (1/10) sin 10t.

The initial conditions Q(0) = 0 and Q'(0) = 0 give us c1 = 0, c2 = -1/8,

and the solution of the IVP is Q(t) = -(1/8) e^(-6t) sin 8t + (1/10) sin 10t.

(b) I(t) = Q'(t) = e^(-6t) ((3/4) sin 8t - cos 8t) + cos 10t.

True/False Questions

16. True

The steady-state solution, being a particular solution, has the form A cos ω_f t + B sin ω_f t, where the forcing function is F₀ cos ω_f t. The steady-state solution can be written in the form

    x_ss = C cos(ω_f t - δ).

Hence, the frequency of the steady-state is the same as that of the forcing function.

17. False

The amplitude of the steady-state is a function of the frequency of the forcing function. In fact,

    A(ω_f) = F₀ / √(m^2(ω₀^2 - ω_f^2)^2 + b^2 ω_f^2).

We can see that A(ω_f) → 0 as ω_f → ∞ and that A(ω_f) → F₀/k as ω_f → 0.

Beats

18. The identity

    cos(A + B) - cos(A - B) = -2 sin A sin B

may be used here. In this case, if A = 2t, B = t, we have A + B = 3t, A - B = t. Hence,

    cos 3t - cos t = -2 sin 2t sin t.

The Beat Goes On

19. The trigonometric identity

    sin(A + B) - sin(A - B) = 2 sin B cos A

may be used here. Let A = 2t and B = t. From this we get A + B = 3t and A - B = t. Hence,

    sin 3t - sin t = 2 sin((3t - t)/2) cos((3t + t)/2) = 2 sin t cos 2t.

Steady State

Note: We must be careful in finding the phase angle using the formula

    δ = tan^(-1)(B/A),

because we don't know in which quadrant δ lies using δ = tan^(-1)(B/A). The value of δ you get might be π units different from the correct value. Unless you know by some other means in which quadrant δ lies, it is best to use the two equations

    C cos δ = A,  C sin δ = B.

A good rule of thumb is to think of the AB plane; when both A and B are positive, δ will be in the first quadrant (i.e., δ between 0 and π/2), but when both A and B are negative, δ will be in the third quadrant, and so on.
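This quadrant bookkeeping is exactly what the two-argument arctangent does in software (an aside added here, not part of the printed solution): atan2(B, A) returns the δ with C cos δ = A and C sin δ = B, while atan(B/A) can be off by π.

```python
import math

# Coefficients from Problem 4's steady state: A = -3/5, B = 4/5.
A, B = -3/5, 4/5
C = math.hypot(A, B)              # amplitude, here 1
delta_naive = math.atan(B / A)    # negative: wrong (fourth) quadrant
delta = math.atan2(B, A)          # correct second-quadrant angle, ~2.2 rad
print(C, delta_naive, delta)
```

Here atan(B/A) ≈ -0.93 while atan2(B, A) ≈ 2.21, which differ by exactly π; only the latter matches C cos δ = A, C sin δ = B.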

20. x'' + 4x' + 4x = cos t

The homogeneous solution to the equation is

    x_h(t) = c1 e^(-2t) + c2 t e^(-2t).

We use the method of undetermined coefficients to find

    x_p(t) = A cos t + B sin t.

Differentiating, we get

    x_p'(t) = -A sin t + B cos t
    x_p''(t) = -A cos t - B sin t.

Substituting into the equation gives the equation

    x_p'' + 4x_p' + 4x_p = (3A + 4B) cos t + (-4A + 3B) sin t = cos t.

Hence A = 3/25, B = 4/25, with the particular solution

    x_p(t) = (3/25) cos t + (4/25) sin t.

Nothing in x_p dies off with time; this is our steady-state solution. Putting this in polar form,

    C = √((3/25)^2 + (4/25)^2) = 5/25 = 1/5,
    δ = tan^(-1)(4/3) ≈ 0.93 radians.

Hence, the steady-state solution is

    x_p(t) = x_ss(t) = 0.20 cos(t - 0.93).
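The polar-form conversion above can be cross-checked numerically (a check added here, not part of the printed solution):

```python
import math

# Numeric cross-check of Problem 20's polar form: A = 3/25, B = 4/25
# should give amplitude 1/5 and phase angle about 0.93 radians.
A, B = 3/25, 4/25
C = math.hypot(A, B)       # sqrt(A^2 + B^2)
delta = math.atan2(B, A)   # first quadrant, since A, B > 0
print(C, delta)
```

This prints C = 0.2 and δ ≈ 0.927, matching the 0.20 cos(t - 0.93) given above.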

21. x'' + 2x' + 2x = 2 cos t

The roots of the characteristic equation are -1 ± i. We use for the particular solution:

    x_p(t) = A cos t + B sin t = C cos(t - δ).

Another approach to x_p is to note that F₀ = 2, ω₀ = √2, ω_f = 1, m = 1, b = 2 and simply substitute these numbers into the text solution to find

    x_ss(t) = (2/√5) cos(t - δ),

where the phase angle is δ = tan^(-1)(2) ≈ 1.1 radians.

22. x'' + x' + x = 4 cos 3t

The roots of the characteristic equation are -1/2 ± (√3/2)i. Notice that, as in previous problems, there will be an e^(-t/2) involved in all of the terms of the homogeneous solution, so x_h is transient, and none of these terms will be involved in x_p. We move on to find a particular solution, using the method of undetermined coefficients. Let

    x_p(t) = A cos 3t + B sin 3t.

Then we have

    x_p' = -3A sin 3t + 3B cos 3t,  x_p''(t) = -9A cos 3t - 9B sin 3t.

Hence,

    x_p'' + x_p' + x_p = (-8A + 3B) cos 3t + (-3A - 8B) sin 3t = 4 cos 3t.

Solving we get

    A = -32/73,  B = 12/73.

In polar coordinates we have

    C = √(A^2 + B^2) = √1168/73 = 4/√73,
    δ = arctan(12/(-32)) ≈ 2.78 radians (second quadrant, since A < 0 and B > 0).

Hence, the steady-state solution is

    x_p(t) = x_ss(t) = (4/√73) cos(3t - 2.78).

Resonance

23. The differential equation is given by

    x'' + 12x = 16 cos ωt.

The circular frequency is

    ω₀ = √(k/m) = √12 = 2√3 radians per second,

the frequency is

    f = ω₀/(2π) = √3/π oscillations per second,

and the period of oscillations is

    T = π/√3 seconds.

24. If resonance exists, the input frequency ω_f is the same as the natural frequency ω₀ = 2√3 (see Problem 23). Hence, we have the initial-value problem

    x'' + 12x = 16 cos(2√3 t), x(0) = x'(0) = 0.

This equation has homogeneous solution

    x_h(t) = c1 cos(2√3 t) + c2 sin(2√3 t).

To find a particular solution we seek a function of the form

    x_p = At cos(2√3 t) + Bt sin(2√3 t).

Differentiating and substituting into the differential equation yields

    A = 0,  B = 16/(4√3) = 4√3/3,

so the general solution is

    x(t) = c1 cos(2√3 t) + c2 sin(2√3 t) + (4√3/3) t sin(2√3 t).

Substituting this into x(0) = x'(0) = 0 yields c1 = 0, c2 = 0. Hence, the solution to the IVP is

    x(t) = (4√3/3) t sin(2√3 t).
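A symbolic cross-check of the resonant solution (added here, not part of the printed solution):

```python
import sympy as sp

t = sp.symbols('t')

# Check of Problem 24: the resonant solution should satisfy
# x'' + 12x = 16 cos(2*sqrt(3) t) with x(0) = x'(0) = 0.
x = (4*sp.sqrt(3)/3) * t * sp.sin(2*sp.sqrt(3)*t)
ode_residual = sp.simplify(
    sp.diff(x, t, 2) + 12*x - 16*sp.cos(2*sp.sqrt(3)*t))
print(ode_residual)                            # 0
print(x.subs(t, 0), sp.diff(x, t).subs(t, 0))  # 0 0
```

The residual vanishes and both initial conditions hold; the t factor in front of the sine is what produces the unbounded resonant growth.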

Ed's Buoy

25. (a) Simple harmonic motion with

    m = 2000/32 = 125/2 = 62.5 slugs,
    T = 2π/ω₀ = 5 seconds,

hence

    ω₀ = √(k/m) = 2π/5

or

    k = m ω₀^2 = (125/2)(4π^2/25) = 10π^2.

We measure the displacement x(t) of the buoy from the water level, with x(t) = 0 corresponding to the position of the buoy with (4 + 2)/2 = 3 feet being above water. Because the forcing in rough seas has an amplitude of 3 feet and a period of 7 seconds, the frequency of the forced response is 2π/7. We therefore get the equation

    62.5x'' + 10π^2 x = 3 cos(2πt/7).

We are interested in the steady-state solution of this equation, hence we use the method of undetermined coefficients to find a particular solution. In this case we let

    x_p = A cos(2πt/7) + B sin(2πt/7).

We now differentiate and substitute into the equation, yielding A = 49/(80π^2), B = 0. Hence, we have

    x_p(t) = x_ss = (49/(80π^2)) cos(2πt/7) ≈ 0.06 cos(2πt/7).

[Although no friction term has been included in the preceding DE, there will in reality be such a term, so the homogeneous solution would go to zero leaving only the oscillation x_ss(t).]

(b) The steady-state solution never varies more than 49/(80π^2) ≈ 0.06 feet from its equilibrium position 3 feet above the level water line. The steady-state solution has the buoy moving in phase with the waves so when a 3-foot wave crest hits, the buoy's height above sea level is approximately 3.06 feet. Thus the buoy is always at least 0.06 feet above the water and is never submerged.

General Solution of the Damped Forced System

26. (a) We know the form of the particular solution is

    x_ss(t) = A cos ω_f t + B sin ω_f t.

Substituting this into the equation

    mx'' + bx' + kx = F₀ cos ω_f t

and simplifying, we find

    [(k - mω_f^2)A + bω_f B] cos ω_f t + [(k - mω_f^2)B - bω_f A] sin ω_f t = F₀ cos ω_f t.

Setting the coefficients of the sine and cosine terms equal yields the two equations

    (k - mω_f^2)A + bω_f B = F₀
    (k - mω_f^2)B - bω_f A = 0.

Solving, we obtain

    A = F₀(k - mω_f^2) / [(k - mω_f^2)^2 + b^2 ω_f^2]
    B = F₀ bω_f / [(k - mω_f^2)^2 + b^2 ω_f^2].

(b) From part (a) we have

    x_ss(t) = F₀ / [(k - mω_f^2)^2 + b^2 ω_f^2] · [(k - mω_f^2) cos ω_f t + bω_f sin ω_f t].

Rewriting this in polar form yields

    x_ss(t) = F₀ / √((k - mω_f^2)^2 + b^2 ω_f^2) · cos(ω_f t - δ),

with tan δ = bω_f / (m(ω₀^2 - ω_f^2)). From this equation it can be seen that the long-term response of the system is oscillatory with the same frequency ω_f as the forcing term, but with a phase lag.
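The linear system in part (a) can also be solved with a computer algebra system (a cross-check added here, not part of the printed solution):

```python
import sympy as sp

# Re-derivation of the Problem 26(a) coefficients: solve the same pair
# of linear equations for A and B symbolically.
A, B = sp.symbols('A B')
F0, k, m, b, w = sp.symbols('F_0 k m b omega_f', positive=True)

eqs = [(k - m*w**2)*A + b*w*B - F0,
       (k - m*w**2)*B - b*w*A]
sol = sp.solve(eqs, [A, B], dict=True)[0]

denom = (k - m*w**2)**2 + b**2*w**2
diff_A = sp.simplify(sol[A] - F0*(k - m*w**2)/denom)  # 0
diff_B = sp.simplify(sol[B] - F0*b*w/denom)           # 0
print(diff_A, diff_B)
```

Both differences reduce to zero, confirming the closed-form coefficients above.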

Phase Portrait Recognition

27. x'' + 0.3x' + x = cos t

(C) We have damping but we also have a sinusoidal forcing term. Hence, the homogeneous solution goes to zero and particular solutions consist of sines and cosines, which give rise to circles in the phase plane. Therefore, starting from the origin x(0) = x'(0) = 0 we get a curve that approaches a circle from the inside.

28. x'' + x = 0

(A) The equation models the undamped harmonic oscillator, which has circular trajectories.

29. x'' + x = cos t

(D) This equation has resonance so the trajectories in phase space spiral to infinity.

30. x'' + 0.3x' + x = 0

(B) The system is unforced but damped, and hence trajectories must approach (x, x') = (0, 0).

Matching 3D Graphs

31. (a) E  (b) B  (c) C, D  (d) A, C  (e) B, D, E  (f) C, D  (g) A

Mass-Spring Analysis I

32. (a) x_h = 4 cos 4t - 3 sin 4t

(b) The amplitude of x_h is √(4^2 + 3^2) = 5.

(c) The amplitude (time-varying) of x_p is 5t.

(d) x_p = 5t sin 4t; x_p will be unchanged.

(e) Because ω_f = ω₀ and ω₀ = √(k/m) = √k = 4 (with m = 1), k = 16.

(f) The system is in a state of pure resonance because ω_f = ω₀. The mass will oscillate with increasing amplitude.

Electrical Version

33. (a) Q_h = 4 cos 4t - 5 sin 4t

(b) The amplitude of the transient solution is A = √(4^2 + 5^2) = √41.

(c) Q_ss = Q_p = 6t cos 4t

(d) Q_p = 6t cos 4t

(e) 1/√(LC) = 4 with L = 1, so 1/C = 16 and C = 1/16.

(f) The charge on the capacitor will oscillate with ever-increasing amplitude due to pure resonance.

Mass-Spring Analysis II

34. (a) x_h = 3e^(-2t) cos t - 2e^(-2t) sin t

(b) From the exponential function e^(-2t) we see that -b/(2m) = -2. Hence if m = 1, b = 4.

(c) Underdamped

(d) The amplitude (time-varying) of x_h is √13 e^(-2t).

(e) x_ss = x_p = √2 cos(5t - δ)

(f) ω_f = 5 rad/sec. The damped frequency is β = √(4k - 16)/2 = 1, so k = 5 nt/m, and ω₀ = √5 rad/sec. From Formula (19), we obtain

    √2 = F₀ / √((k - mω_f^2)^2 + b^2 ω_f^2) = F₀ / √((5 - 25)^2 + 4^2·5^2) = F₀ / √800.

Therefore F₀ = 40 nt.

Perfect Aim

35. (a) The dart is fired straight at the target with initial velocity v₀. Let y_D denote the vertical position of the dart at time t. Then

y_D'' = −g
y_D' = −gt + c.

Because y_D'(0) = v₀ sin θ, we have y_D' = −gt + v₀ sin θ. Integrating,

y_D = −(1/2)gt² + (v₀ sin θ)t + c,

and y_D(0) = 0, so

y_D = −(1/2)gt² + (v₀ sin θ)t.

Now consider the target. Let y_T denote the vertical position of the target at time t. The initial conditions are y_T(0) = y₀ and y_T'(0) = 0. By similar calculations,

y_T = y₀ − (1/2)gt².

(b) To find the time t₁ when the heights of dart and target are equal, set y_T(t) = y_D(t). Then

y₀ − (1/2)gt² = −(1/2)gt² + (v₀ sin θ)t,

so that t₁ = y₀/(v₀ sin θ) and

x₁ = (v₀ cos θ)t₁ = y₀ cos θ / sin θ = y₀/tan θ.

However, tan θ = y₀/x₀, so that x₁ = x₀ (i.e., the dart hits the target).

(c) Substituting t₁ into either equation for the height of the dart or the target at impact yields

y_T = y₀ − g y₀² / (2(v₀ sin θ)²).

Simplifying by using the diagram, so that sin θ = y₀/d with d = √(x₀² + y₀²), we obtain

y_T = y₀ − (1/2)g(d/v₀)².
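The meeting-time calculation in parts (a)–(c) can be spot-checked numerically. The sketch below is not part of the original solution; the target data x0, y0 and speed v0 are sample values of our own choosing, with g = 32.2 ft/sec².

```python
import math

# Hypothetical sample data: target at (x0, y0), dart speed v0, gravity g
g, x0, y0, v0 = 32.2, 30.0, 40.0, 90.0
d = math.hypot(x0, y0)              # distance to the target
theta = math.atan2(y0, x0)          # aim angle, so tan(theta) = y0/x0

def y_dart(t):                      # y_D = -(1/2) g t^2 + (v0 sin(theta)) t
    return -0.5 * g * t**2 + v0 * math.sin(theta) * t

def y_target(t):                    # y_T = y0 - (1/2) g t^2
    return y0 - 0.5 * g * t**2

t1 = y0 / (v0 * math.sin(theta))    # meeting time from part (b)
x1 = v0 * math.cos(theta) * t1      # horizontal dart position at t1

same_height = abs(y_dart(t1) - y_target(t1)) < 1e-9
hits_target = abs(x1 - x0) < 1e-9
impact_height = y0 - 0.5 * g * (d / v0) ** 2   # formula from part (c)
```

Note that t₁ = y₀/(v₀ sin θ) = d/v₀, which is why the part (c) formula matches y_T(t₁).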

428 CHAPTER 4 Higher-Order Linear Differential Equations

Extrema of the Amplitude Response

36. We write

A(ω_f) = F₀ / √((k − mω_f²)² + b²ω_f²) = (F₀/m) / √((k/m − ω_f²)² + (bω_f/m)²).

Differentiating A with respect to ω_f, we find

A'(ω_f) = (2F₀ω_f/m) · (k/m − ω_f² − b²/(2m²)) / [(k/m − ω_f²)² + (bω_f/m)²]^{3/2},

from which it follows that A'(ω_f) = 0 if and only if ω_f = 0 or

ω_f² = k/m − b²/(2m²).

When b² > 2mk, this ω_f is not real. Hence A'(ω_f) = 0 only when ω_f = 0. In this case A(ω_f) damps to zero as ω_f goes from 0 to ∞. It is clear then that the maximum of A(ω_f) occurs when ω_f = 0 and has the value A(0) = F₀/k.

When b² < 2mk, then ω_f is real and positive. It is easy, using the sign of the derivative, to see that the maximum of A(ω_f) occurs at ω_f² = k/m − b²/(2m²). Evaluating the amplitude response at this value of ω_f yields the expression

A_max = F₀ / (b√(k/m − b²/(4m²))).
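The claimed maximizer and maximum can be checked by a brute-force scan of the response curve. In this sketch (ours, not the text's) the parameters m, b, k, F₀ are sample values chosen so that b² < 2mk:

```python
import math

# Sample parameters with b^2 < 2mk, so the peak sits at a positive frequency
m, b, k, F0 = 1.0, 0.5, 4.0, 1.0

def A(w):  # amplitude response A(w) = F0 / sqrt((k - m w^2)^2 + b^2 w^2)
    return F0 / math.sqrt((k - m * w**2) ** 2 + (b * w) ** 2)

w_star = math.sqrt(k / m - b**2 / (2 * m**2))            # claimed maximizer
A_max = F0 / (b * math.sqrt(k / m - b**2 / (4 * m**2)))  # claimed maximum

# brute-force scan of the response curve on a fine grid
ws = [i * 1e-4 for i in range(1, 40000)]
w_best = max(ws, key=A)
```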

Suggested Journal Entry

37. Student Project


4.7 Conservation and Conversion

Total Energy of a Mass-Spring

1. x'' + x = 0, x(0) = 1, x'(0) = −4

The total energy of the system is E = (1/2)mx'² + (1/2)kx². Here m = 1, k = 1, so

E = (1/2)x'² + (1/2)x².

Because the system is conservative, the energy does not change over time. Initially we have x(0) = 1, x'(0) = −4, so the initial energy of this system is

E = (1/2)(−4)² + (1/2)(1)² = 17/2,

which remains constant in time.
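As a concrete check (our own sketch, not part of the original solution): the explicit solution of this IVP is x(t) = cos t − 4 sin t, and evaluating E along it confirms the constant value 17/2:

```python
import math

def x(t):     # solution of x'' + x = 0 with x(0) = 1, x'(0) = -4
    return math.cos(t) - 4 * math.sin(t)

def xdot(t):
    return -math.sin(t) - 4 * math.cos(t)

def E(t):     # total energy (1/2)x'^2 + (1/2)x^2
    return 0.5 * xdot(t) ** 2 + 0.5 * x(t) ** 2

energies = [E(0.3 * i) for i in range(20)]   # sample E over one full cycle and more
```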

Nonconservative Mass-Spring System

2. x'' + 2x' + 26x = 0, x(0) = 1, x'(0) = 4

(a) The solution of the IVP is

x(t) = e^{−t}[sin 5t + cos 5t].

At time t = π/5 we have x(π/5) = −e^{−π/5}. Also

x'(t) = −e^{−t}[sin 5t + cos 5t] + e^{−t}[5 cos 5t − 5 sin 5t],

so

x'(π/5) = e^{−π/5} − 5e^{−π/5} = −4e^{−π/5}.

(b) Because m = 1, k = 26, we have

E(π/5) = (1/2)m[x'(π/5)]² + (1/2)k[x(π/5)]² = (1/2)(16e^{−2π/5}) + (1/2)(26e^{−2π/5}) = 21e^{−2π/5}.

(c) Because the initial energy of the system was

E(0) = (1/2)m[x'(0)]² + (1/2)k[x(0)]² = (1/2)(1)(4)² + (1/2)(26)(1)² = 21 joules (or ergs),

the energy loss is

21 − 21e^{−2π/5} = 21(1 − e^{−2π/5}).
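A numerical spot-check of parts (a)–(c), using the closed-form solution above (a sketch of ours, not part of the text):

```python
import math

def x(t):     # solution of x'' + 2x' + 26x = 0, x(0) = 1, x'(0) = 4
    return math.exp(-t) * (math.sin(5 * t) + math.cos(5 * t))

def xdot(t):
    return (-math.exp(-t) * (math.sin(5 * t) + math.cos(5 * t))
            + math.exp(-t) * (5 * math.cos(5 * t) - 5 * math.sin(5 * t)))

def E(t):     # E = (1/2) m x'^2 + (1/2) k x^2 with m = 1, k = 26
    return 0.5 * xdot(t) ** 2 + 13 * x(t) ** 2

t1 = math.pi / 5
predicted = 21 * math.exp(-2 * math.pi / 5)   # the value derived in part (b)
```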

General Formula for Total Energy in an LC-Circuit

3. LQ'' + (1/C)Q = 0, Q(0) = Q₀, I(0) = I₀

The total energy of this LC system is the constant value

E(t) = (1/2)LQ'² + (1/(2C))Q² = (1/2)LI₀² + (1/(2C))Q₀².

Energy in an LC-Circuit

4. LQ'' + (1/C)Q = 0, Q(0) = Q₀, I(0) = I₀

The total energy of this LC system is the constant value

E(t) = (1/2)LQ'² + (1/(2C))Q² = (1/2)LI₀² + (1/(2C))Q₀² = (1/2)(4)(1)² + (1/2)(16)(4)² = 130.

Energy Loss in LRC-Circuit

5. LQ'' + RQ' + (1/C)Q = 0, Q(0) = Q₀, I(0) = I₀

We are given L = 1 henry, R = 1 ohm, C = 4 farads, Q₀ = 0 coulombs, I₀ = 2 amps. Hence, the IVP is

Q'' + Q' + 0.25Q = 0, Q(0) = 0, I(0) = 2,

whose solution is given by

Q(t) = 2te^{−t/2}
I(t) = Q'(t) = 2e^{−t/2} − te^{−t/2} = (2 − t)e^{−t/2}.

Hence, the initial energy is

E(0) = (1/2)LI₀² + (1/(2C))Q₀² = (1/2)(1)(2)² + (1/8)(0)² = 2 joules.

At time t the energy is

E(t) = (1/2)LI(t)² + (1/(2C))Q(t)² = (1/2)(2 − t)²e^{−t} + (1/8)(4t²)e^{−t} = e^{−t}(t² − 2t + 2).

Hence, after time t the energy loss is

2 − e^{−t}(t² − 2t + 2) joules.
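A quick check (ours, not the text's) that the formula E(t) = e^{−t}(t² − 2t + 2) agrees with (1/2)LI² + Q²/(2C) along the solution:

```python
import math

def Q(t):      # solution of Q'' + Q' + 0.25 Q = 0, Q(0) = 0, I(0) = 2
    return 2 * t * math.exp(-t / 2)

def I(t):      # I = Q'
    return (2 - t) * math.exp(-t / 2)

def E(t):      # E = (1/2) L I^2 + Q^2/(2C) with L = 1, C = 4
    return 0.5 * I(t) ** 2 + Q(t) ** 2 / 8

def E_formula(t):
    return math.exp(-t) * (t * t - 2 * t + 2)

checks = [abs(E(t) - E_formula(t)) for t in (0.0, 0.5, 1.0, 3.0, 7.0)]
```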


Questions of Energy

6. x'' − x + x³ = 0

(a) KE = (1/2)x'², V(x) = ∫(−x + x³)dx = −(1/2)x² + (1/4)x⁴

E(x, x') = KE + V = (1/2)x'² − (1/2)x² + (1/4)x⁴

(b) To find the equilibrium points, we seek the solutions of

∂E/∂x = −x + x³ = 0, ∂E/∂x' = x' = 0.

Solving these equations, we find three equilibrium points at (−1, 0), (0, 0), and (1, 0). Because x' = 0 at all these points, we determine which points are stable (local minima of V) and which are unstable (local maxima) by simply drawing the graph of V(x) (shown in part (c)).

(c) The graph of the potential energy V(x) shows local minima at x = ±1 and a local maximum at x = 0. Hence, (−1, 0) and (1, 0) are stable points, and (0, 0) is an unstable point.

[Figure: potential energy V(x) of x'' − x + x³ = 0]
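The classification of the three equilibria can be confirmed numerically from V alone. In this sketch (ours), central differences with an arbitrarily chosen step h stand in for V' and V'':

```python
# Numerical check of the equilibria of V(x) = -x^2/2 + x^4/4 for x'' - x + x^3 = 0
h = 1e-5

def V(x):
    return -0.5 * x**2 + 0.25 * x**4

def Vp(x):   # V'(x) via central difference
    return (V(x + h) - V(x - h)) / (2 * h)

def Vpp(x):  # V''(x) via second central difference
    return (V(x + h) - 2 * V(x) + V(x - h)) / h**2

# V' vanishes at the three equilibria x = -1, 0, 1
slopes = [Vp(x) for x in (-1.0, 0.0, 1.0)]
# minima at x = +-1 (stable), maximum at x = 0 (unstable)
curvatures = [Vpp(x) for x in (-1.0, 0.0, 1.0)]
```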

7. x'' − x − x³ = 0

(a) KE = (1/2)x'², V(x) = ∫(−x − x³)dx = −(1/2)x² − (1/4)x⁴

E(x, x') = KE + V = (1/2)x'² − (1/2)x² − (1/4)x⁴

(b) To find the equilibrium points, we seek the solutions of the two equations

∂E/∂x = −x − x³ = 0, ∂E/∂x' = x' = 0.

Solving these equations, we find one equilibrium point at (0, 0). Because x' = 0 there, we determine whether it is a stable point (local minimum of V) or an unstable point (local maximum) by simply drawing the graph of V(x) shown in part (c).

The graph of the potential energy V(x) shows a local maximum at x = 0, and hence (0, 0) is an unstable equilibrium point.

(c) See the figure to the right.

[Figure: potential energy V(x) of x'' − x − x³ = 0]

8. x'' − x + x² = 0

(a) KE = (1/2)x'², V(x) = ∫(−x + x²)dx = −(1/2)x² + (1/3)x³

E(x, x') = KE + V = (1/2)x'² − (1/2)x² + (1/3)x³

(b) To find the equilibrium points, we seek the solutions of the two equations

∂E/∂x = −x + x² = 0, ∂E/∂x' = x' = 0.

Solving these equations, we find two equilibrium points at (0, 0) and (1, 0). Because x' = 0 for both points, we determine which are stable (local minima of V) and which are unstable (local maxima) by simply drawing the graph of V(x) shown in part (c).

The graph of the potential energy V(x) shows a local minimum at x = 1 and a local maximum at x = 0. Hence, (0, 0) is an unstable point and (1, 0) is a stable point.

(c) See figure.

[Figure: potential energy V(x) of x'' − x + x² = 0]

9. x'' + x² = 0

(a) KE = (1/2)x'², V(x) = ∫x² dx = (1/3)x³

E(x, x') = KE + V = (1/2)x'² + (1/3)x³

(b) To find the equilibrium points, we seek the solutions of the equations

∂E/∂x = x² = 0, ∂E/∂x' = x' = 0.

Solving these equations, we find one equilibrium point at (0, 0). To determine whether the point is stable, we note that the potential energy V(x) = (1/3)x³ has neither a local maximum nor a local minimum at x = 0, so (0, 0) is an unstable (or semistable) equilibrium point.

(c) The graph of V(x) is a simple cubic.

[Figure: potential energy V(x) of x'' + x² = 0]

10. x'' − 1 − e^x = 0

(a) KE = (1/2)x'², V(x) = ∫(−1 − e^x)dx = −e^x − x

E(x, x') = KE + V = (1/2)x'² − e^x − x

(b) To find the equilibrium points, we seek the solutions of the two equations

∂E/∂x = −1 − e^x = 0, ∂E/∂x' = x' = 0.

It is clear that the first of these equations has no root, so the equation has no equilibrium points.

(c) Note that the graph of the potential energy V(x) does not have any local maxima or minima, which corresponds to the lack of equilibrium points found in part (b).

[Figure: potential energy V(x) of x'' − 1 − e^x = 0]


11. x'' + (x − 1)² = 0

(a) KE = (1/2)x'², V(x) = ∫(x − 1)² dx = (1/3)x³ − x² + x

E(x, x') = KE + V = (1/2)x'² + (1/3)x³ − x² + x

(b) To find the equilibrium points, we seek the solutions of the two equations

∂E/∂x = x² − 2x + 1 = 0, ∂E/∂x' = x' = 0.

Solving these equations yields only one real equilibrium point, (1, 0). Because x' = 0, we determine whether it is stable (local minimum of V) or unstable (local maximum) by simply drawing the graph of V(x) shown in part (c). Thus, we find that (1, 0) is an unstable (or semistable) equilibrium point.

(c) The graph of the potential energy V(x) is shown. Note that V(x) has neither a maximum nor a minimum at x = 1, and hence (1, 0) is an unstable (or semistable) equilibrium point.

[Figure: potential energy V(x) of x'' + (x − 1)² = 0]

12. x'' = 1/x²

(a) KE = (1/2)x'², V(x) = −∫(1/x²)dx = 1/x

E(x, x') = KE + V = (1/2)x'² + 1/x

(b) To find the equilibrium points, we seek the solutions of the two equations

∂E/∂x = −1/x² = 0, ∂E/∂x' = x' = 0.

Because the first equation does not have a solution, there is no equilibrium point.

(c) The graph of the potential energy V(x) is shown. Note that V(x) does not have any local maxima or minima, which corresponds to the absence of equilibrium points noted in part (b).

[Figure: potential energy V(x) of x'' = 1/x²]

13. x'' = (x − 1)(x − 2)

(a) KE = (1/2)x'², V(x) = −∫(x − 1)(x − 2)dx = −(1/3)x³ + (3/2)x² − 2x

E(x, x') = KE + V = (1/2)x'² − (1/3)x³ + (3/2)x² − 2x

(b) To find the equilibrium points, we seek the solutions of the two equations

∂E/∂x = −x² + 3x − 2 = 0, ∂E/∂x' = x' = 0.

Solving these equations, we find two equilibrium points at (1, 0) and (2, 0). Because x' = 0, we determine which points are stable (local minima of V) and which are unstable (local maxima) by simply drawing the graph of V(x) shown in part (c).

(c) The graph of the potential energy V(x) is shown. Note that V(x) has a local minimum at x = 1 and a local maximum at x = 2. Hence, (1, 0) is a stable point and (2, 0) is an unstable point.

[Figure: potential energy V(x) of x'' = (x − 1)(x − 2)]


Conservative or Nonconservative?

14. x'' + x² = 0

Conservative because it is of the form mx'' + F(x) = 0. The total energy of this conservative system is

E(x, x') = (1/2)mx'² + ∫F(x)dx = (1/2)x'² + (1/3)x³.

We draw contour curves for this surface over the xx'-plane to view the trajectories of the differential equation in the xx'-plane.

[Figure: phase-plane trajectories of x'' + x² = 0]

15. x'' + kx = 0

Conservative because it has the form mx'' + F(x) = 0. The total energy of this conservative system is

E(x, x') = (1/2)mx'² + ∫F(x)dx = (1/2)x'² + (1/2)kx².

We draw contour curves for this surface over the xx'-plane to view the trajectories of the differential equation in the xx'-plane. The trajectories of x'' + kx = 0 are ellipses, each with height √k times its width.

[Figure: phase-plane trajectories of x'' + kx = 0]

16. x'' + x' + x = 1

Not conservative due to the x' term. The spiral trajectories in its phase plane cannot be level curves of any surface.

[Figure: spiral phase-plane trajectories]


17. θ'' + sin θ = 0

Conservative because it is of the form mθ'' + F(θ) = 0. The total energy of this conservative system is

E(θ, θ') = (1/2)mθ'² + ∫F(θ)dθ = (1/2)θ'² − cos θ.

We can draw contour curves for this surface over the θθ'-plane to view the trajectories of the differential equation in the θθ'-plane.

[Figure: phase-plane trajectories of θ'' + sin θ = 0]

18. θ'' + sin θ = 1

Conservative because it can be written in the form mθ'' + F(θ) = 0, where F(θ) = sin θ − 1. The total energy is

E(θ, θ') = (1/2)θ'² − cos θ − θ.

We can draw contour curves for this surface over the θθ'-plane to view the trajectories of the differential equation in the θθ'-plane.

[Figure: phase-plane trajectories of θ'' + sin θ = 1]

19. θ'' + θ' + sin θ = 1

Not conservative due to the θ' term. The phase plane portrait shows equilibria along the θ-axis. The trajectories cannot be level curves of any surface.

[Figure: trajectories of a nonconservative system]


Time-Reversible Systems

20. (a) mx'' = F(x). If we introduce backwards time τ = −t, then taking derivatives yields

dx/dt = (dx/dτ)(dτ/dt) = −dx/dτ
d²x/dt² = (d/dτ)(−dx/dτ)(dτ/dt) = (−d²x/dτ²)(−1) = d²x/dτ².

The conservative system x'' + F(x) = 0 is transformed into exactly the same equation

d²x/dτ² + F(x) = 0

in backwards time τ.

(b) The solution of the IVP

x'' + x = 0, x(0) = 1, x'(0) = 0

is x(t) = cos t. If we replace t by −t, it yields the solution x(−t) = cos(−t) = cos t. Hence, running the system backwards looks exactly like running the system forward.

(c) The solution of the IVP x'' = −mg, x(0) = 0, x'(0) = 100 is

x(t) = −(1/2)mgt² + 100t.

If we replace t by −t, we get x(−t) = −(1/2)mgt² − 100t. Hence, the solution is not the same, and the system is not time reversible.

(d) If we think of a time-reversible system as a system whose equations of motion are the same when we replace t by −t, we might make the following conclusions:

(i) yes (ii) no (iii) no (iv) no (v) yes

Computer Lab: Undamped Spring

21. IDE Lab

Computer Lab: Damped Spring

22. IDE Lab


Conversion of Equations

23. x'' + ω₀²x = f(t)

Letting x₁ = x, x₂ = x', we have

x₁' = x₂
x₂' = −ω₀²x₁ + f(t).

In matrix form, this becomes

[x₁'; x₂'] = [[0, 1], [−ω₀², 0]] [x₁; x₂] + [0; f(t)].

24. θ'' + (g/L) sin θ = 0

Letting x₁ = θ, x₂ = θ', we have

x₁' = x₂
x₂' = −(g/L) sin x₁.

This system is not linear, so there is no matrix form.
25. ay'' + by' + cy = 0

Letting x₁ = y, x₂ = y', we have

x₁' = x₂
x₂' = −(c/a)x₁ − (b/a)x₂.

In matrix form, this becomes

[x₁'; x₂'] = [[0, 1], [−c/a, −b/a]] [x₁; x₂].
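To illustrate the conversion in use, the sketch below (ours, not the text's) picks the sample coefficients a = 1, b = 3, c = 2, so the sample equation is y'' + 3y' + 2y = 0; it integrates the converted first-order system with a classical RK4 step and compares against the exact solution y = e^{−t} selected by y(0) = 1, y'(0) = −1:

```python
import math

a, b, c = 1.0, 3.0, 2.0           # sample equation y'' + 3y' + 2y = 0

def f(state):                      # converted system: x1' = x2, x2' = -(c/a)x1 - (b/a)x2
    x1, x2 = state
    return (x2, -(c / a) * x1 - (b / a) * x2)

def rk4_step(state, h):            # one classical Runge-Kutta 4 step
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + (h / 6) * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

state, h = (1.0, -1.0), 0.01       # y(0) = 1, y'(0) = -1 picks out y = e^{-t}
for _ in range(100):               # integrate to t = 1
    state = rk4_step(state, h)

y_numeric = state[0]
y_exact = math.exp(-1.0)
```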

26. LQ'' + RQ' + (1/C)Q = 0

Letting x₁ = Q, x₂ = dQ/dt, we have

x₁' = x₂
x₂' = −(1/(LC))x₁ − (R/L)x₂.

In matrix form, this becomes

[x₁'; x₂'] = [[0, 1], [−1/(LC), −R/L]] [x₁; x₂].

27. t²x'' + tx' + (t² − n²)x = 0

Letting x₁ = x, x₂ = x', we have

x₁' = x₂
x₂' = −((t² − n²)/t²)x₁ − (1/t)x₂.

In matrix form, this becomes

[x₁'; x₂'] = [[0, 1], [−(t² − n²)/t², −1/t]] [x₁; x₂].

28. x'' + (1 + sin ωt)x = 0

Letting x₁ = x, x₂ = x', we have

x₁' = x₂
x₂' = −(1 + sin ωt)x₁.

In matrix form, this becomes

[x₁'; x₂'] = [[0, 1], [−(1 + sin ωt), 0]] [x₁; x₂].

29. (1 − t²)y'' − 2ty' + n(n + 1)y = 0

Letting x₁ = y, x₂ = y', we have

x₁' = x₂
x₂' = −(n(n + 1)/(1 − t²))x₁ + (2t/(1 − t²))x₂.

In matrix form, this becomes

[x₁'; x₂'] = [[0, 1], [−n(n + 1)/(1 − t²), 2t/(1 − t²)]] [x₁; x₂].


30. d⁴y/dt⁴ + 3 d³y/dt³ + 2 d²y/dt² + 4 dy/dt + y = 1

If we introduce

x₁ = y, x₂ = dy/dt, x₃ = d²y/dt², x₄ = d³y/dt³,

we have the differential equations

x₁' = x₂
x₂' = x₃
x₃' = x₄
x₄' = −x₁ − 4x₂ − 2x₃ − 3x₄ + 1

or in matrix form

x' = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [−1, −4, −2, −3]] x + [0; 0; 0; 1].

Conversion of IVPs

31. y'' − 2y' + y = sin t, y(0) = 1, y'(0) = 1

Letting x₁ = y, x₂ = y' yields

x₁' = x₂, x₁(0) = 1
x₂' = −x₁ + 2x₂ + sin t, x₂(0) = 1.

In matrix form this becomes

[x₁'; x₂'] = [[0, 1], [−1, 2]] [x₁; x₂] + [0; sin t]; [x₁(0); x₂(0)] = [1; 1].

32. y''' + ty' + y = 1, y(0) = 0, y'(0) = 1, y''(0) = 2

Letting x₁ = y, x₂ = y', x₃ = y'' yields

x₁' = x₂, x₁(0) = 0
x₂' = x₃, x₂(0) = 1
x₃' = −x₁ − tx₂ + 1, x₃(0) = 2.

In matrix form, this becomes

[x₁'; x₂'; x₃'] = [[0, 1, 0], [0, 0, 1], [−1, −t, 0]] [x₁; x₂; x₃] + [0; 0; 1], [x₁(0); x₂(0); x₃(0)] = [0; 1; 2].

33. y'' + 3y' + 2z = e^{−t}, y(0) = 0, y'(0) = 1
z'' + 2y + z = 1, z(0) = 1, z'(0) = 0

Letting x₁ = y, x₂ = y', x₃ = z, x₄ = z' yields

x₁' = x₂, x₁(0) = 0
x₂' = −3x₂ − 2x₃ + e^{−t}, x₂(0) = 1
x₃' = x₄, x₃(0) = 1
x₄' = −2x₁ − x₃ + 1, x₄(0) = 0.

In matrix form this becomes

x' = [[0, 1, 0, 0], [0, −3, −2, 0], [0, 0, 0, 1], [−2, 0, −1, 0]] x + [0; e^{−t}; 0; 1], x(0) = [0; 1; 1; 0].

34. y''' + 2y' + z = 1, y(0) = 1, y'(0) = 0, y''(0) = 1
z' + 2y + z = sin t, z(0) = 1

Letting x₁ = y, x₂ = y', x₃ = y'', x₄ = z yields

x₁' = x₂, x₁(0) = 1
x₂' = x₃, x₂(0) = 0
x₃' = −2x₂ − x₄ + 1, x₃(0) = 1
x₄' = −2x₁ − x₄ + sin t, x₄(0) = 1.

In matrix form this becomes

x' = [[0, 1, 0, 0], [0, 0, 1, 0], [0, −2, 0, −1], [−2, 0, 0, −1]] x + [0; 0; 1; sin t], x(0) = [1; 0; 1; 1].


Conversion of Systems

35. x₁'' + x₁ + 2x₂ = e^{−t}
x₂'' + 2x₂ = 0

Letting z₁ = x₁, z₂ = x₁', z₃ = x₂, z₄ = x₂' yields the system

z₁' = z₂
z₂' = −z₁ − 2z₃ + e^{−t}
z₃' = z₄
z₄' = −2z₃.

In matrix form this becomes

z' = [[0, 1, 0, 0], [−1, 0, −2, 0], [0, 0, 0, 1], [0, 0, −2, 0]] z + [0; e^{−t}; 0; 0].

36. y''' = f(t, y, y', y'', z, z')
z'' = g(t, y, y', y'', z, z')

Letting x₁ = y, x₂ = y', x₃ = y'', x₄ = z, x₅ = z' yields

x₁' = x₂
x₂' = x₃
x₃' = f(t, x₁, x₂, x₃, x₄, x₅)
x₄' = x₅
x₅' = g(t, x₁, x₂, x₃, x₄, x₅).
37. x₁'' = a₁₁x₁ + a₁₂x₂ + a₁₃x₃
x₂'' = a₂₁x₁ + a₂₂x₂ + a₂₃x₃
x₃'' = a₃₁x₁ + a₃₂x₂ + a₃₃x₃

If we let z₁ = x₁, z₂ = x₁', z₃ = x₂, z₄ = x₂', z₅ = x₃, z₆ = x₃', we get

z₁' = z₂
z₂' = a₁₁z₁ + a₁₂z₃ + a₁₃z₅
z₃' = z₄
z₄' = a₂₁z₁ + a₂₂z₃ + a₂₃z₅
z₅' = z₆
z₆' = a₃₁z₁ + a₃₂z₃ + a₃₃z₅.

In matrix form z' = Az, this becomes

z' = [[0, 1, 0, 0, 0, 0],
[a₁₁, 0, a₁₂, 0, a₁₃, 0],
[0, 0, 0, 1, 0, 0],
[a₂₁, 0, a₂₂, 0, a₂₃, 0],
[0, 0, 0, 0, 0, 1],
[a₃₁, 0, a₃₂, 0, a₃₃, 0]] z.

Solving Linear Systems

38. x₁' = x₂
x₂' = −2x₁ − 3x₂

From the first DE, x₂ = x₁'. Substituting into the second DE gives (x₁')' = −2x₁ − 3x₁', or the second-order DE

x₁'' + 3x₁' + 2x₁ = 0.

Solving the second-order DE gives

x₁ = c₁e^{−t} + c₂e^{−2t}.

Substituting this result into the first DE gives

x₂ = x₁' = −c₁e^{−t} − 2c₂e^{−2t}.
39.

1 1 2

2 1 2

3 2

2 2

x x x

x x x

′ = −

′ = −

From first DE:

2 1 1

1 3

2 2

x x x ′ = − + . Substituting in second DE yields a second order DE to solve for

1

x .

1 1 1 1 1

1 1 1 1 1

1 1 1

2

1 1 2

1 3 1 3

2 2

2 2 2 2

1 3

2 3

2 2

2 0

t t

x x x x x

x x x x x

x x x

x c e c e

−

′

⎛ ⎞ ⎛ ⎞

′ ′ − + = − − +

⎜ ⎟ ⎜ ⎟

⎝ ⎠ ⎝ ⎠

′′ ′ ′ − + = + −

′′ ′ − − =

= +

To find

2

x , substitute the solution for

1

x back into the first DE.

( ) ( )

2 2 2

2 1 1 1 2 1 2 1 2

1 3 1 3 1

2 2

2 2 2 2 2

t t t t t t

x x x c e c e c e c e c e c e

− − −

′ = − + = − − + + = + .


40. x₁' = x₁ + x₂
x₂' = 4x₁ + x₂

From the first DE, x₂ = x₁' − x₁. Substituting into the second DE yields a second-order DE to solve for x₁:

(x₁' − x₁)' = 4x₁ + (x₁' − x₁)
x₁'' − x₁' = 3x₁ + x₁'
x₁'' − 2x₁' − 3x₁ = 0
x₁ = c₁e^{3t} + c₂e^{−t}.

From the first calculation, x₂ = x₁' − x₁, so

x₂ = (3c₁e^{3t} − c₂e^{−t}) − (c₁e^{3t} + c₂e^{−t}) = 2c₁e^{3t} − 2c₂e^{−t}.

41. x₁' = x₂ + t
x₂' = −2x₁ + 3x₂ + 5

From the first DE, x₂ = x₁' − t. Substituting into the second DE yields a second-order DE to solve for x₁:

(x₁' − t)' = −2x₁ + 3(x₁' − t) + 5
x₁'' − 1 = −2x₁ + 3x₁' − 3t + 5
x₁'' − 3x₁' + 2x₁ = −3t + 6
x₁h = c₁e^{t} + c₂e^{2t}.

To find x₁p by the method of undetermined coefficients, substitute x₁p = at + b, x₁p' = a, x₁p'' = 0 to obtain

0 − 3a + 2(at + b) = −3t + 6.

Comparing like terms:

Coefficients of t: 2a = −3, so a = −3/2.
Constants: −3a + 2b = 6, so b = 3/4.

Hence x₁p = −(3/2)t + 3/4. Therefore,

x₁ = c₁e^{t} + c₂e^{2t} − (3/2)t + 3/4.

From the first calculation x₂ = x₁' − t, so

x₂ = c₁e^{t} + 2c₂e^{2t} − 3/2 − t.
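The undetermined-coefficients answer is easy to verify by substitution; a minimal check of ours:

```python
# Check x1p = -(3/2)t + 3/4 against x1'' - 3x1' + 2x1 = -3t + 6 at sample points
def x1p(t):
    return -1.5 * t + 0.75

def lhs(t):            # x1p'' = 0 and x1p' = -3/2, so the left side is
    return 0.0 - 3 * (-1.5) + 2 * x1p(t)

residuals = [abs(lhs(t) - (-3 * t + 6)) for t in (0.0, 1.0, 2.5, -4.0)]
```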


Solving IVPs for Systems

42. x₁' = 6x₁ − 3x₂
x₂' = 2x₁ + x₂

From the first DE, x₂ = 2x₁ − (1/3)x₁'. Substituting into the second DE yields a second-order DE to solve for x₁:

(2x₁ − (1/3)x₁')' = 2x₁ + (2x₁ − (1/3)x₁')
x₁'' − 7x₁' + 12x₁ = 0
x₁ = c₁e^{3t} + c₂e^{4t}.

From the first calculation, x₂ = 2x₁ − (1/3)x₁', so

x₂ = 2(c₁e^{3t} + c₂e^{4t}) − (1/3)(3c₁e^{3t} + 4c₂e^{4t}) = c₁e^{3t} + (2/3)c₂e^{4t}.

Applying the initial conditions:

x₁(0) = 2 ⇒ c₁ + c₂ = 2
x₂(0) = 3 ⇒ c₁ + (2/3)c₂ = 3,

so c₂ = −3 and c₁ = 5. The solution to the IVP is

x₁ = 5e^{3t} − 3e^{4t}, x₂ = 5e^{3t} − 2e^{4t}.
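Substituting the closed-form answer back into both equations and the initial conditions (a check of ours, not part of the text):

```python
import math

def x1(t):   # x1 = 5 e^{3t} - 3 e^{4t}
    return 5 * math.exp(3 * t) - 3 * math.exp(4 * t)

def x2(t):   # x2 = 5 e^{3t} - 2 e^{4t}
    return 5 * math.exp(3 * t) - 2 * math.exp(4 * t)

def x1prime(t):
    return 15 * math.exp(3 * t) - 12 * math.exp(4 * t)

def x2prime(t):
    return 15 * math.exp(3 * t) - 8 * math.exp(4 * t)

ts = (0.0, 0.2, 1.0)
res1 = [abs(x1prime(t) - (6 * x1(t) - 3 * x2(t))) for t in ts]   # x1' = 6x1 - 3x2
res2 = [abs(x2prime(t) - (2 * x1(t) + x2(t))) for t in ts]       # x2' = 2x1 + x2
```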

43. x₁' = 3x₁ + 4x₂
x₂' = 2x₁ + x₂

From the first DE, x₂ = (1/4)x₁' − (3/4)x₁. Substituting into the second DE yields a second-order DE to solve for x₁:

((1/4)x₁' − (3/4)x₁)' = 2x₁ + ((1/4)x₁' − (3/4)x₁)
(1/4)x₁'' − (3/4)x₁' = (5/4)x₁ + (1/4)x₁'
x₁'' − 4x₁' − 5x₁ = 0
x₁ = c₁e^{5t} + c₂e^{−t}.

From the first calculation, x₂ = (1/4)x₁' − (3/4)x₁, so

x₂ = (1/4)(5c₁e^{5t} − c₂e^{−t}) − (3/4)(c₁e^{5t} + c₂e^{−t}) = (1/2)c₁e^{5t} − c₂e^{−t}.

Applying the initial conditions:

x₁(0) = 1 ⇒ c₁ + c₂ = 1
x₂(0) = −1 ⇒ (1/2)c₁ − c₂ = −1,

so c₁ = 0 and c₂ = 1. The solution to the IVP is

x₁ = e^{−t}, x₂ = −e^{−t}.

Counterexample

44. An example: The degenerate system

x₁' + x₂ + x₁ = 0
x₁' + x₂ + x₁ = 0,

where both equations are exactly the same, clearly cannot be written as a second-order equation in either x₁ or x₂. The reader might contemplate finding all the solutions of such an underdetermined system.

Another approach: Note that when we write an nth-order equation such as

ay'' + by' + cy = 0

as a system of first-order equations by letting x₁ = y, x₂ = y', the system has the form

[x₁'; x₂'] = [[0, 1], [−c/a, −b/a]] [x₁; x₂].

This shows we cannot obtain a second-order equation in x₁ with x₂ = x₁' unless the coefficient matrix has the preceding form, in which the first row contains a 0 and a 1. Hence, a system such as

[x₁'; x₂'] = [[1, 1], [4, 1]] [x₁; x₂]

cannot be transformed into a second-order equation in x₁ with x₂ = x₁'.

Coupled Mass-Spring System

45. Given the linear system

m x₁'' = −k₁x₁ + k₂(x₂ − x₁) = −(k₁ + k₂)x₁ + k₂x₂
m x₂'' = −k₂(x₂ − x₁) = k₂x₁ − k₂x₂,

we let

z₁ = x₁, z₂ = x₁', z₃ = x₂, z₄ = x₂'.

We then have the first-order system

z₁' = z₂
z₂' = −((k₁ + k₂)/m)z₁ + (k₂/m)z₃
z₃' = z₄
z₄' = (k₂/m)z₁ − (k₂/m)z₃.

In matrix form this becomes

z' = [[0, 1, 0, 0], [−(k₁ + k₂)/m, 0, k₂/m, 0], [0, 0, 0, 1], [k₂/m, 0, −k₂/m, 0]] z.

Satellite Problem

46. r'' = r(θ')² − k/r² + u₁(t)
θ'' = −2θ'(t)r'(t)/r(t) + (1/r(t))u₂(t)

Letting

x₁ = r, x₂ = r', x₃ = θ, x₄ = θ',

we have the system

x₁' = x₂
x₂' = x₁x₄² − k/x₁² + u₁(t)
x₃' = x₄
x₄' = −2x₂x₄/x₁ + (1/x₁)u₂(t).

Two Inverted Pendulums

47. θ₁'' = (mg + 1)θ₁ + mgθ₂ − u(t)
θ₂'' = mgθ₁ + (mg + 1)θ₂ − u(t)

Letting

x₁ = θ₁, x₂ = θ₁', x₃ = θ₂, x₄ = θ₂',

we have the first-order linear system

x₁' = x₂
x₂' = (mg + 1)x₁ + mgx₃ − u(t)
x₃' = x₄
x₄' = mgx₁ + (mg + 1)x₃ − u(t).

In matrix form this becomes

x' = [[0, 1, 0, 0], [mg + 1, 0, mg, 0], [0, 0, 0, 1], [mg, 0, mg + 1, 0]] x + [0; −u(t); 0; −u(t)].

Suggested Journal Entry

48. Student Project

CHAPTER 5 Linear Transformations

5.1 Linear Transformations

Note: Many different arguments may be used to prove nonlinearity; our solutions to Problems 1–23 provide a sampling.

Checking Linearity

1. T(x, y) = xy

If u = [u₁, u₂], v = [v₁, v₂],

T(u + v) = (u₁ + v₁)(u₂ + v₂)
T(u) + T(v) = u₁u₂ + v₁v₂.

We see that T(u + v) ≠ T(u) + T(v), so T is not a linear transformation.
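A single numeric counterexample also settles it; the sample vectors u = (1, 2), v = (3, 4) below are our own choice:

```python
# Concrete failure of additivity for T(x, y) = xy
def T(v):
    x, y = v
    return x * y

u, v = (1.0, 2.0), (3.0, 4.0)
lhs = T((u[0] + v[0], u[1] + v[1]))   # T(u + v) = (1+3)(2+4)
rhs = T(u) + T(v)                      # u1*u2 + v1*v2 = 2 + 12
```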

2. T(x, y) = (x + y, 2y)

We can write this transformation in matrix form as

T(x, y) = [[1, 1], [0, 2]] [x; y] = [x + y; 2y].

Hence, T is a linear transformation.

3. T(x, y) = (xy, 2y)

If we let u = (u₁, u₂), we have

cT(u) = cT(u₁, u₂) = c(u₁u₂, 2u₂) = (cu₁u₂, 2cu₂)

and

T(cu) = T(cu₁, cu₂) = (c²u₁u₂, 2cu₂).

Hence cT(u) ≠ T(cu), so T is not a linear transformation.

4. T(x, y) = (x, 2, x + y)

Note that T(0, 0) = (0, 2, 0). Linear transformations always map the zero vector into the zero vector (in their respective spaces), so T is not a linear transformation.

5. T(x, y) = (x, 0, 0)

We let u = [u₁, u₂], v = [v₁, v₂], so

T(u + v) = T(u₁ + v₁, u₂ + v₂) = (u₁ + v₁, 0, 0) = (u₁, 0, 0) + (v₁, 0, 0) = T(u) + T(v)

and

cT(u) = c(u₁, 0, 0) = (cu₁, 0, 0) = T(cu).

Hence, T is a linear transformation from R² to R³.

6. T(x, y) = (1, x, y, 1)

Because T does not map the zero vector [0, 0] ∈ R² into the zero vector [0, 0, 0, 0] ∈ R⁴, T is not a linear transformation.

7. T(f) = f(0)

If f and g are continuous functions on [0, 1], then

T(f + g) = (f + g)(0) = f(0) + g(0) = T(f) + T(g)

and

T(cf) = (cf)(0) = cf(0) = cT(f).

Hence, T is a linear transformation.

8. T(f) = −f

If f and g are continuous functions on [0, 1], then

T(f + g) = −(f + g) = −f − g = T(f) + T(g)

and

T(cf) = −(cf) = c(−f) = cT(f).

Hence, T is a linear transformation.


9. T(f) = tf'(t)

If f and g are differentiable functions on [0, 1], then

T(f + g) = t[f(t) + g(t)]' = tf'(t) + tg'(t) = T(f) + T(g)

and

T(cf) = t(cf(t))' = ctf'(t) = cT(f).

Hence, T is a linear transformation.

10. T(f) = f'' + 2f' + 3f

If we are given that f and g are continuous functions that have two continuous derivatives, then

T(f + g) = (f + g)'' + 2(f + g)' + 3(f + g) = (f'' + 2f' + 3f) + (g'' + 2g' + 3g) = T(f) + T(g)

and

T(cf) = (cf)'' + 2(cf)' + 3(cf) = c(f'' + 2f' + 3f) = cT(f).

Hence, T is a linear transformation.

11. T(at² + bt + c) = 2at + b

If we introduce the two vectors

p = a₁t² + b₁t + c₁
q = a₂t² + b₂t + c₂,

then

T(p + q) = T((a₁ + a₂)t² + (b₁ + b₂)t + (c₁ + c₂)) = 2(a₁ + a₂)t + (b₁ + b₂) = (2a₁t + b₁) + (2a₂t + b₂) = T(p) + T(q)

and

T(kp) = T(ka₁t² + kb₁t + kc₁) = 2ka₁t + kb₁ = k(2a₁t + b₁) = kT(p).

Hence, the derivative transformation defined on P₂ is a linear transformation.

12. T(at³ + bt² + ct + d) = a + b

If we introduce the two vectors

p = a₁t³ + b₁t² + c₁t + d₁
q = a₂t³ + b₂t² + c₂t + d₂,

then

T(p + q) = T((a₁ + a₂)t³ + (b₁ + b₂)t² + (c₁ + c₂)t + (d₁ + d₂)) = (a₁ + a₂) + (b₁ + b₂) = (a₁ + b₁) + (a₂ + b₂) = T(p) + T(q)

and

T(kp) = T(ka₁t³ + kb₁t² + kc₁t + kd₁) = ka₁ + kb₁ = k(a₁ + b₁) = kT(p).

Hence, the transformation defined on P₃ is a linear transformation.

13. T(A) = A^T. If we introduce two 2 × 2 matrices B and C, we have

T(B + C) = (B + C)^T = B^T + C^T = T(B) + T(C)

and

T(kB) = (kB)^T = kB^T = kT(B).

Hence, the transformation defined on M_22 is a linear transformation.
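The transpose argument of Problem 13 can be illustrated numerically; the following is our own sketch (the helper names are not from the text).

```python
# Check both linearity properties of the transpose map on 2x2 matrices.

def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def madd(M, N):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(M, N)]

def mscale(k, M):
    return [[k * a for a in r] for r in M]

B = [[1, 2], [3, 4]]
C = [[0, -1], [5, 2]]

assert transpose(madd(B, C)) == madd(transpose(B), transpose(C))  # (B+C)^T = B^T + C^T
assert transpose(mscale(3, B)) == mscale(3, transpose(B))         # (kB)^T = k B^T
```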

14. T([a b; c d]) = det [a b; c d] = ad − bc

Letting

A = [a b; c d]

be an arbitrary vector, we show the homogeneity property T(kA) = kT(A) fails because

T(kA) = det [ka kb; kc kd] = k^2 ad − k^2 cb = k^2 det A = k^2 T(A) ≠ kT(A)

when k ≠ 0, 1 and det A ≠ 0. Hence, T is not a linear transformation.

15. T([a b; c d]) = Tr [a b; c d]

Let

A = [a_11 a_12; a_21 a_22],  B = [b_11 b_12; b_21 b_22]

so that

A + B = [a_11 + b_11  a_12 + b_12; a_21 + b_21  a_22 + b_22].

Then

T(A + B) = (a_11 + b_11) + (a_22 + b_22) = (a_11 + a_22) + (b_11 + b_22) = T(A) + T(B)

and

T(kA) = Tr [ka kb; kc kd] = ka + kd = k(a + d) = kT(A).

Hence, T is a linear transformation on M_22.


16. T(x) = Ax

T(x + y) = A(x + y) = Ax + Ay = T(x) + T(y)

and T(kx) = A(kx) = k(Ax) = kT(x).

Hence, T is a linear transformation.

Integration

17. T(kf) = ∫_a^b kf(t) dt = k ∫_a^b f(t) dt = kT(f)

T(f + g) = ∫_a^b [f(t) + g(t)] dt = ∫_a^b f(t) dt + ∫_a^b g(t) dt = T(f) + T(g).

Hence, T is a linear transformation.
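A small numerical illustration of Problem 17 (our own sketch; we let a midpoint Riemann sum stand in for the exact integral, and the sample functions are our choice):

```python
# Linearity of T(f) = integral of f over [a, b], checked with midpoint sums.
import math

def integrate(f, a, b, n=10000):
    """Midpoint Riemann sum approximation to the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 0.0, 1.0
f = math.sin
g = math.exp
k = 4.0

lhs = integrate(lambda t: f(t) + g(t), a, b)
rhs = integrate(f, a, b) + integrate(g, a, b)
assert abs(lhs - rhs) < 1e-9                  # additivity

assert abs(integrate(lambda t: k * f(t), a, b) - k * integrate(f, a, b)) < 1e-9  # homogeneity
```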

Linear Systems of DEs

18. T(x, y) = (x′ − y, 2x + y)

T((x_1, y_1) + (x_2, y_2)) = T(x_1 + x_2, y_1 + y_2) = ((x_1 + x_2)′ − (y_1 + y_2), 2(x_1 + x_2) + (y_1 + y_2))
  = (x_1′ + x_2′ − y_1 − y_2, 2x_1 + 2x_2 + y_1 + y_2)
  = (x_1′ − y_1, 2x_1 + y_1) + (x_2′ − y_2, 2x_2 + y_2) = T(x_1, y_1) + T(x_2, y_2)

T(c(x, y)) = T(cx, cy) = ((cx)′ − cy, 2(cx) + cy)
  = (cx′ − cy, 2cx + cy) = c(x′ − y, 2x + y)
  = cT(x, y)

19. T(x, y) = (x′ + y, y′ − 2x + y)

T(x_1 + x_2, y_1 + y_2) = ((x_1 + x_2)′ + (y_1 + y_2), (y_1 + y_2)′ − 2(x_1 + x_2) + (y_1 + y_2))
  = (x_1′ + x_2′ + y_1 + y_2, y_1′ + y_2′ − 2x_1 − 2x_2 + y_1 + y_2)
  = (x_1′ + y_1, y_1′ − 2x_1 + y_1) + (x_2′ + y_2, y_2′ − 2x_2 + y_2)
  = T(x_1, y_1) + T(x_2, y_2)

T(cx, cy) = ((cx)′ + cy, (cy)′ − 2(cx) + cy)
  = (c(x′ + y), c(y′ − 2x + y))
  = c(x′ + y, y′ − 2x + y) = cT(x, y)

Laying Linearity on the Line

20. T(x) = |x|

T(x + y) = |x + y| ≠ |x| + |y| = T(x) + T(y)

in general (for instance, |1 + (−1)| = 0 while |1| + |−1| = 2), so T(x + y) ≠ T(x) + T(y). Hence, T is not a linear transformation.

21. T(x) = ax + b

T(kx) = a(kx) + b = akx + b, while kT(x) = k(ax + b) = akx + kb,

so T(kx) ≠ kT(x) when b ≠ 0 and k ≠ 1. Hence, T is not a linear transformation.

22. T(x) = 1/(ax + b)

Not linear, because when b ≠ 0 the zero vector does not map into the zero vector: T(0) = 1/b ≠ 0. Even when b = 0 the transformation is not linear, because T(0) = 1/(a · 0) is undefined, so again the zero vector (the real number zero) cannot map into the zero vector (the real number zero).

23. T(x) = x^2

Because

T(2 + 3) = T(5) = 25
T(2) + T(3) = 4 + 9 = 13,

we have that T is not linear. (You can also find examples where the property T(cx) = cT(x) fails.)

24. T(x) = sin x

Because T(kx) = sin(kx) and kT(x) = k sin x, we have T(kx) ≠ kT(x) in general, so T is not a linear transformation. We could also simply note that

T(π/2 + π/2) = sin π = 0

but

T(π/2) + T(π/2) = sin(π/2) + sin(π/2) = 1 + 1 = 2.

25. T(x) = −3x/(2 + π)

Finally, we have a linear transformation. Any mapping of the form T(x) = ax, where a is a nonzero constant, is a linear transformation because

T(x + y) = a(x + y) = ax + ay = T(x) + T(y)
T(kx) = a(kx) = k(ax) = kT(x).

In this problem we have the nonzero constant a = −3/(2 + π).


Geometry of a Linear Transformation

26. Direct computation: the vectors [x, 0] for x real constitute the x-axis, and because [x, 0] maps into itself, the x-axis maps into itself.

27. Direct computation: the vector [0, y] lies on the y-axis and its image [2y, y] lies on the line y = x/2, so the transformation maps vectors on the y-axis onto vectors on the line y = x/2.

28. Direct computation: the transformation T maps points (x, y) into (x + 2y, y). For example, the unit square with corners (0, 0), (1, 0), (0, 1), and (1, 1) maps into the parallelogram with corners (0, 0), (1, 0), (2, 1), and (3, 1). This transformation is called a shear mapping in the x direction.

Geometric Interpretations in R^2

29. T(x, y) = (x, −y)

This map reflects points about the x-axis. A matrix representation is

[1  0]
[0 −1].

30. T(x, y) = (x, 0)

This map projects points to the x-axis. A matrix representation is

[1 0]
[0 0].

31. T(x, y) = (x, x)

This map projects points vertically to the 45-degree line y = x. A matrix representation is

[1 0]
[1 0].

Composition of Linear Transformations

32. (ST)(u + v) = S(T(u + v)) = S(T(u) + T(v)) = S(T(u)) + S(T(v)) = (ST)(u) + (ST)(v)

(ST)(cu) = S(T(cu)) = S(cT(u)) = cS(T(u)) = c(ST)(u)
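The composition argument of Problem 32 can be made concrete on R^2 (our own sketch; the two sample matrices S and T below are our choice, not from the text): the composition is again linear, and it is represented by the matrix product.

```python
# Composition of two linear maps on R^2 equals the matrix product.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[0, -1], [1, 0]]   # 90-degree counterclockwise rotation
T = [[1, 2], [0, 1]]    # shear in the x-direction

u = [3, -1]
v = [2, 5]

# (S o T)(u + v) = (S o T)(u) + (S o T)(v)
w = [u[0] + v[0], u[1] + v[1]]
st_u = matvec(S, matvec(T, u))
st_v = matvec(S, matvec(T, v))
assert matvec(S, matvec(T, w)) == [a + b for a, b in zip(st_u, st_v)]

# S o T is represented by the product matrix ST
assert matvec(matmul(S, T), u) == st_u
```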

Find the Standard Matrix

33. T(x, y) = x + 2y

T maps the point (x, y) ∈ R^2 into the real number x + 2y ∈ R. In matrix form,

T(x, y) = [1 2][x; y] = x + 2y.

34. T(x, y) = (y, −x)

T maps the point (x, y) ∈ R^2 into the point (y, −x) ∈ R^2. In matrix form,

T(x, y) = [0 1; −1 0][x; y] = [y; −x].

35. T(x, y) = (x + 2y, 2x − y)

T maps the point (x, y) ∈ R^2 into the point (x + 2y, 2x − y) ∈ R^2. In matrix form,

T(x, y) = [1 2; 2 −1][x; y] = [x + 2y; 2x − y].


36. T(x, y) = (x + 2y, 2x − y, y)

T maps the point (x, y) ∈ R^2 in two dimensions into the new point T(x, y) = (x + 2y, 2x − y, y) ∈ R^3. In matrix form, the linear transformation T can be written

T(x, y) = [1 2; 2 −1; 0 1][x; y] = [x + 2y; 2x − y; y].

37. T(x, y, z) = (x + 2y, 2x − y, −x + y − 2z)

T maps (x, y, z) ∈ R^3 into (x + 2y, 2x − y, −x + y − 2z) ∈ R^3. In matrix form,

T(x, y, z) = [1 2 0; 2 −1 0; −1 1 −2][x; y; z] = [x + 2y; 2x − y; −x + y − 2z].

38. T(v_1, v_2, v_3) = v_1 + v_3

T maps the point (v_1, v_2, v_3) ∈ R^3 into the real number T(v_1, v_2, v_3) = v_1 + v_3 ∈ R. In matrix form,

T(v_1, v_2, v_3) = [1 0 1][v_1; v_2; v_3] = v_1 + v_3.

39. T(v_1, v_2, v_3) = (v_1 + 2v_2, v_3, −v_1 + 4v_2 + 3v_3)

T maps (v_1, v_2, v_3) ∈ R^3 into (v_1 + 2v_2, v_3, −v_1 + 4v_2 + 3v_3) ∈ R^3. In matrix form,

T(v_1, v_2, v_3) = [1 2 0; 0 0 1; −1 4 3][v_1; v_2; v_3] = [v_1 + 2v_2; v_3; −v_1 + 4v_2 + 3v_3].

40. T(v_1, v_2, v_3) = (v_2, v_3, −v_1)

T maps the point (v_1, v_2, v_3) ∈ R^3 into (v_2, v_3, −v_1) ∈ R^3. In matrix form,

T(v_1, v_2, v_3) = [0 1 0; 0 0 1; −1 0 0][v_1; v_2; v_3] = [v_2; v_3; −v_1].
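The standard matrices of Problems 33 to 40 all come from one recipe: apply T to the standard basis vectors and use the images as columns. A sketch of that recipe (our own code, using the map of Problem 40 as the example):

```python
# Build the standard matrix of T(v1, v2, v3) = (v2, v3, -v1) from the
# images of the standard basis vectors.

def T(v):
    return [v[1], v[2], -v[0]]

basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
columns = [T(e) for e in basis]
# column j of the standard matrix is T(e_j)
A = [[columns[j][i] for j in range(3)] for i in range(3)]

assert A == [[0, 1, 0], [0, 0, 1], [-1, 0, 0]]

# sanity check: A v reproduces T(v) for a sample vector
v = [2, 5, -3]
Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
assert Av == T(v)
```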


Mapping and Images

41. T(x, y) = (y, −x)

T maps a vector [x, y] ∈ R^2 into the vector [y, −x] ∈ R^2. For u = [0, 0],

T(u) = T([0, 0]) = [0, −0] = [0, 0].

Setting

T([x, y]) = [y, −x] = w = [0, 0]

yields [x, y] = [0, 0].

42. T(x, y) = (x + y, x)

T maps a vector [x, y] ∈ R^2 into the vector [x + y, x] ∈ R^2. For u = [1, 0],

T(u) = T([1, 0]) = [1 + 0, 1] = [1, 1].

Setting

T([x, y]) = [x + y, x] = w = [3, 1]

yields x + y = 3, x = 1, which has the solution x = 1, y = 2, or [x, y] = [1, 2].

43. T(x, y, z) = (x, y + z)

T maps a vector [x, y, z] ∈ R^3 into the vector [x, y + z] ∈ R^2. For u = [0, 1, 2],

T(u) = T([0, 1, 2]) = [0, 3].

Setting

T([x, y, z]) = [x, y + z] = w = [1, 2]

yields x = 1, y + z = 2, which has the solution x = 1, y = 2 − α, z = α, where α is any real number. These points form a line in R^3, {(1, 2 − α, α), α ∈ R}.

44. T(u_1, u_2) = (u_1, u_1 + 2u_2)

T maps a vector [u_1, u_2] ∈ R^2 into the vector [u_1, u_1 + 2u_2] ∈ R^2. For u = [1, 2],

T(u) = T([1, 2]) = [1, 1 + 2(2)] = [1, 5].

Setting

T([u_1, u_2]) = [u_1, u_1 + 2u_2] = w = [1, 3]

yields u_1 = 1, u_1 + 2u_2 = 3, which yields u_1 = 1, u_2 = 1.


45. T(u_1, u_2) = (u_1, u_1 + u_2, u_1 − u_2)

T maps a vector [u_1, u_2] ∈ R^2 into the vector [u_1, u_1 + u_2, u_1 − u_2] ∈ R^3. For u = [1, 1],

T(u) = T([1, 1]) = [1, 1 + 1, 1 − 1] = [1, 2, 0].

Setting

T([u_1, u_2]) = [u_1, u_1 + u_2, u_1 − u_2] = w = [1, 1, 0]

yields u_1 = 1, u_1 + u_2 = 1, u_1 − u_2 = 0, which has no solutions. In other words, no vectors [u_1, u_2] ∈ R^2 map into [1, 1, 0] under the linear transformation T.

46. T(u_1, u_2) = (u_2, u_1, u_1 + u_2)

T maps a vector [u_1, u_2] ∈ R^2 into [u_2, u_1, u_1 + u_2] ∈ R^3. For u = [1, 2],

T(u) = T([1, 2]) = [2, 1, 1 + 2] = [2, 1, 3].

Setting

T([u_1, u_2]) = [u_2, u_1, u_1 + u_2] = w = [2, 1, 3]

yields u_2 = 2, u_1 = 1, u_1 + u_2 = 3, which yields u_1 = 1, u_2 = 2.

47. T(u_1, u_2, u_3) = (u_1 + u_3, u_2 − u_3)

T maps a vector [u_1, u_2, u_3] ∈ R^3 into [u_1 + u_3, u_2 − u_3] ∈ R^2. For u = [1, 1, 1],

T(u) = T([1, 1, 1]) = [1 + 1, 1 − 1] = [2, 0].

Setting

T([u_1, u_2, u_3]) = [u_1 + u_3, u_2 − u_3] = w = [0, 0]

yields u_1 + u_3 = 0, u_2 − u_3 = 0, which yields u_1 = −u_3, u_2 = u_3, u_3 arbitrary. In other words, the linear transformation T maps the entire line {(−α, α, α), α ∈ R} in R^3 into [0, 0] ∈ R^2.

48. T(u_1, u_2, u_3) = (u_1, u_2, u_1 + u_3)

T maps a vector [u_1, u_2, u_3] ∈ R^3 into [u_1, u_2, u_1 + u_3] ∈ R^3. For u = [1, 2, 1],

T(u) = T([1, 2, 1]) = [1, 2, 1 + 1] = [1, 2, 2].

Setting

T([u_1, u_2, u_3]) = [u_1, u_2, u_1 + u_3] = w = [0, 0, 1]

yields u_1 = 0, u_2 = 0, u_1 + u_3 = 1, which yields u_1 = 0, u_2 = 0, u_3 = 1, so [0, 0, 1] maps into itself.
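The "no solution" outcome in Problem 45 is worth seeing mechanically; here is our own short illustration (not from the text) of why T(u_1, u_2) = (u_1, u_1 + u_2, u_1 − u_2) can never equal (1, 1, 0): the first two components determine u_1 and u_2, and the third then fails.

```python
# Problem 45: the first equation forces u1 = 1, the second forces
# u2 = 0, but then the third component is 1 - 0 = 1, not 0.

def T(u1, u2):
    return (u1, u1 + u2, u1 - u2)

w = (1, 1, 0)
u1 = w[0]        # first component forces u1
u2 = w[1] - u1   # second component then forces u2

assert T(u1, u2) != w    # the third equation is inconsistent
assert u1 - u2 == 1      # it reads 1, while w demands 0
```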


Transforming Areas

49. Computing Av for the four given corner points of the unit square, we find

A[0; 0] = [0; 0],  A[1; 0] = [1; 2],  A[0; 1] = [−1; 1],  A[1; 1] = [0; 3].

In other words, the original square (shown in gray) has area 1; the image is the parallelogram with vertices (0, 0), (1, 2), (0, 3), and (−1, 1) and area 3. Note: we have calculated the area of the parallelogram by visualizing it as composed of four right triangles, so the parallelogram has area 1 + 0.5 + 0.5 + 1 = 3.

50. Computing Av for (0, 0), (1, 1), (−1, 1), we get the respective points (0, 0), (0, 3), (−2, −1). Hence, the image of the original triangle (shown in gray) is the triangle with the new vertices (0, 0), (0, 3), (−2, −1). The original area is 1, and the new area is 3.

51. For the points (0, 0), (1, 0), (1, 2), (0, 2), the image is the parallelogram with vertices (0, 0), (1, 2), (−1, 4), (−2, 2). The original rectangle (shown in gray) has area 2; the new area is 6.


52. The determinant of

A = [1 −1; 2 1]

is |A| = 3. In Problems 49–51 the area of the image is always three times the area of the original figure.
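The determinant-as-area-scale observation of Problem 52 can be checked numerically; the sketch below is our own (we use the shoelace formula for polygon area, which is not part of the text's argument).

```python
# Apply A = [[1, -1], [2, 1]] to the unit square and confirm the image
# area equals |det A| times the original area.

def shoelace(pts):
    """Area of a simple polygon given its vertices in order."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

A = [[1, -1], [2, 1]]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
image = [(A[0][0] * x + A[0][1] * y, A[1][0] * x + A[1][1] * y)
         for x, y in square]

assert det_A == 3
assert shoelace(square) == 1
assert shoelace(image) == 3   # area scaled by |det A|
```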

Transforming Areas Again

53. For the square of Problem 49 we compute Bv for the four corner points of the unit square, which yields

B[0; 0] = [0; 0],  B[1; 0] = [2; −4],  B[1; 1] = [1; −1],  B[0; 1] = [−1; 3].

In other words, the image of the unit square with area 1 is the parallelogram with corners (0, 0), (2, −4), (1, −1), (−1, 3), with area 2.

For Problem 50, we compute Bv for the points (0, 0), (1, 1), (−1, 1) of the triangle; we get the points (0, 0), (1, −1), (−3, 7). Hence, the image of the original triangle with area 1 is the triangle with the new vertices shown and has area 2.


For the rectangle of Problem 51 we compute Bv for the points (0, 0), (1, 0), (1, 2), (0, 2), yielding (0, 0), (2, −4), (0, 2), (−2, 6) respectively. Hence, the image of the rectangle with area 2 is the parallelogram with area 4.

The determinant of

B = [2 −1; −4 3]

is |B| = 2; in each case the area of the transformed image is twice the area of the original figure. The determinant is a scale factor for the area.

Linear Transformations in the Plane

54. (a) (B) shear; in the x direction.

(b) (E) nonlinear; linear transformations map lines into straight lines.

(c) (C) rotation; a 90-degree rotation in the counterclockwise direction.

(d) (E) nonlinear; ( ) 0, 0 must map into ( ) 0, 0 in a linear transformation.

(e) (A) scaling (dilation or contraction); contraction in both the x and y directions.

(f) (B) shear; in the y-direction.

(g) (D) reflection; through the x-axis.

Finding the Matrices

55. J = [0 −1; 1 0] describes (C) (90° rotation in the counterclockwise direction).

56. K = [1 0; 1 1] describes (F) (shear in the y-direction).

57. L = [1 0; 0 −1] describes (G) (reflection through the x-axis).

58. M = [1/2 0; 0 1/2] describes (E) (contraction in both the x and y directions).

59. N = [1 1; 0 1] describes (A) (shear in the x-direction).


Shear Transformation

60. (a) The matrix

[1 0]
[1 1]

produces a shear in the y-direction of one unit. Figure B is a shear in the y direction.

(b) Figure A is a shear of −1 unit and would be carried out by the matrix

[ 1 0]
[−1 1].

(c) Figure C is a shear of 1 unit in the x direction; the matrix is

[1 1]
[0 1].

Another Shear Transformation

61. For a shear of 2 units in the positive x direction the matrix is

[1 2]
[0 1].

The vertices of the r-shape are at (0, 0), (0, 1), (0, 2), (−1, 2), and (1, 1). Hence,

[1 2; 0 1][0; 0] = [0; 0]
[1 2; 0 1][0; 1] = [2; 1]
[1 2; 0 1][0; 2] = [4; 2]
[1 2; 0 1][−1; 2] = [3; 2]
[1 2; 0 1][1; 1] = [3; 1].

[Figure: the sheared r-shape, with vertices (0, 0), (2, 1), (3, 1), (4, 2), (3, 2).]


Clockwise Rotation

62. A matrix that rotates points clockwise by 30° is the rotation matrix with θ = −π/6, or

Rot(−30°) = [cos(−π/6) −sin(−π/6); sin(−π/6) cos(−π/6)] = [√3/2 1/2; −1/2 √3/2].

The rotated r-shape is shown.

Pinwheel

63. (a) A negative shear of 1 in the y-direction is

[ 1 0]
[−1 1].

An easy way to see this is by observing how each point gets mapped. We have

[1 0; −1 1][x; y] = [x; −x + y] = [x; y − x].

Each point moves down by the value of its x-coordinate. In other words, the farther a point lies from the y-axis in the x-direction, the more it moves down (or up, in case x is negative). Note that for the pinwheel the line that sticks out to the right is sheared down, whereas the line that sticks out to the left (in the negative x region) is sheared up.

(b) (Rot(30°))^n = I only when n is a multiple of 12; twelve rotations of 30° give the identity matrix.
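The claim in Problem 63(b), that powers of the 30° rotation return to the identity exactly at multiples of 12, can be confirmed numerically (our own sketch, with floating-point tolerance as an assumption):

```python
# Multiply Rot(30 degrees) by itself and record which powers are
# (numerically) the identity: twelve 30-degree turns make a full turn.
import math

c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
R = [[c, -s], [s, c]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close_to_identity(M, tol=1e-9):
    I = [[1, 0], [0, 1]]
    return all(abs(M[i][j] - I[i][j]) < tol for i in range(2) for j in range(2))

P = [[1, 0], [0, 1]]
hits = []
for n in range(1, 25):
    P = matmul(P, R)
    if close_to_identity(P):
        hits.append(n)

assert hits == [12, 24]   # identity exactly at multiples of 12
```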


Flower

64. Each individual image is sheared upwards by 1 unit, so we need the matrix

[1 0]
[1 1].

We then rotate the image 24 times in either direction, each time by 360°/24 = 15°. If we go counterclockwise, we would repeatedly multiply by the matrix

Rot(15°) = [cos(π/12) −sin(π/12); sin(π/12) cos(π/12)]

24 times.

Successive Transformations

65. A matrix for a unit shear in the y-direction followed by a counterclockwise rotation of 30° would be

[√3/2 −1/2; 1/2 √3/2] (rotation) × [1 0; 1 1] (shear) = (1/2)[√3 − 1  −1; 1 + √3  √3].

The transformed r-shape is shown.
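The rotation-times-shear product of Problem 65 is easy to verify numerically; this is our own check, with floating-point tolerance as an assumption:

```python
# Verify Rot(30 deg) * Shear_y(1) = (1/2)[[sqrt(3)-1, -1], [1+sqrt(3), sqrt(3)]].
import math

r3 = math.sqrt(3)
rotation = [[r3 / 2, -1 / 2], [1 / 2, r3 / 2]]   # 30-degree counterclockwise
shear = [[1, 0], [1, 1]]                         # unit shear in the y-direction

product = [[sum(rotation[i][k] * shear[k][j] for k in range(2))
            for j in range(2)] for i in range(2)]

expected = [[(r3 - 1) / 2, -1 / 2], [(1 + r3) / 2, r3 / 2]]
assert all(abs(product[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

Note the order: the shear matrix sits on the right because it acts on the point first.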

Reflections

66. (a) A reflection about the x-axis followed by a reflection through the y-axis would be

B_y B_x = [−1 0; 0 1][1 0; 0 −1] = [−1 0; 0 −1],

which is a reflection through the origin. The transformed r-shape is shown.

[Figure: reflection of the r-shape through the origin.]


(b) A 180° rotation in the counterclockwise direction has matrix

[cos π −sin π; sin π cos π] = [−1 0; 0 −1],

which is equivalent to the steps in part (a).

Derivative and Integral Transformations

67. (a) (DI)(f) = d/dx ∫_a^x f(t) dt = f(x)

(b) (ID)(f) = ∫_a^x f′(t) dt = f(x) − f(a)
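Part (b), integrating f′ from a to x recovers f(x) − f(a), can be illustrated numerically; the sketch below is our own (f = sin is our choice, and a midpoint Riemann sum stands in for the exact integral).

```python
# Numerical check that (ID)(f) = integral of f' from a to x = f(x) - f(a).
import math

def ID(fprime, a, x, n=20000):
    """Midpoint-rule approximation of the integral of f' over [a, x]."""
    h = (x - a) / n
    return sum(fprime(a + (i + 0.5) * h) for i in range(n)) * h

a, x = 0.5, 2.0
f = math.sin
fprime = math.cos

assert abs(ID(fprime, a, x) - (f(x) - f(a))) < 1e-6
```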

(c