
Lecture Notes on ODEs

October, 2011

Chapter 1

First Examples
The purpose of this chapter is to develop some simple examples of differential equations. Later, the initial ideas illustrated here will be put into a more systematic exposition.


The Simplest Examples

The differential equation

    dx/dt = ax,    a = const,    (1.1)

is the simplest differential equation. First, what does it mean? Here x(t) is an unknown real-valued function of a real variable t, and dx/dt (or x') denotes its derivative. The equation tells us that for every value of t the equality x'(t) = ax(t) holds. The solution to (1.1) can easily be guessed: on substituting x(t) = Ke^{at} in (1.1), where K is some constant, we obtain an identity in t. Moreover, there are no other solutions. To see this, let u(t) be any other solution and compute the derivative of u(t)e^{-at}. We get

    d/dt ( u(t)e^{-at} ) = u'(t)e^{-at} - au(t)e^{-at} = ( au(t) - au(t) )e^{-at} = 0.

Therefore u(t)e^{-at} is a constant K, so u(t) = Ke^{at}. This proves our assertion. The constant K appearing in the solution is completely determined if the value u0 of the solution at a single point t0 is specified: in that case Ke^{at0} = u0. Thus, equation (1.1) has a unique solution satisfying a specified initial condition x(t0) = u0. Note that if u(t) is a solution to (1.1), then u(t - t0)

is also a solution. Without loss of generality we may then take t0 = 0 and restate (1.1) in the form of an initial value problem:

    x' = ax,    x(0) = K.    (1.2)

A solution x(t) to (1.2) must not only satisfy the first equation (1.1), but must also take on the prescribed initial value K at t = 0. We have shown that the initial value problem (1.2) has a unique solution. The constant a in equation (1.2) can be considered as a parameter: if a changes, the equation changes, and so do the solutions. Can we describe qualitatively the way the solutions change? The sign of a is crucial here. The qualitative behavior of the solutions is vividly illustrated by sketching their graphs.
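The uniqueness just proved can also be checked numerically: a crude forward Euler integration of x' = ax converges to the closed form Ke^{at}. A minimal sketch (the step size and the values of a and K are arbitrary sample choices, not from the text):

```python
import math

def euler_solve(a, K, t_end, n_steps):
    """Integrate x' = a*x with x(0) = K by the forward Euler method."""
    dt = t_end / n_steps
    x = K
    for _ in range(n_steps):
        x += dt * a * x
    return x

a, K, t_end = -1.5, 2.0, 1.0
exact = K * math.exp(a * t_end)            # the solution x(t) = K e^{at}
approx = euler_solve(a, K, t_end, 100000)
print(abs(approx - exact))                 # small: Euler tracks the unique solution
```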

[Figure 1.1]

The equation x' = ax is stable in a certain sense if a ≠ 0. More precisely, if a is replaced by another parameter sufficiently close to a, the qualitative behavior of the solutions does not change. But if a = 0, the slightest change in a leads to a radical change in the behavior of the solutions. We say that a = 0 is a bifurcation point in the one-parameter family of equations x' = ax, a ∈ R. Consider next a system of two differential equations in two unknown functions:

    x1' = a1 x1,
    x2' = a2 x2.    (1.3)

This is a very simple uncoupled system; however, many more complicated systems of two equations can be reduced to this form. We can immediately write down the solutions:

    x1(t) = K1 e^{a1 t},
    x2(t) = K2 e^{a2 t},

where K1, K2 are constants. Here K1 and K2 are determined if two initial conditions x1(t0) = u1, x2(t0) = u2 are specified. Let us consider system (1.3) from a more geometric point of view. We may think of it as a way of specifying two unknown functions x1(t) and x2(t) that define a curve

    x(t) = (x1(t), x2(t))

in the (x1, x2) plane R². The right-hand side of (1.3) expresses the tangent vector x'(t) = (x1'(t), x2'(t)) to the curve. Using vector notation,

    x' = Ax,

where Ax denotes the vector (a1 x1, a2 x2), which one should think of as being based at x = (x1, x2), for each x ∈ R². Initial conditions are of the form x(t0) = (u1, u2), where u = (u1, u2) is a given point of R². Geometrically, this means that when t = t0 the curve is required to pass through the given point u.

Figure 1.2: Vector field of x ↦ Ax, where Ax = (2x1, x2/2).

The map A : R² → R² defines a vector field on the plane: to each point x in the plane we assign the vector Ax. The case Ax = (2x1, x2/2) is visualized in Figure 1.2. The family of all solution curves, as subsets of R², is called the phase portrait of equation (1.3). Let us consider (1.3) as a dynamical system. This means that the independent variable t is interpreted as time, and the solution curve x(t) can be thought of, for example, as the path of a particle moving in the plane R². As time proceeds, the particle moves along the solution curve x(t) that satisfies the initial condition x(0) = u. At any later time t > 0 the particle will be in another position x(t). To indicate the dependence of the position on t and u we denote it by φ_t(u). Thus

    φ_t(u) = (u1 e^{a1 t}, u2 e^{a2 t}).

We can imagine particles placed at each point of the plane, all moving simultaneously, for example, dust particles under a steady wind. The solution

curves are spoken of as trajectories in this context. For each fixed t ∈ R we have a transformation assigning to each point u in the plane another point φ_t(u). This transformation, denoted φ_t : R² → R², is clearly a linear transformation, that is,

    φ_t(u + v) = φ_t(u) + φ_t(v),    φ_t(λu) = λφ_t(u),

for all vectors u, v ∈ R² and all λ ∈ R. As time proceeds, every point of the plane moves simultaneously along the trajectory passing through it. In this way the collection of maps φ_t : R² → R², t ∈ R, is a one-parameter family of transformations. This family is called the flow or dynamical system on R² determined by the vector field x ↦ Ax, and is equivalent to the system (1.3).
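The two defining properties of the flow just described, linearity of each φ_t and the one-parameter group law φ_{t+s} = φ_t ∘ φ_s, can be checked directly from the explicit formula. A sketch, with a1 = 2 and a2 = -1/2 as arbitrary sample values:

```python
import math

A1, A2 = 2.0, -0.5  # sample diagonal entries a1, a2 (illustrative choice)

def phi(t, u):
    """Flow of x1' = a1 x1, x2' = a2 x2: phi_t(u) = (u1 e^{a1 t}, u2 e^{a2 t})."""
    return (u[0] * math.exp(A1 * t), u[1] * math.exp(A2 * t))

u, v, t, s, lam = (1.0, 2.0), (-3.0, 0.5), 0.7, 1.3, 2.5

# Linearity: phi_t(u + v) = phi_t(u) + phi_t(v)
lhs = phi(t, (u[0] + v[0], u[1] + v[1]))
rhs = tuple(a + b for a, b in zip(phi(t, u), phi(t, v)))
lin_err = max(abs(a - b) for a, b in zip(lhs, rhs))
print(lin_err)

# Group law: phi_{t+s} = phi_t o phi_s (the one-parameter family structure)
grp_err = max(abs(a - b) for a, b in zip(phi(t + s, u), phi(t, phi(s, u))))
print(grp_err)
```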

Figure 1.3: Some solution curves to x' = Ax, where Ax = (2x1, x2/2).

It is seldom that differential equations come in such a simple uncoupled form. Consider, for example, the system

    x1' = 5x1 + 3x2,
    x2' = -6x1 - 4x2,    (1.4)

or, in vector notation, x' = Ax, where

    A = [  5   3 ]
        [ -6  -4 ].

Suppose that there exists an invertible matrix Q such that the matrix QAQ⁻¹ is diagonal:

    QAQ⁻¹ = diag{λ1, λ2} =: B,

where λ1, λ2 are the eigenvalues of A. Introducing the new coordinates y = Qx in R², with x = Q⁻¹y, we find

    y' = Qx' = QAx = QAQ⁻¹y,

so the new system

y' = By

is uncoupled. If the initial condition for (1.4) is x(0) = u, then we see that the solution to the initial value problem is given by

    x(t) = Q⁻¹ diag(e^{λ1 t}, e^{λ2 t}) Q u.

In the concrete case of system (1.4), we see by direct inspection that we can take

    Q = [ 2  1 ]        B = [ 2   0 ]
        [ 1  1 ],           [ 0  -1 ].

The solution to the reduced system is y(t) = (v1 e^{2t}, v2 e^{-t}), where v = (v1, v2) = Qu, while the solution to the original system is computed from x(t) = Q⁻¹y(t) to obtain:

    x1(t) = e^{2t}(2u1 + u2) - e^{-t}(u1 + u2),
    x2(t) = -e^{2t}(2u1 + u2) + 2e^{-t}(u1 + u2).    (1.5)

The phase portrait of (1.4) is sketched in Figure 1.4.

Figure 1.4: The phase portrait of (1.4).

If we compare the formula (1.5) to Figure 1.4, we see that the diagram instantly gives us the qualitative behavior of the solutions, while the formula conveys little geometric information. In fact, for many purposes it is better to forget the original equation (1.4) and work entirely with the diagonalized system, its solution, and its phase portrait.
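The whole diagonalization recipe can be verified numerically. Below, A is taken as [[5, 3], [-6, -4]], the sign pattern consistent with the eigenvalues 2 and -1 and with the matrices Q and B used in the text, and the solution formula x(t) = Q⁻¹ diag(e^{2t}, e^{-t}) Q u is checked against x' = Ax by a central difference:

```python
import numpy as np

A = np.array([[5.0, 3.0], [-6.0, -4.0]])   # signs chosen to give eigenvalues 2, -1
Q = np.array([[2.0, 1.0], [1.0, 1.0]])     # coordinate change making Q A Q^{-1} diagonal

B = Q @ A @ np.linalg.inv(Q)
print(np.round(B, 10))                      # diag(2, -1)

def x(t, u):
    """x(t) = Q^{-1} diag(e^{2t}, e^{-t}) Q u, the solution with x(0) = u."""
    return np.linalg.inv(Q) @ np.diag([np.exp(2 * t), np.exp(-t)]) @ Q @ u

u = np.array([1.0, 0.5])
t, h = 0.5, 1e-6
deriv = (x(t + h, u) - x(t - h, u)) / (2 * h)   # central-difference x'(t)
print(np.max(np.abs(deriv - A @ x(t, u))))      # ~0: the formula solves x' = Ax
```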


Linear Systems with Constant Coefficients

This section is devoted to generalizing and abstracting the previous examples. The general problem is stated, but solutions are postponed till later. Consider the following system of n differential equations:

    dx1/dt = a11 x1 + a12 x2 + ... + a1n xn,
    dx2/dt = a21 x1 + a22 x2 + ... + a2n xn,
    ...
    dxn/dt = an1 x1 + an2 x2 + ... + ann xn.    (1.6)


Here aij, i, j = 1, ..., n, are n² constants (real numbers), while each xi denotes an unknown real-valued function of a real variable t. At this point we are not trying to solve (1.6); rather, we want to place it in a geometric and algebraic setting in order to understand what a solution means. At the most primitive level, a solution of (1.6) is a set of n differentiable real-valued functions xi(t) that make (1.6) true. A candidate for a solution is a curve in Rⁿ,

    x(t) = (x1(t), x2(t), ..., xn(t)).

By this we mean a map x : R → Rⁿ, described in terms of coordinates by the above formula. If each function xi(t) is differentiable, then the map x is called differentiable; its derivative is defined to be

    dx/dt = x'(t) = (x1'(t), x2'(t), ..., xn'(t)).

It has a natural geometric interpretation as the vector v(t) based at x(t), which is a translate of x'(t). This vector is called the tangent vector to the curve at t (or at x(t)). If we imagine t as denoting time, then the length |x'(t)| of the tangent vector is interpreted physically as the speed of a particle describing the curve x(t). To write (1.6) in an abbreviated form we use the matrix

    A = (aij) = [ a11  a12  ...  a1n ]
                [ a21  a22  ...  a2n ]
                [ ...            ... ]
                [ an1  an2  ...  ann ].

With this notation, (1.6) is rewritten as

    x' = Ax.    (1.7)

Thus the system (1.6) can be considered as a single vector differential equation (1.7). We think of the map A : Rⁿ → Rⁿ as a vector field on Rⁿ. To each point x ∈ Rⁿ we assign a vector based at x which is a translate of Ax. Then a solution of (1.7) is a curve x : R → Rⁿ whose tangent vector at any point x(t) is the vector Ax(t). See again Figure 1.4.


Problem 1.1. Each of the matrices

    A = [ a11  a12 ] = (aij)
        [ a21  a22 ]

given below defines a vector field on R², assigning to x = (x1, x2) ∈ R² the vector Ax = (a11 x1 + a12 x2, a21 x1 + a22 x2) based at x. For each matrix, draw enough of the vectors to get a feeling for what the vector field looks like. Then sketch the phase portrait of the corresponding differential equation x' = Ax, guessing where necessary.

    (a) [ 1  0 ]   (b) [ 1  0 ]   (c) [ 0  1 ]   (d) [ 1  1 ]
        [ 0  1 ]       [ 0  1 ]       [ 1  0 ]       [ 1  1 ]

Solution.





Chapter 2

Newton's Equation and Kepler's Law

We develop in this chapter the earliest important examples of differential equations, which in fact are connected with the origins of calculus. These equations were used by Newton to derive and unify the three laws of Kepler. We shall be working with a particle moving in a force field F. Mathematically, F is a vector field on the configuration space of the particle, which in our case we suppose to be Cartesian three-space R³. Thus F is a map F : R³ → R³ that assigns to a point x ∈ R³ another point F(x) ∈ R³. From a mathematical point of view, F(x) is thought of as a vector based at x. From a physical point of view, F(x) is the force which the field exerts on a particle located at x. The example of a force field we shall be most concerned with is the gravitational field of the sun. The connection between the physical concept of force field and the mathematical concept of differential equation is Newton's second law: F = ma. This law asserts that a particle in a force field moves in such a way that its acceleration (vector) and the force have the same direction, while the magnitude of the force equals the product of the magnitude of the acceleration and the mass m of the particle. If x(t) denotes the position vector of the particle at time t, then the acceleration vector is the second derivative of x(t) with respect to time. Thus we obtain the second order differential equation

    x'' = (1/m) F(x).

In Newtonian mechanics the mass m is always assumed to be a fixed positive constant. In what follows we shall use Newton's law of gravitation to derive the exact form of F. First, we take a little detour through some basic background material.


Harmonic Oscillators

We consider a particle of mass m moving in one dimension. Its position x(t) at time t is some unknown function x : R → R. Suppose that the force on the particle at the point x is given by -mp²x, where p ∈ R. Then, according to Newton's second law,

    x'' + p²x = 0.

This model is called the harmonic oscillator, and the above equation is the equation of the harmonic oscillator in one dimension. As an example of the harmonic oscillator we may think of a weight attached to a spring; then x(t) stands for the displacement of the spring from its equilibrium. In this case the explicit form of F assumed above is called Hooke's law. Another common example of a harmonic oscillator is a simple pendulum moving in a plane, with x(t) denoting its displacement from the equilibrium by a small angle. The solution to our equation can easily be guessed to be

    x(t) = A cos pt + B sin pt

for any (real) constants A, B. Note that in a second order differential equation two independent constants appear in the general solution. We also see that x(t) satisfies the initial conditions x(0) = A and x'(0) = pB. There are no other solutions satisfying these initial conditions, as we shall later see. The solution can be put in the following form:

    x(t) = a cos(pt + t0),

where a = sqrt(A² + B²) is called the amplitude, p the frequency, and t0 the phase of the harmonic oscillation x(t).
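The guessed solution and its initial conditions are easy to confirm numerically; the values of p, A, B below are arbitrary samples, not from the text:

```python
import math

p, A, B = 3.0, 1.5, -0.75   # sample frequency and constants

def x(t):
    return A * math.cos(p * t) + B * math.sin(p * t)

# Initial conditions: x(0) = A and x'(0) = pB (derivative via central difference)
h = 1e-6
xp0 = (x(h) - x(-h)) / (2 * h)
print(x(0.0), xp0)

# The oscillator equation x'' + p^2 x = 0 at a sample time
t, h2 = 0.4, 1e-5
xpp = (x(t + h2) - 2 * x(t) + x(t - h2)) / h2**2
print(abs(xpp + p**2 * x(t)))   # ~0 up to finite-difference error
```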


Conservative Force Fields

A vector field F : R³ → R³ is called a force field if the vector F(x) assigned to the point x is interpreted as a force acting on a particle placed at x. Many force fields appearing in physics arise in the following way. There is a C¹ function V : R³ → R such that

    F(x) = -( ∂V/∂x1 (x), ∂V/∂x2 (x), ∂V/∂x3 (x) ) = -grad V(x).

The negative sign is traditional. Such a force field is called conservative. The function V is called the potential energy function.

The planar harmonic oscillation corresponds to the force field F : R² → R², F(x) = -mkx.

This field is conservative, with potential energy

    V(x) = (1/2) mk|x|²,

as is easily verified. For any moving particle x(t) of mass m, the kinetic energy is defined to be

    T = (1/2) m|x'(t)|².

Here x'(t) is interpreted as the velocity vector at time t; its magnitude |x'(t)| is the speed at time t. If we consider the function x : R → R³ as describing a curve in R³, then x'(t) is the tangent vector to the curve at x(t). For a particle moving in a conservative force field F = -grad V, the potential energy at x is defined to be V(x). Note that whereas the kinetic energy depends on the velocity, the potential energy is a function of position. The total energy is

    E = T + V.

If x(t) is the trajectory of a particle moving in a conservative force field, then E is a real-valued function of time:

    E(t) = (1/2) m|x'(t)|² + V(x(t)).

Theorem 2.1 (Conservation of Energy). Let x(t) be the trajectory of a particle moving in a conservative force field F = -grad V. Then the total energy E is independent of time.

Proof. It needs to be shown that dE/dt = 0. It follows from calculus that

    d/dt |x'|² = 2⟨x', x''⟩,

and also that

    d/dt V(x) = ⟨grad V(x), x'⟩,

where ⟨ , ⟩ stands for the scalar product. A direct substitution in the equation for energy leads to

    dE/dt = ⟨mx'' + grad V(x), x'⟩ = 0,

which holds true in view of Newton's second law and our hypothesis.
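Conservation of energy can be watched numerically: integrating Newton's equation for the planar oscillator potential V(x) = (1/2)mk|x|² with a classical Runge-Kutta scheme, the total energy stays constant up to discretization error. The parameter values are arbitrary sample choices:

```python
m, k = 2.0, 3.0                      # sample mass and spring constant

def accel(x):
    # m x'' = F(x) = -grad V = -m k x, so x'' = -k x
    return (-k * x[0], -k * x[1])

def rk4_step(x, v, dt):
    """One classical Runge-Kutta step for the system x' = v, v' = accel(x)."""
    def f(state):
        (x1, x2), (v1, v2) = state
        a1, a2 = accel((x1, x2))
        return ((v1, v2), (a1, a2))
    def add(s, ds, c):
        return (tuple(p + c * dp for p, dp in zip(s[0], ds[0])),
                tuple(q + c * dq for q, dq in zip(s[1], ds[1])))
    s = (x, v)
    k1 = f(s); k2 = f(add(s, k1, dt / 2)); k3 = f(add(s, k2, dt / 2)); k4 = f(add(s, k3, dt))
    new = s
    for ki, c in ((k1, dt / 6), (k2, dt / 3), (k3, dt / 3), (k4, dt / 6)):
        new = add(new, ki, c)
    return new

def energy(x, v):
    return 0.5 * m * (v[0]**2 + v[1]**2) + 0.5 * m * k * (x[0]**2 + x[1]**2)

x, v = (1.0, 0.0), (0.0, 0.5)
E0 = energy(x, v)
for _ in range(2000):
    x, v = rk4_step(x, v, 0.005)
print(abs(energy(x, v) - E0))        # stays near zero: E is an integral of the motion
```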



Central Force Fields

A force field is called central if, for every x, the vector F(x) points along the line through x and the origin. In other words, F(x) is always a scalar multiple of x, the coefficient depending on x:

    F(x) = λ(x)x.

Lemma 2.2. Let F be a conservative force field. Then the following statements are equivalent:
a) F is central;
b) F(x) = f(|x|)x;
c) F(x) = -grad V(|x|).

Proof. Items a) and b) are equivalent formulations of the same concept. To show that c) implies b) we need only compute the gradient of V(|x|):

    ∂V/∂xj = V'(|x|) xj/|x|,    j = 1, 2, 3.

Thus, we may take f(|x|) = -V'(|x|)/|x|. To show that b) implies c) we must prove that V(x) is constant on each sphere

    Sr = {x ∈ R³ : |x| = r},    r > 0.

To do so, let J ⊂ R be an interval and u : J → Sr a C¹ map. We shall show that V is constant on any such curve u in Sr. We have

    d/dt V(u(t)) = ⟨grad V(u(t)), u'(t)⟩ = -⟨f(|u(t)|)u(t), u'(t)⟩ = -(f(|u(t)|)/2) d/dt |u(t)|² = 0,

since |u(t)| = r is constant.

Let x(t) be the path of a particle moving under the influence of a central force field, not necessarily conservative. Such a particle always remains in a fixed plane. Indeed, let P be the plane through the origin containing the two vectors x(t) and x'(t) for some time t. Then the normal vector to that plane, x(t) × x'(t) ≠ 0, remains fixed, as seen from

    d/dt (x × x') = x' × x' + x × x'' = 0,

since x'' is parallel to x for a central field. If x(t) and x'(t) are parallel for all time, the plane P cannot be specified. In this case the particle moves along a fixed straight line through the origin.


We restrict attention to a conservative central force field in a plane, which we take to be the Cartesian plane R². Thus x now denotes a point of R², the potential energy V is defined on R², and

    F(x) = -grad V(x) = -( ∂V/∂x1, ∂V/∂x2 ).

Introduce polar coordinates (r, θ), with r = |x|. Define the angular momentum of the particle to be

    h = mr²θ',

where θ' is the time derivative of the angular coordinate of the particle.

Theorem 2.3 (Conservation of Angular Momentum). For a particle moving in a central force field the angular momentum remains constant.

Proof. Let i(t) be the unit vector in the direction of x(t), so x(t) = r(t)i(t), and let j(t) be the unit vector obtained from i(t) by a 90° rotation. A computation shows that

    x' = r'i + rθ'j.    (2.1)

Indeed, we have

    di/dt = θ'j,    dj/dt = -θ'i,

since i = i(θ) and j = j(θ). Differentiating again yields

    x'' = (r'' - rθ'²)i + (1/r) d/dt(r²θ') j.

If the force is central, however, x and x'' are parallel. Therefore the component of x'' along j must be 0, that is, d/dt(r²θ') = 0, and h = mr²θ' is constant.

We can now prove one of Kepler's laws. Let A(t) denote the area swept out by the vector x(t) from time t0 to t. In polar coordinates,

    dA = (1/2) r² dθ.

We define the areal velocity to be A' = (1/2) r²θ', the rate at which the position vector sweeps out area. Kepler observed that the line segment joining a planet to the sun sweeps out equal areas in equal times, which we interpret to mean A' = constant. We have proved this more generally for any particle moving in a conservative central force field, and have shown that it is a consequence of conservation of angular momentum.
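The mechanism behind Theorem 2.3, namely that a central field exerts no torque about the origin, so x × x'' = 0 and hence h is constant, can be checked directly. The inverse-square field below is just a sample central force, and the Euler integration is a crude illustration:

```python
def accel(x):
    # Sample central field: inverse-square attraction F(x) = -x/|x|^3, with m = 1
    r3 = (x[0]**2 + x[1]**2) ** 1.5
    return (-x[0] / r3, -x[1] / r3)

def cross2(u, v):
    """Planar cross product u1*v2 - u2*v1 (out-of-plane component of u x v)."""
    return u[0] * v[1] - u[1] * v[0]

x, v = (1.0, 0.5), (-0.3, 0.8)
torque = cross2(x, accel(x))        # x x x'' = 0 for a central field: no torque
print(torque)

# Hence h = m(x1 v2 - x2 v1) = m r^2 theta' is conserved; under a crude Euler
# integration it should drift only at the level of the discretization error.
m, dt = 1.0, 1e-4
h0 = cross2(x, v)
for _ in range(1000):
    a = accel(x)
    x, v = (x[0] + dt * v[0], x[1] + dt * v[1]), (v[0] + dt * a[0], v[1] + dt * a[1])
drift = abs(m * cross2(x, v) - m * h0)
print(drift)                         # small
```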



A state of a physical system is the information characterizing it at a given time. In particular, the state of the harmonic oscillator is the position and velocity of the associated particle.

We may rewrite Newton's equation

    mx'' = F(x)    (2.2)

as a first order system in terms of the position x and the velocity v = x':

    x' = v,
    v' = (1/m) F(x).    (2.3)

A solution to (2.3) is a curve t ↦ (x(t), v(t)) in the state space R³ × R³, also called the phase space, such that

    x'(t) = v(t),    v'(t) = (1/m) F(x(t))    for all t.

The map R³ × R³ → R³ × R³ that sends (x, v) to (v, (1/m)F(x)) is a vector field on the space of states, and this vector field defines the equation (2.3). A solution (x(t), v(t)) to (2.3) gives the passage of the state of the system in time. Now we may interpret energy as a function on the state space, E : R³ × R³ → R, defined by

    E(x, v) = (1/2) m|v|² + V(x).

The statement that the energy is an integral for the system then means that the composite function t ↦ E(x(t), v(t)) is constant; that is, on a solution curve in the state space, E is constant. We abbreviate R³ × R³ by S. An integral for (2.3) on S is then any function that is constant on every solution curve of (2.3). It was shown in the previous section that, in addition to energy, angular momentum is also an integral for (2.3).


Elliptic Planetary Orbits

We now pass to the consideration of Kepler's first law, that planets have elliptical orbits. For this, a central force is not sufficient; we need the precise form of V as given by the inverse square law. Newton's law of gravitation states that a body of mass m1 exerts a force on a body of mass m2. The magnitude of the force is gm1m2/r², where r is the distance between their centers of mass and g is a constant. The direction of the force on m2 is from m2 toward m1. Thus if m1 lies at the origin of R³ and m2 lies at x ∈ R³, the force on m2 is

    -gm1m2 x/|x|³.

The force on m1 is the negative of this. We place the sun at the origin of R3 and consider the force eld corresponding to a planet of given mass m. It will help to make the simplifying assumption


that the sun is at rest, or that the center of mass of the solar system is located at the origin. The force field is then

    F(x) = -C x/|x|³,    C = constant.

It is clear that this force field is central. We drop the constant C from consideration by changing the units of measurement, and observe that the field is also conservative, with potential

    V(x) = -1/|x|.

Consider a particular solution curve of our differential equation x'' = (1/m)F(x). We may assume that the angular momentum is not zero, since otherwise the motion is along a straight line toward or away from the sun. Introduce polar coordinates; along the solution curve they become functions of time, (r(t), θ(t)). Since r²θ' is constant and not zero, the sign of θ' is constant along the curve. Thus θ is always increasing or always decreasing with time. Therefore r is a function of θ along the curve. We use the following notation: h for the angular momentum, T for the kinetic energy, E for the total energy, and V for the potential energy; for the latter, along the solution curve we consider the function

    u(θ) = 1/r(θ) = -V.

Lemma 2.4. We have the following convenient formula for the kinetic energy along a solution curve:

    T = (h²/2m) ( (du/dθ)² + u² ).

Proof. From the formula (2.1) for x'(t) we have

    T = (1/2) m ( r'² + (rθ')² ).

Also, by the chain rule,

    r' = -(1/u²) (du/dθ) θ' = -(h/m) (du/dθ),

and rθ' = hu/m; the claim follows by substituting these into the expression for T.

We proceed to finding a solution to

    dx/dt = v,
    dv/dt = -(1/m) x/|x|³.    (2.4)



Later we shall find a systematic way of treating systems like (2.4). At the present time we follow the classical method of exploiting the system's conservation laws. Mathematically, we reduce the number of dimensions of the system by using its first integrals. We have demonstrated that the solution to (2.4) can be sought in the form of a planar curve r = r(θ), or equivalently u = u(θ), where u = 1/r. Along a solution curve we have T = E - V = E + u. From Lemma 2.4 we get

    (du/dθ)² + u² = (2m/h²)(E + u).    (2.5)

Differentiating both sides with respect to θ, dividing by 2 du/dθ, and using dE/dθ = 0 (conservation of energy), we obtain another equation:

    d²u/dθ² + u = m/h².    (2.6)


The solution to (2.6), the equation of a harmonic oscillator with a constant forcing term, can be sought in the form

    u = C cos(θ - θ0) + m/h²,    (2.7)

where C and θ0 are arbitrary constants. The right-hand side of (2.6) can be interpreted physically as a constant disturbing force. To obtain a solution to (2.5), use (2.7) to compute du/dθ and d²u/dθ², substitute the resulting expressions into (2.5), and solve for C. The result is

    C = (1/h²) sqrt(2mh²E + m²).

Putting this into (2.7) we get

    u = (m/h²) [ 1 ± (1 + 2Eh²/m)^{1/2} cos(θ - θ0) ].

There is no need to consider both signs in front of the radical, since cos(θ + π) = -cos θ. Moreover, by changing the variable θ to θ + θ0, we can put any particular solution in the form

    u = (m/h²) [ 1 + (1 + 2Eh²/m)^{1/2} cos θ ].    (2.8)

We recall from Analytic Geometry that the equation of a conic in polar coordinates is

    r = l / (1 + ε cos θ).    (2.9)

Here l is the semi-latus rectum and ε ≥ 0 is the eccentricity. The origin is a focus, and the three cases ε > 1, ε = 1, and ε < 1 correspond respectively to a hyperbola, parabola, and ellipse. The case ε = 0 is a circle. Since (2.8) is in the form (2.9), we have shown that the orbit of a particle moving under the influence of a Newtonian force field is a conic of eccentricity

    ε = (1 + 2Eh²/m)^{1/2}.

Clearly, ε ≥ 1 if and only if E ≥ 0. Therefore, the orbit is a hyperbola, parabola, or ellipse according as E > 0, E = 0, or E < 0. The quantity u = 1/r is always positive. Therefore,

    ε cos θ > -1.

Suppose that the orbit passes through θ = π radians, hence cos θ = -1; then ε < 1, which is equivalent to E < 0. For all planets complete revolutions have been observed; for them cos θ = -1 at regular periods. Therefore their orbits are ellipses.
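The eccentricity formula can be sanity-checked on the one orbit whose shape is known in advance: in the text's units (V = -1/|x|), a particle of mass 1 on the circle r = 1 with unit speed has E = -1/2 and h = 1, so ε should come out 0, while speed sqrt(2) at the same point gives E = 0 and a parabola. The helper function below is an illustration built from these formulas, not from the text:

```python
import math

def eccentricity(x, v, m=1.0):
    """epsilon = sqrt(1 + 2 E h^2 / m) for the potential V = -1/|x| (units of the text)."""
    r = math.hypot(x[0], x[1])
    E = 0.5 * m * (v[0]**2 + v[1]**2) - 1.0 / r      # total energy T + V
    h = m * (x[0] * v[1] - x[1] * v[0])              # angular momentum m r^2 theta'
    return math.sqrt(1.0 + 2.0 * E * h * h / m)

# Circular orbit: r = 1, speed 1, velocity perpendicular to the radius
ecc_circle = eccentricity((1.0, 0.0), (0.0, 1.0))
print(ecc_circle)       # 0.0: a circle

# Same point, escape speed sqrt(2): E = 0, hence a parabola (epsilon = 1)
ecc_parabola = eccentricity((1.0, 0.0), (0.0, math.sqrt(2.0)))
print(ecc_parabola)     # ~1
```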



Problem 2.5. A particle of mass m moves in the plane R² under the influence of an elastic band tying it to the origin. Hooke's law states that the force on the particle is always directed toward the origin and is proportional to the distance from the origin. Write down the force field and verify that it is conservative and central. Write the equation F = ma for this case and solve it. Verify that for most initial conditions the particle moves in an ellipse.

Problem 2.6. Which of the following force fields on R² are conservative?
a) F(x, y) = (-x², -2y²)
b) F(x, y) = (x² - y², 2xy)
c) F(x, y) = (x, 0)

Problem 2.7. Consider the case of a particle in a gravitational field moving directly away from the origin at time t = 0. Discuss its motion. Under what initial conditions does it eventually reverse direction?

Problem 2.8. Let F(x) be a force field on R³. Let x0, x1 be points in R³ and let y(s) be a path in R³, s0 ≤ s ≤ s1, parameterized by arc length s, from x0 to x1. The work done in moving a particle along this path is defined to be the integral

    ∫_{s0}^{s1} ⟨F(y(s)), y'(s)⟩ ds,

where y'(s) is the (unit) tangent vector to the path. Prove that the force field is conservative if and only if the work is independent of the path. In fact, if F = -grad V, then the work done is V(x0) - V(x1).

Chapter 3

Linear Systems with Constant Coefficients and Real Eigenvalues

3.1 Differential Equations with Real, Distinct Eigenvalues

Theorem 3.1. Let A be an operator on Rⁿ having n distinct, real eigenvalues. Then for all x0 ∈ Rⁿ the linear differential equation

    x' = Ax,    x(0) = x0,    (3.1)

has a unique solution.

Proof. It follows from Linear Algebra that there exists an invertible matrix Q such that the matrix QAQ⁻¹ is diagonal:

    QAQ⁻¹ = diag{λ1, ..., λn} =: B,

where λ1, ..., λn are the eigenvalues of A. Introducing the new coordinates y = Qx in Rⁿ, with x = Q⁻¹y, we find

    y' = Qx' = QAx = QAQ⁻¹y,

so the new system

    y' = By

is uncoupled. We know that this system has a unique solution for every initial condition yi(0):

    yi(t) = yi(0)e^{λi t}.


Then the solution to (3.1) is

    x(t) = Q⁻¹y(t) = Q⁻¹ diag(e^{λ1 t}, ..., e^{λn t}) Q x(0).

To prove that there are no other solutions to (3.1), we note that x(t) is a solution to (3.1) if and only if Qx(t) is a solution of

    y' = By,    y(0) = Qx(0).

Hence, two different solutions to (3.1) would lead to two different solutions of the system above, which is impossible since B is diagonal.

Observe that the proof is constructive; it actually shows how to find solutions in any specific situation. We review this process next and consider a few concrete examples. First, given A we find its eigenvalues by solving the characteristic equation det(A - λI) = 0. We assume that there are n distinct, real eigenvalues λ1, ..., λn. Then we find the n eigenvectors corresponding to them by solving the systems

    (A - λi I)pi = 0,    pi = (pi1, ..., pin).

Write out the coordinates of each eigenvector as a column of the matrix

    P^t = [ p1 | p2 | ... | pn ].

Clearly, P^t is the transpose of the matrix P = (pij), and we remember from Linear Algebra that Q = (P^t)⁻¹. We then write the general solution to the diagonalized system,

    yi(t) = ai e^{λi t},    i = 1, ..., n,

where ai = yi(0) are arbitrary constants, so the general solution to the original system is

    xj(t) = Σ_{i=1}^{n} pij ai e^{λi t},    j = 1, ..., n.

To find the solution of the initial value problem for (3.1) we need to solve

    P^t a = x(0),    that is,    a = (P^t)⁻¹ x(0).

However, often in practice the inversion of P^t is not necessary, and it may be easier to work directly with the first equation above. We see an illustration of this in the next example. Let us first find the general solution to the system

    x1' = x1,            (3.2)
    x2' = x1 + 2x2,      (3.3)
    x3' = x1 - x3.       (3.4)

The corresponding matrix is

    A = [ 1  0   0 ]
        [ 1  2   0 ]
        [ 1  0  -1 ].

Since A is triangular, det(A - λI) = (1 - λ)(2 - λ)(-1 - λ). Hence the eigenvalues are 1, 2, -1. They are real and distinct, so the theorem applies. The matrix B is

    B = diag(1, 2, -1) = [ 1  0   0 ]
                         [ 0  2   0 ]
                         [ 0  0  -1 ].

In the new coordinates the solution is

    y1(t) = a1 e^t,    y2(t) = a2 e^{2t},    y3(t) = a3 e^{-t},

where a1, a2, and a3 are arbitrary constants. To relate old and new coordinates we compute the matrix P^t next. For example, to find the vector p1 = (x, y, z), we must solve the vector equation (A - I)p1 = 0, or

    [ 0  0   0 ] [ x ]   [ 0 ]
    [ 1  1   0 ] [ y ] = [ 0 ].
    [ 1  0  -2 ] [ z ]   [ 0 ]

This leads to the system

    x + y = 0,
    x - 2z = 0.

Clearly, the solution set is a one-dimensional vector space; it is sufficient to take any nonzero element of it, e.g. p1 = (2, -2, 1). We find p2 and p3 in the same way; our computations give

    P^t = [  2  0  0 ]
          [ -2  1  0 ]
          [  1  0  1 ].


From x = P^t y we have

    [ x1(t) ]   [  2  0  0 ] [ a1 e^t    ]   [  2a1 e^t               ]
    [ x2(t) ] = [ -2  1  0 ] [ a2 e^{2t} ] = [ -2a1 e^t + a2 e^{2t}   ].
    [ x3(t) ]   [  1  0  1 ] [ a3 e^{-t} ]   [  a1 e^t + a3 e^{-t}    ]

The reader should verify that this is indeed a solution to (3.2)-(3.4). To solve the initial value problem for (3.2)-(3.4) we must solve the linear system

    2a1 = u1,
    -2a1 + a2 = u2,
    a1 + a3 = u3,

for the unknowns a1, a2, a3. This amounts to inverting P^t, but for particular values of u1, u2, u3 it might be easier to solve the system directly. The following observation is an immediate consequence of the proof of Theorem 3.1.

Theorem 3.2. Let the n × n matrix A have n distinct real eigenvalues λ1, ..., λn. Then every solution to the differential equation

    x' = Ax,    x(0) = x0,

is of the form

    xi(t) = ci1 e^{λ1 t} + ... + cin e^{λn t},    i = 1, ..., n,

for unique constants ci1, ..., cin, depending on x0.

Theorem 3.2 leads to another method of solution of (3.1). Regard the coefficients cij as unknowns and set

    xi(t) = ci1 e^{λ1 t} + ... + cin e^{λn t},    i = 1, ..., n.

Substituting this into x' = Ax leads to the matrix equation

    [ λ1 c11  ...  λn c1n ]   [ a11  ...  a1n ] [ c11  ...  c1n ]
    [   ...           ... ] = [ ...        ... ] [ ...        ... ].
    [ λ1 cn1  ...  λn cnn ]   [ an1  ...  ann ] [ cn1  ...  cnn ]

Solving this system of linear algebraic equations for the cij gives us the general solution to x' = Ax. This is the method of undetermined coefficients. As an example we consider the same system as before. This leads to the matrix equation

    [ c11  2c12  -c13 ]   [ c11          c12          c13        ]
    [ c21  2c22  -c23 ] = [ c11 + 2c21   c12 + 2c22   c13 + 2c23 ].
    [ c31  2c32  -c33 ]   [ c11 - c31    c12 - c32    c13 - c33  ]

Equating corresponding terms in the two matrices, from the first row we get c12 = c13 = 0. From the second row we get c21 = -c11 and c23 = 0. Finally, from the third row we get c31 = c11/2 and c32 = 0. We have thus found

    x1(t) = c11 e^t,
    x2(t) = -c11 e^t + c22 e^{2t},
    x3(t) = (c11/2) e^t + c33 e^{-t},

which, with c11 = 2a1, c22 = a2, c33 = a3, is equivalent to the previous solution. The conclusion of Theorem 3.2 is definitely false for some operators with real, repeated eigenvalues. This will be illustrated in the exercises.
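Both computations above can be cross-checked with numpy: the eigenvalues of the triangular coefficient matrix are 1, 2, -1, and the resulting general solution satisfies x' = Ax. The signs below follow the eigenvector computation (A - I)p1 = 0, which forces p1 = (2, -2, 1):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [1.0, 0.0, -1.0]])   # x1' = x1, x2' = x1 + 2x2, x3' = x1 - x3

print(sorted(np.linalg.eigvals(A).real))   # [-1, 1, 2]

def x(t, a1, a2, a3):
    """General solution built from the eigenvectors p1=(2,-2,1), p2=(0,1,0), p3=(0,0,1)."""
    return np.array([2 * a1 * np.exp(t),
                     -2 * a1 * np.exp(t) + a2 * np.exp(2 * t),
                     a1 * np.exp(t) + a3 * np.exp(-t)])

a = (0.5, -1.0, 2.0)               # arbitrary sample constants a1, a2, a3
t, h = 0.3, 1e-6
deriv = (x(t + h, *a) - x(t - h, *a)) / (2 * h)   # central-difference x'(t)
print(np.max(np.abs(deriv - A @ x(t, *a))))       # ~0: the formula solves x' = Ax
```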


Complex Eigenvalues

A class of operators that have no real eigenvalues are the planar operators Ta,b : R² → R² represented by matrices of the form

    Aa,b = [ a  -b ]
           [ b   a ],    b ≠ 0.

The characteristic polynomial is λ² - 2aλ + (a² + b²), whose roots are a + ib and a - ib, where i = sqrt(-1). We interpret Ta,b geometrically as follows. Let b > 0 for concreteness, and let

    r = sqrt(a² + b²),    θ = arccos(a/r).

Then Ta,b is a counterclockwise rotation through θ radians followed by a stretching (or shrinking) of the length of each vector by a factor of r. That is, if Rθ denotes rotation through θ radians, then Ta,b(x) = rRθ(x) = Rθ(rx). To see this, observe that a = r cos θ, b = r sin θ.

In the standard basis, the matrix of Rθ is

    [ cos θ  -sin θ ]
    [ sin θ   cos θ ].

The equality

    [ a  -b ]   [ r  0 ] [ cos θ  -sin θ ]
    [ b   a ] = [ 0  r ] [ sin θ   cos θ ]

yields our assertion. Naturally, Ta,b can also be associated with multiplication by the complex number a + ib.

Example 3.3. Consider the operator T : R² → R² with matrix

    [ 0  -2 ]
    [ 1   2 ].

The characteristic polynomial λ² - 2λ + 2 has roots 1 + i and 1 - i. T does not correspond to multiplication by a complex number, since its matrix is not of the form Aa,b. But it is possible to introduce new coordinates in R², that is, to find a new basis, giving T a matrix of the form Aa,b. Let x1, x2 be the standard coordinates in R². Check that the substitution

    x1 = y1 + y2,    x2 = -y1

gives T the matrix A1,1 in the y-coordinates. This shows that although T is not diagonalizable, coordinates can be introduced in which T has a simple geometric interpretation: a rotation through θ = π/4 followed by a stretching by sqrt(2). If vectors are identified with complex numbers via y1 + iy2, then T corresponds to multiplication by 1 + i. In the next chapter we show how the new coordinates were found.

Example 3.4. We show now how the complex structure on R², that is, the identification of R² with C, may be used to solve a corresponding class of differential equations. Consider the system

    dx/dt = ax - by,
    dy/dt = bx + ay.    (3.5)


We use complex numbers to formally find a solution, and then check that what we have found indeed solves (3.5). Thus we replace (x, y) by z = x + iy, and the matrix

    [ a  -b ]
    [ b   a ]

by the complex number a + ib. Then (3.5) becomes

    z' = (a + ib)z.

The solution, of course, is the complex exponential z = Ke^{(a+ib)t}, for some arbitrary K ∈ C. To come back to real terms we make use of Euler's formula:

    e^{(a+ib)t} = e^{at}e^{ibt} = e^{at}(cos bt + i sin bt).

We also write K = u + iv, with u, v ∈ R, and set z(t) = x(t) + iy(t), where x : R → R and y : R → R. Taking real and imaginary parts, we obtain

    x(t) = ue^{at} cos bt - ve^{at} sin bt,
    y(t) = ue^{at} sin bt + ve^{at} cos bt.
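The passage between the complex exponential and the real formulas can be verified directly: z(t) = (u + iv)e^{(a+ib)t} should have real part x(t) and imaginary part y(t), and the pair should satisfy (3.5). The numerical values are arbitrary samples:

```python
import cmath
import math

a, b = 0.5, 2.0        # sample coefficients for the system (3.5)
u, v = 1.0, -0.5       # K = u + iv

def z(t):
    """Complex solution z(t) = K e^{(a+ib)t} of z' = (a+ib)z."""
    return (u + 1j * v) * cmath.exp((a + 1j * b) * t)

def x(t):
    return u * math.exp(a * t) * math.cos(b * t) - v * math.exp(a * t) * math.sin(b * t)

def y(t):
    return u * math.exp(a * t) * math.sin(b * t) + v * math.exp(a * t) * math.cos(b * t)

t = 0.7
print(abs(z(t).real - x(t)), abs(z(t).imag - y(t)))   # both ~0

# Check that (x, y) satisfies x' = ax - by, y' = bx + ay (central differences)
h = 1e-6
xp = (x(t + h) - x(t - h)) / (2 * h)
yp = (y(t + h) - y(t - h)) / (2 * h)
print(abs(xp - (a * x(t) - b * y(t))), abs(yp - (b * x(t) + a * y(t))))
```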


Chapter 4

Linear Systems with Constant Coefficients and Complex Eigenvalues

As we saw in the last section of the preceding chapter, complex numbers enter naturally in the study and solution of real ordinary differential equations. In general, the study of operators on complex vector spaces facilitates the study of linear differential equations. In this chapter we review the linear algebra of complex vector spaces and develop methods to study the solutions of first order linear ordinary differential equations with constant coefficients whose associated operator has distinct, though perhaps non-real, eigenvalues.


Complex Vector Spaces

In order to gain a deeper understanding of linear operators (and hence of linear differential equations) we have to find the geometrical significance of complex eigenvalues. This is done by extending an operator T on a real vector space E to an operator T_C on a complex vector space E_C. Complex eigenvalues of T are associated with complex eigenvectors of T_C.

We first develop complex vector spaces. The complex Cartesian space Cⁿ is the set of all n-tuples z = (z₁, . . . , zₙ) of complex numbers. A nonempty subset F of Cⁿ is called a subspace or a (complex) linear subspace if it is closed under the operations of addition and scalar multiplication in Cⁿ. A complex vector space will mean a subspace of Cⁿ. In fact all the algebraic properties of real vector spaces and their linear maps carry over to complex vector spaces and their linear maps. Consider now an operator on a complex vector space F ⊂ Cⁿ. Thus T : F → F is a linear map and we may proceed to study its eigenvalues and eigenvectors. An eigenvalue


of T is a complex number λ such that Tv = λv has a nonzero solution v ∈ F. The vector v ∈ F is called an eigenvector belonging to λ. This is exactly analogous to the real case, and the methods for finding real eigenvalues and eigenvectors apply here as well. Given a complex operator T as above, one associates to it a polynomial p(λ) = det(T − λI) whose roots are exactly the eigenvalues of T. As in the real case we have the following analogous result.

Theorem 4.1. Let T : F → F be a linear operator on an n-dimensional complex vector space F. If the characteristic polynomial has distinct roots, then T can be diagonalized.

Observe that the above theorem is stronger than the corresponding theorem in the real case, since the latter demanded the quite restrictive condition that the roots of the characteristic polynomial be real. We say that an operator T on a complex vector space is semi-simple if it is diagonalizable.

Let F be a complex subspace of Cⁿ. The set F_R = F ∩ Rⁿ is the set of all n-tuples (z₁, . . . , zₙ) that are in F and real. Clearly, F_R is closed under addition and under scalar multiplication by real numbers. Thus F_R is a real vector space (a subspace of Rⁿ). Consider now the converse process. Let E ⊂ Rⁿ be a subspace and let E_C be the subset of Cⁿ obtained by taking all linear combinations of vectors in E with complex coefficients. Thus
E_C = {z ∈ Cⁿ | z = Σ λᵢxᵢ, xᵢ ∈ E, λᵢ ∈ C},
and E_C is a complex subspace of Cⁿ. Note that (E_C)_R = E. We call E_C the complexification of E and F_R the space of real vectors in F.


Problem 4.2. By σ : Cⁿ → Cⁿ we denote the operation of complex conjugation,
σ(z₁, . . . , zₙ) = (z̄₁, . . . , z̄ₙ).
Clearly the set of fixed points of σ is Rⁿ. Prove that a complex subspace F ⊂ Cⁿ can be decomplexified, i.e. expressed in the form F = E_C for some real subspace E ⊂ Rⁿ, if and only if σ(F) ⊂ F.
Hint. Use the identity, valid for x + iy ∈ Cⁿ,
x = ((x + iy) + (x − iy))/2.


Problem 4.3. Let F ⊂ C² be the space spanned by the vector (1, i).
a) Prove that F is not invariant under conjugation and hence is not the complexification of any subspace of R².
b) Find F_R and (F_R)_C.

Problem 4.4. We look at the question as to when an operator Q : E_C → E_C is the complexification of an operator T : E → E. Prove the following.

Lemma 4.5. Let E ⊂ Rⁿ be a real vector space and E_C ⊂ Cⁿ be its complexification. If Q ∈ L(E_C), then Q = T_C for some T ∈ L(E) if and only if σQσ = Q, where σ : E_C → E_C is conjugation.

The next problem illustrates the significance of Lemma 4.5: there exists an operator Q : E_C → E_C that is the complexification of an operator T : E → E even though its matrix in some basis of E_C has non-real entries. The lemma gives us a criterion to recognize such operators, which otherwise would not be obvious.

Problem 4.6. Give an example of an operator A ∈ L(R²) whose complexification A_C : C² → C² has at least one non-real entry in some basis of C². Hint: consider the identity I : R² → R² and the basis (i, 0), (0, i).

Problem 4.7. a) Let E ⊂ Rⁿ and F ⊂ Cⁿ be subspaces. What relations, if any, exist between dim E and dim E_C? Between dim F and dim F_R?
b) If F ⊂ Cⁿ is any subspace, what relation is there between F and (F_R)_C?
c) Let E be a real vector space and T ∈ L(E). Show that (Ker T)_C = Ker(T_C), (Im T)_C = Im(T_C), and (T⁻¹)_C = (T_C)⁻¹ if T is invertible.


Real Operators with Complex Eigenvalues

We move towards understanding the linear differential equation with constant coefficients
dx/dt = Tx,
where T is an operator on Rⁿ.

Problem 4.8. Show that if T is an operator on a real vector space E, then the set of its eigenvalues is preserved under complex conjugation, i.e. it has the form
λ₁, . . . , λᵣ, μ₁, μ̄₁, . . . , μ_s, μ̄_s,
with λ₁, . . . , λᵣ all real and μ₁, . . . , μ_s all non-real.


Solution 1. The characteristic polynomial of T has real coefficients; hence its roots are either real or occur in conjugate pairs.

Solution 2. Observe that T and T_C have the same eigenvalues because they have the same characteristic polynomial. We show that if μ is any eigenvalue of T_C and φ is an eigenvector belonging to it, then μ̄ is also an eigenvalue, with eigenvector φ̄. Indeed, the claim follows directly from
T_C φ̄ = T_C(σφ) = σ(T_C φ) = σ(μφ) = μ̄φ̄.
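For a sample 2×2 matrix (an arbitrary choice with non-real spectrum, not from the text), Solution 1 can be illustrated directly from the characteristic polynomial:

```python
import cmath

# eigenvalues of a real 2x2 matrix from its characteristic polynomial
# lambda^2 - p*lambda + q, with p = trace and q = determinant
a11, a12, a21, a22 = 1.0, -2.0, 1.0, -1.0   # sample real matrix
p = a11 + a22
q = a11 * a22 - a12 * a21
disc = cmath.sqrt(p * p - 4 * q)
lam1 = (p + disc) / 2
lam2 = (p - disc) / 2

# since the coefficients are real, the two roots are complex conjugates
ok = abs(lam1 - lam2.conjugate()) < 1e-12
print(ok)
```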

We next recall from Linear Algebra the following definitions. Let E₁, . . . , Eᵣ be subspaces of a given vector space E. We say that E is their direct sum if every vector x in E can be expressed uniquely as
x = x₁ + · · · + xᵣ, xᵢ ∈ Eᵢ, i = 1, . . . , r.
This is denoted E = E₁ ⊕ · · · ⊕ Eᵣ. Let T : E → E and Tᵢ : Eᵢ → Eᵢ, i = 1, . . . , r, be operators. We say that T is a direct sum of the Tᵢ if E = E₁ ⊕ · · · ⊕ Eᵣ, each Eᵢ is invariant under T, that is T(Eᵢ) ⊂ Eᵢ, and Tx = Tᵢx if x ∈ Eᵢ. We use the notation T = T₁ ⊕ · · · ⊕ Tᵣ. If, for example, Tᵢ has the matrix Aᵢ in some basis for each Eᵢ, then by taking the union of all the basis elements we obtain a basis for the whole space E, and T has the block-diagonal matrix
A = diag{A₁, . . . , Aᵣ}.

Problem 4.9. Let T : E → E be a real operator with distinct eigenvalues. Then E and T have a direct sum decomposition
E = E_a ⊕ E_b, T = T_a ⊕ T_b, T_a : E_a → E_a, T_b : E_b → E_b,

where T_a has real eigenvalues and T_b has non-real eigenvalues.

We remark that Problem 4.9 provides an uncoupling of the differential equation
dx/dt = Tx,
where T is an operator on Rⁿ. We may rewrite this equation as a pair of equations
dx_a/dt = T_a x_a,
dx_b/dt = T_b x_b,

where T_a, T_b are as above and x_a ∈ E_a, x_b ∈ E_b. We proceed to the study of the operator T_b.

Problem 4.10. Let T : E → E be an operator on a real vector space with distinct non-real eigenvalues μ₁, μ̄₁, . . . , μ_s, μ̄_s. Then there is an invariant direct sum decomposition for E and a corresponding direct sum decomposition for T,
E = E₁ ⊕ · · · ⊕ E_s, T = T₁ ⊕ · · · ⊕ T_s,
such that each Eᵢ is two dimensional and Tᵢ ∈ L(Eᵢ) has eigenvalues μᵢ, μ̄ᵢ.

Theorem 4.11. Let T be an operator on a two-dimensional vector space E ⊂ Rⁿ with non-real eigenvalues λ, λ̄, λ = a + ib. Then there is a matrix representation A for T,
A = ( a  −b )
    ( b   a ).
Let φ be an eigenvector of T_C belonging to a + ib, b ≠ 0. If φ = u + iv with u, v ∈ Rⁿ, then {v, u} is a basis for E giving T the matrix A.

Proof. The study of such a matrix A and the corresponding differential equation on R², dx/dt = Ax, was the content of Section 3.2. Let T_C : E_C → E_C be the complexification of T. Since T_C has the same eigenvalues as T, there exist eigenvectors φ and φ̄ in E_C belonging to λ and λ̄, respectively. Let φ = u + iv with u, v ∈ Rⁿ. Notice that u, v ∈ E_C, for
u = (φ + φ̄)/2, v = (φ − φ̄)/(2i).

Hence u, v are in E_C ∩ Rⁿ = E. Moreover, it is easy to see that u and v are independent (use the independence of φ, φ̄). Therefore {v, u} is a basis for E. To compute the matrix of T in this basis we start from
T_C(u + iv) = (a + ib)(u + iv) = (au − bv) + i(bu + av).
Also,
T_C(u + iv) = Tu + iTv.
Therefore
Tv = av + bu, Tu = −bv + au.
This means that the matrix of T in the basis {v, u} is A, completing the proof.
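Using the operator of Example 3.3 (where a = b = 1 and, as computed there, u = (1, 0), v = (1, −1)), the relations in the proof can be verified directly:

```python
# T from Example 3.3; phi = u + iv is an eigenvector of T_C for 1 + i
T = [[0.0, -2.0], [1.0, 2.0]]
u = [1.0, 0.0]
v = [1.0, -1.0]
a, b = 1.0, 1.0

def apply(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

# Theorem 4.11: T v = a v + b u and T u = -b v + a u
Tv, Tu = apply(T, v), apply(T, u)
ok1 = all(abs(Tv[i] - (a * v[i] + b * u[i])) < 1e-12 for i in range(2))
ok2 = all(abs(Tu[i] - (-b * v[i] + a * u[i])) < 1e-12 for i in range(2))
print(ok1, ok2)
```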



Application of Complex Linear Algebra to Differential Equations

Consider the equation
x₁′ = −2x₂,
x₂′ = x₁ + 2x₂.
The eigenvalues of the corresponding matrix
A = ( 0  −2 )
    ( 1   2 )
are λ = 1 + i, λ̄ = 1 − i. A complex eigenvector belonging to 1 + i is found by solving the equation (A − λI)w = 0 for w ∈ C²,
( −1−i  −2  ) ( w₁ )
(  1    1−i ) ( w₂ ) = 0.
One possible solution is the vector
w = (1 + i, −i) = (1, 0) + i(1, −1) = u + iv.
We choose a new basis {v, u} for R² ⊂ C², with v = (1, −1) and u = (1, 0). To find the new coordinates y₁, y₂ corresponding to that basis, note that any vector x can be written
x = x₁(1, 0) + x₂(0, 1) = y₁(1, −1) + y₂(1, 0).
Thus x₁ = y₁ + y₂, x₂ = −y₁, or x = Py,
P = ( 1  1 )
    ( −1 0 ).
The new coordinates are given by y₁ = −x₂, y₂ = x₁ + x₂, or y = P⁻¹x,
P⁻¹ = ( 0  −1 )
      ( 1   1 ).
The matrix A in the y-coordinates is
P⁻¹AP = ( 1  −1 )
        ( 1   1 ) =: B.
Thus, our equation dx/dt = Ax on R² has the form dy/dt = By in the y-coordinates, and can be solved as we showed for the system (3.5). The solution then is
y₁ = ae^t cos t − be^t sin t,
y₂ = ae^t sin t + be^t cos t.
The original equation has its general solution
x₁ = (a + b)e^t cos t + (a − b)e^t sin t,
x₂ = −ae^t cos t + be^t sin t,
where a, b ∈ R are arbitrary constants.
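A numerical spot-check of this general solution against the original system (the sample constants a, b below are arbitrary choices):

```python
import math

# general solution of the worked example, for sample constants a, b
a, b = 0.8, -1.3

def x1(t):
    return (a + b) * math.exp(t) * math.cos(t) + (a - b) * math.exp(t) * math.sin(t)

def x2(t):
    return -a * math.exp(t) * math.cos(t) + b * math.exp(t) * math.sin(t)

# centered differences for x1' and x2' at a sample time
h, t = 1e-6, 0.4
d1 = (x1(t + h) - x1(t - h)) / (2 * h)
d2 = (x2(t + h) - x2(t - h)) / (2 * h)

# the system: x1' = -2 x2, x2' = x1 + 2 x2
ok1 = abs(d1 - (-2 * x2(t))) < 1e-5
ok2 = abs(d2 - (x1(t) + 2 * x2(t))) < 1e-5
print(ok1, ok2)
```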


Chapter 5

Linear Systems and Exponentials of Operators

The object of this chapter is to solve the linear homogeneous system with constant coefficients
x′ = Ax,        (5.1)

where A ∈ Mₙ(R). This is accomplished with exponentials of operators. Our approach to solving equation (5.1) in the previous chapters was to diagonalize the system to
y′ = By,        (5.2)

where B = diag(λ₁, . . . , λₙ) and the λᵢ are the (assumed different) eigenvalues of A. The new system was obtained from the old by a suitable change of coordinates y = Qx, for some Q ∈ Mₙ(R). The solution to (5.2) can be symbolically written as y(t) = e^{tB}K, K ∈ Rⁿ, where e^{tB} ∈ Mₙ(R),
e^{tB} = diag(e^{λ₁t}, . . . , e^{λₙt}).
Our goal shall be to define the operator e^A, called the exponential of A, for every operator A ∈ L(Rⁿ). This is done by means of infinite series on the operator space L(Rⁿ). The series for e^A is formally the same as the usual series for e^a, a ∈ R, and is given by
e^A = I + A + A²/2! + · · · + Aᵏ/k! + · · ·
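As a sketch of how this series can be summed numerically (a hand-rolled truncation, not part of the text; the diagonal test matrix, where the answer is known, is an arbitrary sample):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=25):
    # truncated series I + A + A^2/2! + ... ; adequate for small matrices
    n = len(A)
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # partial sum
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # A^k / k!
    for k in range(1, terms):
        P = [[x / k for x in row] for row in mat_mul(P, A)]
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return S

# for a diagonal matrix, exp(diag(l1, l2)) = diag(e^l1, e^l2)
E = mat_exp([[1.0, 0.0], [0.0, 2.0]])
ok1 = abs(E[0][0] - math.e) < 1e-12
ok2 = abs(E[1][1] - math.e ** 2) < 1e-12
print(ok1, ok2)
```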

We shall show that the solutions to (5.1) are precisely the maps x : R → Rⁿ, x(t) = e^{tA}K, for some K ∈ Rⁿ.



Exponentials of Operators

A frequently used norm on L(Rⁿ) is the uniform norm. This norm is defined in terms of a given norm on Rⁿ, which we shall write as |x|. If T : Rⁿ → Rⁿ is an operator, the uniform norm of T is defined to be
‖T‖ = max{|Tx| : |x| ≤ 1}.
In other words, ‖T‖ is the maximum value of |Tx| on the unit ball B₁(0); the more general notation B_R(a) = {x ∈ Rⁿ : |x − a| ≤ R} is also frequently used. The uniform norm on L(Rⁿ) depends on the norm chosen for Rⁿ. If no norm on Rⁿ is specified, the standard Euclidean norm
|x| = (x₁² + · · · + xₙ²)^{1/2}
is intended.

Lemma 5.1. Let Rⁿ be given a norm |x|. The corresponding uniform norm on L(Rⁿ) has the following properties:
a) if ‖T‖ = k, then |Tx| ≤ k|x| for all x ∈ Rⁿ;
b) ‖ST‖ ≤ ‖S‖ ‖T‖;
c) ‖Tᵐ‖ ≤ ‖T‖ᵐ for all m = 0, 1, 2, . . .

Proof. The proof is elementary and follows by direct application of the definitions involved. Let us, for example, verify b). For |x| ≤ 1 we have
|S(Tx)| ≤ ‖S‖ |Tx| ≤ ‖S‖ ‖T‖ |x| ≤ ‖S‖ ‖T‖.
Since ‖ST‖ is the maximum value of |S(Tx)| over |x| ≤ 1, the claim follows.

We now define an important series generalizing the usual exponential series. For any operator T : Rⁿ → Rⁿ define
exp(T) = e^T = Σ_{k=0}^∞ Tᵏ/k!.        (5.3)
This is a series in the vector space L(Rⁿ).

Theorem 5.2. The exponential series (5.3) is absolutely convergent for every operator T ∈ L(Rⁿ).




Proof. Let a = ‖T‖ ≥ 0 be the uniform norm of T (for some norm on Rⁿ). Then
‖Tᵏ/k!‖ ≤ ‖T‖ᵏ/k! = aᵏ/k!
by the lemma proved earlier. Now the real series Σ_{k=0}^∞ aᵏ/k! converges to e^a. Therefore, the exponential series for T converges absolutely by the comparison test, and ‖e^T‖ ≤ e^{‖T‖}.

Lemma 5.3. Let Σ_{j=0}^∞ Aⱼ = A and Σ_{k=0}^∞ B_k = B be two absolutely convergent series of operators in L(Rⁿ). Then
AB = Σ_{l=0}^∞ C_l, where C_l = Σ_{j+k=l} Aⱼ B_k,
and this series is also absolutely convergent.

Lemma 5.4. Let P, S, and T denote operators in L(Rⁿ). Then
a) if Q = PTP⁻¹, then e^Q = P e^T P⁻¹;
b) if ST = TS, then e^{S+T} = e^S e^T;
c) e^{−S} = (e^S)⁻¹;
d) if n = 2 and
T = ( a  −b )
    ( b   a ),
then
e^T = e^a ( cos b  −sin b )
          ( sin b   cos b ).

Proof. a) It follows from taking the limit n → ∞ in the identity
P ( Σ_{k=0}^n Tᵏ/k! ) P⁻¹ = Σ_{k=0}^n (PTP⁻¹)ᵏ/k!.
b) Note that because ST = TS we have the binomial theorem
(S + T)ⁿ = n! Σ_{j+k=n} SʲTᵏ/(j! k!).
Therefore, using Lemma 5.3,
e^{S+T} = Σ_{n=0}^∞ (S + T)ⁿ/n! = Σ_{n=0}^∞ Σ_{j+k=n} (Sʲ/j!)(Tᵏ/k!) = ( Σ_{j=0}^∞ Sʲ/j! )( Σ_{k=0}^∞ Tᵏ/k! ) = e^S e^T.
c) Put T = −S in b).
d) We shall consider two different ways of proving this. In both cases it is useful to keep in mind the correspondence between the matrices of the given type and the complex numbers,
( a  −b )
( b   a ) ↔ a + ib.
Method 1. Directly compute Tᵏ and find the sum of the corresponding exponential series. We write
exp ( a  −b ; b  a ) = exp(aI) exp ( 0  −b ; b  0 ).
Set
A = ( 0  −1 )
    ( 1   0 )
and, either by direct computation or by using the correspondence A ↔ i, obtain that the sequence of powers of A is
I, A, A² = −I, A³ = −A, A⁴ = I, A⁵ = A, . . .
Therefore (bA)ᵏ = bᵏAᵏ, and after substituting the actual values in the corresponding power series we find that
e^{bA} = ( cos b  −sin b )
         ( sin b   cos b ).
Seeing that exp(aI) is equal to diag(e^a, e^a), we conclude the proof by the first method.
Method 2. The correspondence mentioned earlier can be shown to be continuous. Therefore
exp ( a  −b ; b  a ) ↔ e^{a+ib} = e^a(cos b + i sin b) ↔ e^a ( cos b  −sin b ; sin b  cos b ).

Problem 5.5. Compute the exponential of the matrix
A = ( a  0 )
    ( b  a ).
Solution. We write
A = a ( 1  0 ; 0  1 ) + b ( 0  0 ; 1  0 ) = aI + bN.
Note that I and N commute and that N is nilpotent with N² = 0. Hence
e^{aI} = ( e^a  0 ; 0  e^a ),    e^{bN} = I + bN = ( 1  0 ; b  1 ),
and
e^A = e^{aI+bN} = e^a ( 1  0 ; b  1 ) = ( e^a  0 ; be^a  e^a ).
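The closed forms in Lemma 5.4 d) and Problem 5.5 can be checked against a truncated exponential series (the sample values a = 0.5, b = 2 are arbitrary choices):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    # truncated series I + A + A^2/2! + ... ; adequate for small matrices
    n = len(A)
    S = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        P = [[x / k for x in row] for row in mat_mul(P, A)]
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return S

def close(A, B, tol=1e-10):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A)))

a, b = 0.5, 2.0
# Problem 5.5: exp([[a, 0], [b, a]]) = e^a [[1, 0], [b, 1]]
ok1 = close(mat_exp([[a, 0.0], [b, a]]),
            [[math.exp(a), 0.0], [b * math.exp(a), math.exp(a)]])
# Lemma 5.4 d): exp([[0, -b], [b, 0]]) is rotation through b
ok2 = close(mat_exp([[0.0, -b], [b, 0.0]]),
            [[math.cos(b), -math.sin(b)], [math.sin(b), math.cos(b)]])
print(ok1, ok2)
```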


Lemma 5.6. If x ∈ Rⁿ is an eigenvector of T belonging to a real eigenvalue λ of T, then x is also an eigenvector of e^T, belonging to the eigenvalue e^λ of e^T. Similarly, if μ ∈ C is an eigenvalue of T and z ∈ Cⁿ is an eigenvector of the complexification T_C : Cⁿ → Cⁿ of T, then e^μ is an eigenvalue of e^{T_C} and z is an eigenvector belonging to it.
Proof. From Tx = λx, we obtain
e^T x = lim_{n→∞} ( Σ_{k=0}^n Tᵏ/k! ) x = lim_{n→∞} Σ_{k=0}^n (λᵏ/k!) x = e^λ x.

Consider now the equation
x′ = Ax, x(0) = K,        (5.4)

where A ∈ L(Rⁿ). To show that the map R → Rⁿ, t ↦ e^{tA}K, is a solution, we need first to define the derivative of this map.
Proposition 5.7.
(d/dt) e^{tA} = e^{tA}A.
Proof.
(d/dt) e^{tA} = lim_{h→0} (e^{(t+h)A} − e^{tA})/h = e^{tA} lim_{h→0} (e^{hA} − I)/h = e^{tA} lim_{h→0} (A + O(h)) = e^{tA}A.

Theorem 5.8. Let A ∈ L(Rⁿ). Then the solution to the initial value problem (5.4) is e^{tA}K, and there are no other solutions.
Proof. The computation in the preceding proposition shows that e^{tA}K indeed is a solution to (5.4). To see that there are no others, suppose that x(t) is another solution to (5.4), and compute the derivative of y(t) = e^{−tA}x(t). We have
y′(t) = −Ae^{−tA}x(t) + e^{−tA}x′(t) = −Ae^{−tA}x(t) + e^{−tA}Ax(t) = 0.
Thus y(t) is a constant. Setting y(0) = x(0) = K shows that e^{−tA}x(t) = K, or x(t) = e^{tA}K.

Problem 5.9. Compute the general solution of the two-dimensional system
x₁′ = ax₁,
x₂′ = bx₁ + ax₂,
where a, b ∈ R.

Solution. In Problem 5.5 we computed the exponential of the matrix
A = ( a  0 )
    ( b  a ),
in view of which we have
e^{tA} = e^{at} ( 1   0 )
                ( bt  1 ),
and the general solution is e^{tA}K, for any K ∈ R².
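As a check of Problem 5.9's solution (sample values for a, b, and K are arbitrary choices):

```python
import math

# Problem 5.9 with sample values a = -0.5, b = 2 and K = (1, -1)
a, b = -0.5, 2.0
K1, K2 = 1.0, -1.0

def x(t):
    # x(t) = e^{tA} K with e^{tA} = e^{at} [[1, 0], [bt, 1]]
    f = math.exp(a * t)
    return [f * K1, f * (b * t * K1 + K2)]

# centered differences at a sample time
h, t = 1e-6, 0.6
d1 = (x(t + h)[0] - x(t - h)[0]) / (2 * h)
d2 = (x(t + h)[1] - x(t - h)[1]) / (2 * h)
x1, x2 = x(t)

# the system: x1' = a x1, x2' = b x1 + a x2
ok1 = abs(d1 - a * x1) < 1e-5
ok2 = abs(d2 - (b * x1 + a * x2)) < 1e-5
print(ok1, ok2)
```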



It is often useful to use functions on Rⁿ that are similar to the Euclidean norm, but not identical to it. We define a norm on Rⁿ to be any function N : Rⁿ → R that satisfies
1) N(x) ≥ 0, and N(x) = 0 if and only if x = 0;
2) N(x + y) ≤ N(x) + N(y);
3) N(λx) = |λ|N(x).

Problem 5.10. Prove that the following functions are norms on Rⁿ:
|x|_max = max{|x₁|, . . . , |xₙ|}, |x|_sum = |x₁| + · · · + |xₙ|.

Problem 5.11. Let A ∈ Mₙ(R) and let
m(A) = max_{i,j}{|aᵢⱼ| : A = (aᵢⱼ)}, s(A) = Σ_{i,j} |aᵢⱼ|, A = (aᵢⱼ).

Show that m : Mₙ(R) → R and s : Mₙ(R) → R are norms on Mₙ(R). Moreover, if ‖A‖ is the uniform norm on Mₙ(R), show that there exist constants c₁, c₂ > 0 such that
c₁m(A) ≤ ‖A‖ ≤ c₂m(A), c₁s(A) ≤ ‖A‖ ≤ c₂s(A),
independent of A ∈ Mₙ(R).

Problem 5.12. Show that there is no matrix A ∈ M₂(R) such that
e^A = ( −1   0 )
      (  0  −4 ).

Problem 5.13. If AB = BA, then e^A e^B = e^B e^A and e^A B = B e^A.

Problem 5.14. Compute the exponentials of the following matrices:
a) ( 5  −6 )    b) ( 0  1 )    c) ( 0  0  0 )
   ( 3  −4 )       ( 1  0 )       ( 1  0  0 )
                                  ( 0  1  0 ).


Phase Diagram of a General 2 × 2 Matrix

Since we know how to compute the exponential of any 2 × 2 matrix, we can explicitly solve any equation of the form x′ = Ax, where A ∈ M₂(R). Even without finding explicit solutions, however, we can obtain important qualitative information about the solutions from the eigenvalues of A.

Figure 5.1: Classification of the phase-portraits of 2 × 2 matrices
Writing
A = ( a₁₁  a₁₂ )
    ( a₂₁  a₂₂ ),

we denote by p = a₁₁ + a₂₂ the trace of A, by q = a₁₁a₂₂ − a₁₂a₂₁ the determinant of A, and by d = p² − 4q the discriminant of the characteristic polynomial
λ² − pλ + q = 0

of A. Depending on the magnitudes of p, q , and d, the solution curves near the origin are shown in Figure 5.1.
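The case analysis that follows can be summarized in a short sketch (a coarse classifier only: it does not distinguish nodes from improper nodes within a sink or source, and the sample matrices are arbitrary choices):

```python
def classify(a11, a12, a21, a22):
    # classify the origin for x' = Ax by the trace p, determinant q,
    # and discriminant d, following the cases of Figure 5.1
    p = a11 + a22
    q = a11 * a22 - a12 * a21
    d = p * p - 4 * q
    if q < 0:
        return "saddle"
    if q == 0:
        return "degenerate"
    if p == 0:
        return "center"
    kind = "spiral " if d < 0 else ""
    return kind + ("sink" if p < 0 else "source")

print(classify(1.0, 0.0, 0.0, -1.0))    # q = -1 < 0
print(classify(0.0, -1.0, 1.0, 0.0))    # p = 0, q = 1
print(classify(-1.0, -1.0, 1.0, -1.0))  # p = -2, q = 2, d = -4
```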

Case q < 0.
The matrix A has real eigenvalues of opposite signs. In this case the origin is called a saddle. The trajectories are graphs of the power functions
x₂ = cx₁^α, α < 0, c ∈ R,
or the line x₁ = 0. If p = 0 then α = −1 and the graphs are hyperbolas.

Case p < 0, q > 0.

The eigenvalues of A have negative real parts; the phase-portrait is called a sink and is labeled as stable in Figure 5.1. It has the characteristic property that
lim_{t→+∞} x(t) = 0

for every solution x(t).
If d > 0 then the eigenvalues are distinct negative real numbers λ < μ < 0. This type of sink is called a node. After a change of coordinates the solutions are
x(t) = (c₁e^{λt}, c₂e^{μt}).
Therefore the trajectories are graphs of the power functions
x₂ = cx₁^α, α = μ/λ > 0, c = c₂/c₁^α,
or the line x₁ = 0.
If d = 0 then λ = μ = p/2. If A is diagonalizable, then A = λI. The trajectories are straight lines issuing from the origin, i.e. the graphs of
x₂ = cx₁, c ∈ R,
or the line x₁ = 0. This case is called a stable focus and is labeled in the diagram as a stable star. If A is not diagonalizable, its canonical form is
A = ( λ  0 )
    ( 1  λ ).
This is an instance of an improper node, also called a degenerate node. In view of Problem 5.9, the solutions are
x₁ = c₁e^{λt},
x₂ = c₂e^{λt} + c₁te^{λt}.


Figure 5.2: Improper node, λ = −1
The trajectories are graphs of the functions
x₂ = (c + (1/λ) ln x₁) x₁, c ∈ R,
or the line x₁ = 0, and are plotted in Figure 5.2.
If d < 0, then the eigenvalues λ = a ± ib have nonzero imaginary parts. This is an instance of a spiral sink. The trajectories are spirals circling down to the origin. Indeed, in a suitable basis
e^{tA} = e^{ta} ( cos tb  −sin tb )
                ( sin tb   cos tb ),
a combination of a rotation around the origin by tb radians and a stretching by a factor of e^{ta}. The trajectories are directed counterclockwise if b > 0 and clockwise if b < 0.

Case p > 0, q > 0.

This case is exactly the same as the previous one, except that the direction of motion is reversed. All such configurations are called sources and are labeled in the diagram as unstable.

Case p = 0, q > 0.
The eigenvalues are pure imaginary. This is called a center. It is characterized by the property that all solutions are periodic with the same period. In a suitable basis A has the form
A = ( 0  −a )
    ( a   0 ).
Since
e^{tA} = ( cos ta  −sin ta )
         ( sin ta   cos ta ),
we have that for any solution
x(t + 2π/a) = x(t).
Furthermore, since e^{tA}x rotates x around the origin by an angle of ta radians, it follows that the trajectories are concentric circles around the origin, directed counterclockwise if a > 0 and clockwise if a < 0. In an arbitrary basis the trajectories are ellipses.

Case q = 0.
In a suitable basis
A = ( λ  0 )
    ( 0  0 ),
so that
e^{tA} = ( e^{λt}  0 )
         ( 0       1 ).

On the line x₁ = 0 the trajectories are fixed points. All other trajectories lie on horizontal lines x₂ = const and are directed towards the line x₁ = 0 if λ < 0, or in the opposite direction if λ > 0. In the original basis the solution lines are slanted as shown in Figure 5.1. A degenerate subcase of this one is when A = 0; then all solution curves are fixed points.
The geometric interpretation of x′ = Ax is as follows. The map A : Rⁿ → Rⁿ which sends x ↦ Ax is a vector field on Rⁿ. Given a point K ∈ Rⁿ, there is a unique curve x(t) = e^{tA}K which starts at K at time zero and is a solution to the equation. The tangent vector to this curve at a time t₀ is the vector Ax(t₀) of the vector field at the point x(t₀). We may think of the points of Rⁿ as flowing simultaneously along these solution curves. The position of a point x ∈ Rⁿ at time t is denoted by φ_t(x) = e^{tA}x. Thus for each t ∈ R we have a map
φ_t : Rⁿ → Rⁿ, φ_t(x) = e^{tA}x.

The collection of maps {φ_t}_{t∈R} is called the flow corresponding to the differential equation x′ = Ax. This flow has the basic property
φ_{s+t} = φ_s ∘ φ_t,
which is just another way of writing e^{(s+t)A} = e^{sA}e^{tA}. This flow is called linear because each map φ_t : Rⁿ → Rⁿ is a linear map. We shall later study more general nonlinear flows.




Problem 5.15. Find the general solution to the equation x′ = Ax for each of the following matrices:
a) A = ( 1  2 )    b) A = ( 0  1 )    c) A = ( 1  0  1 )
       ( 1  2 )           ( 1  0 )           ( 0  0  1 )
                                             ( 0  0  0 ).
Solution. b) The powers of A are
I, A, A² = I, A³ = A, . . .
Therefore
e^{tA} = I + tA + (t²/2!)A² + (t³/3!)A³ + · · ·
       = ( 1 + t²/2! + t⁴/4! + · · ·    t + t³/3! + t⁵/5! + · · · )
         ( t + t³/3! + t⁵/5! + · · ·    1 + t²/2! + t⁴/4! + · · · )
       = ( cosh t  sinh t )
         ( sinh t  cosh t ).
Thus
x(t) = ( cosh t  sinh t ) ( a )
       ( sinh t  cosh t ) ( b ),
where a, b ∈ R.
Alternative method. From the characteristic equation for A, λ² − 1 = 0, we find λ₁ = 1, λ₂ = −1. Thus in a suitable basis
x(t) = ( c₁e^t )
       ( c₂e^{−t} ).
In the original basis we have that
x(t) = ( a₁₁e^t + a₁₂e^{−t} )
       ( a₂₁e^t + a₂₂e^{−t} )
for some coefficients aᵢⱼ ∈ R to be determined. Substituting in the equation we get
( a₁₁e^t − a₁₂e^{−t} )   ( a₂₁e^t + a₂₂e^{−t} )
( a₂₁e^t − a₂₂e^{−t} ) = ( a₁₁e^t + a₁₂e^{−t} ).
Hence a₁₁ = a₂₁ = a, a₁₂ = −a₂₂ = b, and the general solution is
x(t) = ( ae^t + be^{−t} )
       ( ae^t − be^{−t} ).
We leave it to the reader to show that the two solutions are the same if one uses a suitable choice of constants a, b ∈ R.
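A quick check that this general solution for b) satisfies x₁′ = x₂, x₂′ = x₁ (the sample constants are arbitrary choices):

```python
import math

# x(t) = (a cosh t + b sinh t, a sinh t + b cosh t) should solve
# x1' = x2, x2' = x1, the system for A = [[0, 1], [1, 0]]
a, b = 0.3, -0.7

def x1(t):
    return a * math.cosh(t) + b * math.sinh(t)

def x2(t):
    return a * math.sinh(t) + b * math.cosh(t)

h, t = 1e-6, 1.1
d1 = (x1(t + h) - x1(t - h)) / (2 * h)
d2 = (x2(t + h) - x2(t - h)) / (2 * h)
ok1 = abs(d1 - x2(t)) < 1e-5
ok2 = abs(d2 - x1(t)) < 1e-5
print(ok1, ok2)
```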


Problem 5.16. Find the solutions satisfying the initial conditions
a) x(0) = ( 0 )    b) x(0) = ( 0 )    c) x(0) = ( 0 )
          ( 2 )              ( 1 )              ( 1 )
                                                ( 1 )
from the general solutions obtained in Problem 5.15.

Problem 5.17. Show that for any A ∈ Mₙ(R), e^{tA} is invertible. What is (e^{tA})⁻¹?

Problem 5.18. Let A ∈ L(Rⁿ) leave a subspace E ⊂ Rⁿ invariant. If x(t₀) ∈ E, show that the solution x(t) to x′ = Ax remains in E for all time.

Problem 5.19. Suppose A ∈ L(Rⁿ) has a real eigenvalue λ < 0. Then the equation x′ = Ax has at least one non-trivial solution x(t) such that
lim_{t→+∞} x(t) = 0.

Problem 5.20. Let φ_t : R² → R² be the flow corresponding to the equation x′ = Ax. Show that φ_t is a linear map. Then show that φ_t preserves area if and only if Tr A = 0, that is, p = 0 in Figure 5.1. In this case the origin is not a sink or a source. Hint. An operator Rⁿ → Rⁿ is area-preserving if and only if the absolute value of its determinant (Jacobian if it is nonlinear) is 1.

Problem 5.21. Let A ∈ M₃(R) be invertible. Show that x′ = Ax has a nonperiodic solution. Give an example of a matrix A for which all solutions are periodic.


The Inhomogeneous Equation

In this section we find the solution to the inhomogeneous equation of the form
x′ = Ax + B(t),        (5.5)
where A ∈ Mₙ(R) and B : R → Rⁿ is a continuous map. The dependence on t in the inhomogeneous term B(t) makes the system non-autonomous. Our ansatz is to look for solutions of the form x(t) = e^{tA}f(t), where f : R → Rⁿ is some differentiable curve. The method is also called variation of constants because when B = 0, f(t) is a constant. Every solution can in fact be written in this form since e^{tA} is invertible. Differentiating x(t) we get
x′(t) = Ae^{tA}f(t) + e^{tA}f′(t).


Since x is assumed to be a solution to (5.5),
f′(t) = e^{−tA}B(t).
By integration
f(t) = ∫₀ᵗ e^{−sA}B(s) ds + K,
so as candidate for a solution to (5.5) we have
x(t) = e^{tA} ( x(0) + ∫₀ᵗ e^{−sA}B(s) ds ).        (5.6)



To check that (5.6) is a solution to (5.5), we differentiate x(t):
x′(t) = e^{tA}e^{−tA}B(t) + Ae^{tA} ( x(0) + ∫₀ᵗ e^{−sA}B(s) ds ) = B(t) + Ax(t).

Thus (5.6) is indeed a solution to (5.5). Let x(t) and y(t) be two solutions of the inhomogeneous equation (5.5). Then their difference z(t) = y(t) − x(t) satisfies z′ = Az. Since y(t) = z(t) + x(t), we see that the general solution to the inhomogeneous equation (5.5) is the sum of one particular solution and the general solution of the corresponding homogeneous equation.

Problem 5.22. Find the general solution to x′ = Ax + B, where
A = ( 0  −1 )       B(t) = ( 0 )
    ( 1   0 ),             ( t ).
Solution. We have
e^{−sA} = (  cos s  sin s )
          ( −sin s  cos s ).

So, the integral in (5.6) is
∫₀ᵗ (  cos s  sin s ) ( 0 )        ∫₀ᵗ ( s sin s )      ( sin t − t cos t     )
    ( −sin s  cos s ) ( s ) ds  =      ( s cos s ) ds =  ( cos t + t sin t − 1 ).
Hence, the general solution is
( x₁(t) )   ( cos t  −sin t ) ( sin t − t cos t + K₁     )
( x₂(t) ) = ( sin t   cos t ) ( cos t + t sin t − 1 + K₂ ),
where K₁, K₂ ∈ R are arbitrary constants. From the explicit form of the solution we also see that
x(0) = ( x₁(0) )   ( K₁ )
       ( x₂(0) ) = ( K₂ ).
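One can check numerically that this really solves x′ = Ax + B; here is a sketch for the particular solution with K₁ = K₂ = 0:

```python
import math

# particular solution of Problem 5.22 (constants K1 = K2 = 0)
def x(t):
    c, s = math.cos(t), math.sin(t)
    y1 = s - t * c            # integral components computed in the text
    y2 = c + t * s - 1.0
    return [c * y1 - s * y2, s * y1 + c * y2]

# check x' = Ax + B with A = [[0, -1], [1, 0]] and B(t) = (0, t)
h, t = 1e-6, 0.8
d = [(x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2)]
xt = x(t)
rhs = [-xt[1], xt[0] + t]
ok1 = abs(d[0] - rhs[0]) < 1e-5
ok2 = abs(d[1] - rhs[1]) < 1e-5
print(ok1, ok2)
```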


Problem 5.23. Find all solutions to the following equations
a) x″ − 4x = cos t,   b) x″ − 4x = t² + 1,
or systems x′ = Ax + B, where
c) A = ( 0  1 )  B = ( 0 )      d) A = ( 0  1 )  B = ( 0      )
       ( 1  0 ),     ( 2 ),            ( 4  0 ),     ( sin 2t ),
e) A = ( 1  1  0 )       ( 0     )
       ( 0  2  0 )   B = ( t     )
       ( 0  0  2 ),      ( sin t ).

Problem 5.24. Find all solutions to the second order differential equations
a) s″ − 3s′ + 2s = t³,   b) s″ + 4s′ + s = cos 2t,
by reducing them to first order linear systems.


L(Rⁿ) The space of all linear mappings on Rⁿ; L(Rⁿ) = Mₙ(R). 31

Mₙ(R) The space of n × n matrices with real coefficients and the usual operations of addition and scalar multiplication of matrices. 31

angular momentum mr²θ̇. 12

areal velocity The rate at which the position vector sweeps out area, Ȧ = ½r²θ̇. Areal can be confused with aerial. 12

configuration space In classical mechanics, the configuration space is the space of possible positions that a physical system may attain, possibly subject to external constraints. For example, the configuration space of a single particle moving in ordinary Euclidean 3-space is just R³. For n particles the configuration space is R³ⁿ, or possibly the subspace where no two positions are equal. 8

conservative force field In vector calculus, a conservative force/vector field is a vector field which is the gradient of a function, known in this context as a scalar potential. Conservative vector fields have the property that the line integral from one point to another is independent of the choice of path connecting the two points: it is path independent. Conversely, path independence is equivalent to the vector field being conservative. 9

force field In physics, a force field is a vector field that describes a non-contact force acting on a particle at various positions in space. A non-contact force is a force applied to an object by another body that is not in direct contact with it. The most common example of a non-contact force is gravity. 8

phase space In mathematics and physics, a phase space is a space in which all possible states of a system are represented, with each possible state of the system corresponding to one unique point in the phase space. For mechanical systems, the phase space usually consists of all possible values of position and momentum variables. 13


state space The same as phase space. 13

vector field In vector calculus, a vector field is an assignment of a vector to each point in a subset of Euclidean space. 3

work Work of a force is the line integral of its scalar tangential component along the path of its application point. 16