Alpha C. Chiang
Professor o f Economics The University of Connecticut

McGraw-Hill, Inc.
New York  St. Louis  San Francisco  Auckland  Bogotá  Caracas  Lisbon  London  Madrid  Mexico  Milan  Montreal  New Delhi  Paris  San Juan  Singapore  Sydney  Tokyo  Toronto

ELEMENTS OF DYNAMIC OPTIMIZATION

International Editions 1992

Exclusive rights by McGraw-Hill Book Co.-Singapore for manufacture and export. This book cannot be re-exported from the country to which it is consigned by McGraw-Hill.

Copyright © 1992 by McGraw-Hill, Inc. All rights reserved. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system, without the prior written permission of the publisher.

5 6 7 8 9 0 KHL SW 9 6 5

This book was set in Century Schoolbook by Science Typographers, Inc. The editor was Scott D. Stratford; the production supervisor was Denise Puryear. Project supervision was done by Science Typographers, Inc. The cover was designed by John Hite. Art was prepared electronically by Science Typographers, Inc.

When ordering this title, use ISBN 0-07-112568-X. Printed in Singapore.

Library of Congress Cataloging-in-Publication Data

Chiang, Alpha C. (date)
    Elements of dynamic optimization / Alpha C. Chiang.
        p.  cm.
    Includes bibliographical references and index.
    ISBN 0-07-010911-7
    1. Mathematical optimization.  2. Economics, Mathematical.  I. Title.
    HB143.7.C45  1992
    330'.01'51  dc20                                    91-37803

To my parents
Rev. & Mrs. Peh-shi Chiang
with gratitude

Alpha C. Chiang received his Ph.D. from Columbia University in 1954, after earning the B.A. in 1946 at St. John's University (Shanghai, China) and the M.A. in 1948 at the University of Colorado. He has been a professor of economics at the University of Connecticut since 1964, but had also taught at Denison University, New Asia College (Hong Kong), and Cornell University. A holder of Ford Foundation and National Science Foundation fellowships, he served as Chairman of the Department of Economics at Denison from 1961 to 1964. His publications include Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984.

CONTENTS

Preface  xi

Part 1  Introduction

1  The Nature of Dynamic Optimization  3
1.1  Salient Features of Dynamic Optimization Problems  4
1.2  Variable Endpoints and Transversality Conditions  8
1.3  The Objective Functional  12
1.4  Alternative Approaches to Dynamic Optimization  17

Part 2  The Calculus of Variations  25

2  The Fundamental Problem of the Calculus of Variations  27
2.1  The Euler Equation  28
2.2  Some Special Cases  37
2.3  Two Generalizations of the Euler Equation  45
2.4  Dynamic Optimization of a Monopolist  49
2.5  Trading Off Inflation and Unemployment  54

3  Transversality Conditions for Variable-Endpoint Problems
3.1  The General Transversality Condition
3.2  Specialized Transversality Conditions
3.3  Three Generalizations
3.4  The Optimal Adjustment of Labor Demand

4  Second-Order Conditions  79
4.1  Second-Order Conditions  79
4.2  The Concavity/Convexity Sufficient Condition  81
4.3  The Legendre Necessary Condition  91
4.4  First and Second Variations  95

5  Infinite Planning Horizon
5.1  Methodological Issues of Infinite Horizon
5.2  The Optimal Investment Path of a Firm
5.3  The Optimal Social Saving Behavior
5.4  Phase-Diagram Analysis
5.5  The Concavity/Convexity Sufficient Condition Again

6  Constrained Problems
6.1  Four Basic Types of Constraints
6.2  Some Economic Applications Reformulated
6.3  The Economics of Exhaustible Resources

Part 3  Optimal Control Theory  159

7  Optimal Control: The Maximum Principle  161
7.1  The Simplest Problem of Optimal Control  162
7.2  The Maximum Principle  167
7.3  The Rationale of the Maximum Principle  177
7.4  Alternative Terminal Conditions  181
7.5  The Calculus of Variations and Optimal Control Theory Compared  191
7.6  The Political Business Cycle  193
7.7  Energy Use and Environmental Quality  200

8  More on Optimal Control  205
8.1  An Economic Interpretation of the Maximum Principle  205
8.2  The Current-Value Hamiltonian  210
8.3  Sufficient Conditions  214
8.4  Problems with Several State and Control Variables  221
8.5  Antipollution Policy  234

9  Infinite-Horizon Problems  240
9.1  Transversality Conditions  240
9.2  Some Counterexamples Reexamined  244
9.3  The Neoclassical Theory of Optimal Growth  253
9.4  Exogenous and Endogenous Technological Progress  264

10  Optimal Control with Constraints  275
10.1  Constraints Involving Control Variables  275
10.2  The Dynamics of a Revenue-Maximizing Firm
10.3  State-Space Constraints
10.4  Economic Examples of State-Space Constraints
10.5  Limitations of Dynamic Optimization

Answers to Selected Exercise Problems
Index

PREFACE

In recent years I have received many requests to expand my Fundamental Methods of Mathematical Economics to include the subject of dynamic optimization. Since the existing size of that book would impose a severe space constraint, I decided to present the topic of dynamic optimization in a separate volume. Separateness notwithstanding, the present volume can be considered as a continuation of Fundamental Methods of Mathematical Economics.

As the title Elements of Dynamic Optimization implies, this book is intended as an introductory text rather than an encyclopedic tome. While the basics of the classical calculus of variations and its modern cousin, optimal control theory, are explained thoroughly, differential games and stochastic control are not included. Dynamic programming is explained in the discrete-time form; I exclude the continuous-time version because it needs partial differential equations as a prerequisite, which would have taken us far afield.

Although the advent of optimal control theory has caused the calculus of variations to be overshadowed, I deem it inadvisable to dismiss the topic of variational calculus. For one thing, a knowledge of the latter is indispensable for understanding many classic economic papers written in the calculus-of-variations mold, and the method is used even in recent writings. Besides, a background in the calculus of variations facilitates a better and fuller understanding of optimal control theory. The reader who is only interested in optimal control theory may, if desired, skip Part 2 of the present volume. But I would strongly recommend reading at least the following: Chap. 2 (the Euler equation), Sec. 4.2 (checking concavity/convexity), and Sec. 5.1 (methodological issues of infinite horizon, relevant also to optimal control theory).

Certain features of this book are worth pointing out. In developing the Euler equation, I supply more details than most other books in order that the reader can better appreciate the beauty of the logic involved (Sec. 2.1). The discussion of mathematical techniques is always reinforced with numerical illustrations, economic examples, and exercise problems. In the numerical illustrations, I purposely present the shortest-distance problem, a simple problem with a well-known solution, in several different alternative formulations, and use it as a running thread through the book.

In the choice of economic examples, my major criterion is the suitability of the economic models as illustrations of the particular mathematical techniques under study. Although recent economic applications are natural candidates for inclusion, I have not shied away from classic articles. Some classic articles are not only worth studying in their own right, but also turn out to be excellent for illustrative purposes because their model structures are uncluttered with secondary complicating assumptions. As a by-product, the juxtaposition of old and new economic models also provides an interesting glimpse of the development of economic thought. A comparison of the classic Evans model of dynamic monopolist (Sec. 2.4) with the more recent model of Leland on the dynamics of a revenue-maximizing firm (Sec. 10.2), for instance, illustrates one of the many developments in microeconomic reorientation. Similarly, from the classic Hotelling model of exhaustible resources (Sec. 6.3) to the Forster models of energy use and pollution (Sec. 7.7), one sees the shift in the focus of societal concerns from resource-exhaustion to environmental quality. And from the classic Ramsey growth model (Sec. 5.3) through the neoclassical growth model of Cass (Sec. 9.3) to the Romer growth model with endogenous technological progress (Sec. 9.4), one sees a progressive refinement in the analytical framework.

In connection with infinite-horizon problems, I attempt to clarify some common misconceptions about the conditions for convergence of improper integrals (Sec. 5.1). I also try to argue that the alleged counterexamples in optimal control theory against infinite-horizon transversality conditions may be specious, since they involve a failure to recognize the presence of implicit fixed terminal states in those examples (Sec. 9.2).

In line with my pedagogical philosophy, I attempt to explain each economic model in a step-by-step manner from its initial construction through the intricacies of mathematical analysis to its final solution. Even though the resulting lengthier treatment requires limiting the number of economic models presented, I believe that the detailed guidance is desirable because it serves to minimize the trepidation and frustration often associated with the learning of mathematics. To maintain a sense of continuity with Fundamental Methods of Mathematical Economics, I have written this volume with a comparable level of expository patience, and, I hope, clarity and readability.

In the writing of this book, I have benefited immensely from the numerous comments and suggestions of Professor Bruce A. Forster of the University of Wyoming, whose keen eyes caught many sins of commission and omission in the original manuscript. Since I did not accept all his suggestions, however, I alone should be held responsible for the remaining imperfections. Many of my students over the years, on whom I tried the earlier drafts of this book, also helped me with their questions and reactions. Thanks are also due to Edward T. Dowling for ferreting out some typographical errors that lurked in the initial printing of the book. To all of them, I am deeply grateful. Scott D. Stratford, my editor, exerted the right amount of encouragement and pressure at critical moments to keep me going. And the cooperative efforts of Joseph Murphy at McGraw-Hill, Inc., and of Cheryl Kranz, Sarah Roesser, and Ellie Simon at Science Typographers, Inc., made the production process smooth as well as pleasant. Finally, my wife Emily again offered me unstinting assistance on manuscript preparation.

Alpha C. Chiang



CHAPTER 1
THE NATURE OF DYNAMIC OPTIMIZATION

Optimization is a predominant theme in economic analysis. For this reason, the classical calculus methods of finding free and constrained extrema and the more recent techniques of mathematical programming occupy an important place in the economist's everyday tool kit. Useful as they are, such tools are applicable only to static optimization problems. The solution sought in such problems usually consists of a single optimal magnitude for every choice variable, such as the optimal level of output per week and the optimal price to charge for a product. It does not call for a schedule of optimal sequential action.

In contrast, a dynamic optimization problem poses the question of what is the optimal magnitude of a choice variable in each period of time within the planning period (discrete-time case) or at each point of time in a given time interval, say [0, T] (continuous-time case). It is even possible to consider an infinite planning horizon, so that the relevant time interval is [0, ∞), literally "from here to eternity." The solution of a dynamic optimization problem would thus take the form of an optimal time path for every choice variable, detailing the best value of the variable today, tomorrow, and so forth, till the end of the planning period. Throughout this book, we shall use the asterisk to denote optimality; in particular, the optimal time path of a (continuous-time) variable y will be denoted by y*(t).

1.1  SALIENT FEATURES OF DYNAMIC OPTIMIZATION PROBLEMS

Although dynamic optimization is mostly couched in terms of a sequence of time, it is also possible to envisage the planning horizon as a sequence of stages in an economic process. In that case, dynamic optimization can be viewed as a problem of multistage decision making. The distinguishing feature, however, remains the fact that the optimal solution would involve more than one single value for the choice variable.

Multistage Decision Making

The multistage character of dynamic optimization can be illustrated with a simple discrete example. Suppose that a firm engages in transforming a certain substance from an initial state A (the raw material state) into a terminal state Z (the finished product state) through a five-stage production process. In every stage, the firm faces the problem of choosing among several possible alternative subprocesses, each entailing a specific cost. The question is: How should the firm select the sequence of subprocesses through the five stages in order to minimize the total cost?

In Fig. 1.1, we illustrate such a problem by plotting the stages horizontally and the states vertically. The initial state A is shown by the leftmost point (at the beginning of stage 1), and the terminal state Z is shown by the rightmost point (at the end of stage 5). The other points B, C, ..., K show the various intermediate states into which the substance may be transformed during the process. These points (A, B, ..., Z) are referred to as vertices. To indicate the possibility of transformation from state A to state B, we draw an arc from point A to point B; the other arc AC shows that the substance can also be transformed into state C instead of state B. Each arc is assigned a specific value, in the present example a cost, shown in a circle in Fig. 1.1. The first-stage decision is whether to transform the raw material into state B (at a cost of $2) or into state C (at a cost of $4), that is, whether to choose arc AB or arc AC. Once that decision is made, there will arise another problem of choice in stage 2, and so forth, till state Z is reached. Our problem is to choose a connected sequence of arcs going from left to right, starting at A and terminating at Z, such that the sum of the values of the component arcs is minimized. Such a sequence of arcs will constitute an optimal path.

[FIGURE 1.1  The five-stage problem, with stages plotted horizontally, states plotted vertically, and arc costs shown in circles.]

The example in Fig. 1.1 is simple enough that a solution may be found by enumerating all the admissible paths from A to Z and picking the one with the least total arc values. For the time being, let us just note that the optimal solution for the present example is the path ACEHJZ, with $14 as the minimum cost of production. This solution serves to point out a very important fact: a myopic, one-stage-at-a-time optimization procedure will not in general yield the optimal path! A myopic decision maker would have chosen arc AB over arc AC in the first stage, because the former involves only half the cost of the latter; yet, over the span of all five stages, the more costly first-stage arc AC should be selected instead. It is precisely for this reason that a method that can take into account the entire planning period needs to be developed. For more complicated problems, a systematic method of attack is needed; this we shall discuss later when we introduce dynamic programming in Sec. 1.4.

The Continuous-Variable Version

The example in Fig. 1.1 is characterized by a discrete stage variable, which takes only integer values. Also, the state variable is assumed to take values belonging to a small finite set, {A, B, ..., Z}. Both the stage variable and the state variable may, however, be continuous.
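The five-stage least-cost problem can be checked numerically by backward induction, the idea behind the dynamic programming approach mentioned above. A minimal sketch in Python: the text fixes only arc AB ($2), arc AC ($4), and the optimal path ACEHJZ with total cost $14, so every other arc cost below is a hypothetical value chosen to be consistent with those facts.

```python
# Backward-induction (dynamic programming) check of the five-stage example.
# Only arcs AB ($2) and AC ($4) and the optimum ACEHJZ ($14) are given in
# the text; the remaining arc costs are hypothetical, consistent choices.

arcs = {
    'A': {'B': 2, 'C': 4},
    'B': {'D': 7, 'E': 6},
    'C': {'D': 6, 'E': 1},
    'D': {'G': 3, 'H': 5},
    'E': {'G': 4, 'H': 3},
    'G': {'I': 5, 'J': 4},
    'H': {'I': 6, 'J': 4},
    'I': {'Z': 3},
    'J': {'Z': 2},
}

def solve(arcs, terminal='Z'):
    """Backward induction: cost-to-go and best successor for every vertex."""
    best = {terminal: (0, None)}
    while len(best) <= len(arcs):            # until every vertex is solved
        for v, succ in arcs.items():
            if v in best or any(s not in best for s in succ):
                continue                     # successors not all solved yet
            nxt = min(succ, key=lambda s: succ[s] + best[s][0])
            best[v] = (succ[nxt] + best[nxt][0], nxt)
    return best

def optimal_path(arcs, start='A', terminal='Z'):
    """Trace the least-cost path and its total cost from the DP table."""
    best = solve(arcs, terminal)
    path, v = [start], start
    while v != terminal:
        v = best[v][1]
        path.append(v)
    return ''.join(path), best[start][0]

def myopic_path(arcs, start='A', terminal='Z'):
    """One-stage-at-a-time choice: always take the cheapest next arc."""
    path, cost, v = [start], 0, start
    while v != terminal:
        nxt = min(arcs[v], key=arcs[v].get)
        cost += arcs[v][nxt]
        path.append(nxt)
        v = nxt
    return ''.join(path), cost

print(optimal_path(arcs))    # ('ACEHJZ', 14): the path cited in the text
print(myopic_path(arcs))     # a costlier route: the cheap first arc AB backfires
```

With these hypothetical costs the myopic route is ABEHJZ at $17, confirming the point in the text: choosing the cheaper first-stage arc AB ends up costing more over the full five stages.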

where. However. profit. In this case. of course. we shall prove this result by using the calculus of variations. But it must be emphasized that this symbol fundamentally differs from the composite-function symbol g[ f(x)]. say. g is a function of f . and denote them by yI(t). The solution of the problem will. Example 41. that results in a change in path value V. in the special case where the terrain is completely homogeneous. so that the transport cost per mile is a constant. 1. well known-so much so that one usually accepts it without demanding to see a proof of it. let us visualize Fig. the least-cost problem will simply reduce to a shortest-distance problem. a simple type of dynamic optimization problem would contain the following basic ingredients: 1 a given initial point and a given terminal point.2 will depict time paths. depend crucially on how the potential profit is related to and determined by the configuration of the capital path. The straight-line solution is. In the symbol V[y(t)]. not only on the distance traveled. In the latter. 2 a set of admissible paths from the initial point to the terminal point. but also on the topography on that path. many writers omit the "(t)" part of the symbol.) associated with the various paths. and a predetermined target capital stock equal to Z at time T. 2. Instead. the stage variable will represent time. There is also an infinite number of states on each path. because such a path entails the lowest total cost (has the lowest path value).2. In the next chapter (Sec. but a mapping from paths (curves) to real numbers (performance indices). As a concrete example. then the curves in Fig. g is in the final analysis a function of x. it should be clear that. where VI. this type of mapping is given a distinct name: functional. For most of the problems discussed in the following. and 4 a specified objective-either to maximize or to minimize the path value or performance index by choosing the optimal path. 
V should be understood to be a function of ''y(t)'' as such. y(0) for the initial state or y ( T ) for the terminal state. and write the functional as V[y] or V(y). and f is in turn a function of x.2 to be a map of an open terrain. From the preceding discussion. when we want to stress the specific time interval involved in a Set of admissible paths (curves) Y Set of path values (real line) FIGURE 13 . T I are capable of achieving the target capital at time T. for illustration. 1. consider a firm with an initial capital stock equal to A a t time 0. the y(t) component comes as an integral unit-to indicate time paths-and therefore we should not take V to be a function of t. 1. and the state variable representing the latitude. Note that when the symbol y is used to indicate a certain state. each state being the result of a particular choice made in a specific stage. thereby underscoring the fact that it is the change in the position of the entire y path-the variation in the y path-as against the change in t.3. To make clear this difference. For concreteness. in the path connotation.6 PART 1: INTRODUCTION CHAPTER 1 THE NATURE OF DYNAMIC OPTIMIZATION : 7 ous. The solution in that case is a straight-line path. for it represents a special sort of mapping-not a mapping from real numbers to real numbers as in the usual function. The problem of the firm is to identify the investment plan-hence the capital path-that yields the maximum potential profit. we can interpret the curves in Fig. the classical approach to the continuous version of dynamic optimization. TI. Let us think of the paths in question as time paths. Each possible path is now seen to travel through an infinite number of stages in the interval [0. And each investment plan implies a specific capital path and entails a specific potential profit for the firm. regardless of whether the variables are discrete or continuous. we have drawn only five possible paths from A to Z. on the other hand. of course. 
In the following.2 as possible capital paths and their path values as the corresponding profits. The Concept of a Functional The relationship between paths and path values deserves our close attention. we may instead have a situation as depicted in Fig. it is s u f i e d . etc. Many alternative investment plans over the time interval [O. 1.and so on. Then the mapping is as shown in Fig. 3 a set of path values serving as performance indices (cost. The cost associated with each possible path depends. The symbol we employ is V[y]. The general notation for the mapping should therefore be V[y(t)]. 1.2. in general. To further avoid confusion. and appears as. Our assigned task is to transport a load of cargo from location A to location Z at minimum cost by selecting an appropriate travel path. In contrast. with the stage variable representing the longitude. the t in y(t) is not assigned a specific value. yII(t). . thus. VII represent the associated path values.
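The distinction between a function and a functional can be made concrete in a few lines of code. In the sketch below, a path is represented as a Python function y(t), and the path-value functional is a purely hypothetical example, V[y] = the integral of y(t) from 0 to T, approximated by a midpoint Riemann sum. The two sample paths share the same endpoints yet receive different path values: V depends on the entire path, not on t or on the endpoints alone.

```python
# A function maps a number to a number; a functional maps an entire path
# (here, a Python function y) to a number. The functional V below is a
# hypothetical illustration: V[y] = integral of y(t) over [0, T].

T, N = 2.0, 100_000
h = T / N

def V(y):
    """Discretized functional: the argument is the whole path y, not a number."""
    return sum(y((k + 0.5) * h) for k in range(N)) * h   # midpoint rule

# Two admissible paths sharing the same endpoints y(0) = 1 and y(2) = 5:
y_straight = lambda t: 1 + 2 * t       # straight line
y_curved   = lambda t: 1 + t ** 2      # parabola

print(V(y_straight))   # ≈ 6.0: area under the straight line
print(V(y_curved))     # ≈ 4.6667: same endpoints, different path value
```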

As an aside, we may note that although the concept of a functional takes a prominent place primarily in dynamic optimization, we can find examples of it even in elementary economics. In the economics of the firm, for instance, the profit-maximizing output is found by the decision rule MC = MR (marginal cost = marginal revenue). Under purely competitive conditions, where the MR curves are horizontal as in Fig. 1.4a, each MR curve can be represented by a specific, exogenously determined price, say P_0. Given the MC curve, we can therefore express the optimal output as Q* = Q*(P_0), which is a function, mapping a real number (price) into a real number (optimal output). When the MR curves are downward-sloping under imperfect competition, as in Fig. 1.4b, however, we cannot express the optimal output as a function of price, as we can for its competitive counterpart. Instead, the optimal output of the firm with a given MC curve will depend on the specific position of the entire MR curve. Since the output decision now involves a mapping from curves to real numbers, we in fact have something in the nature of a functional: Q* = Q*[MR]. It is, of course, precisely this inability to express the optimal output as a function of price that makes it impossible to draw a supply curve for a firm under imperfect competition.

[FIGURE 1.4  Profit-maximizing output under (a) pure competition, with horizontal MR curves, and (b) imperfect competition, with downward-sloping MR curves.]

1.2  VARIABLE ENDPOINTS AND TRANSVERSALITY CONDITIONS

In our earlier statement of the problem of dynamic optimization, we simplified matters by assuming a given initial point [a given ordered pair (0, A)] and a given terminal point [a given ordered pair (T, Z)]. The assumption of a given initial point may not be unduly restrictive, because, in the usual problem, the optimizing plan must start from some specific initial position, say, the current position; we shall retain this assumption throughout most of the book. The terminal position, however, may very well turn out to be a flexible matter, with no inherent need for it to be predetermined. In such a case, the terminal point becomes a part of the optimal choice. With discretion over the terminal point, the planner obviously enjoys much greater freedom in the choice of the optimal path and, as a consequence, will be able to achieve a better, or at least no worse, optimal path value V* than if the terminal point is rigidly specified.

In this section we shall briefly discuss some basic types of variable terminal points. We shall take the stage variable to be continuous time, and we shall retain the symbols 0 and T for the initial time and terminal time, and the symbols A and Z for the initial and terminal states. When no confusion can arise, we may also use A and Z to designate the initial and terminal points (ordered pairs). Moreover, when we want to stress the specific time interval involved in a path or a segment thereof, we shall use the notation y[0, T]; otherwise, we shall simply use the y(t) symbol, or use the term "y path." The optimal time path is then denoted by y*(t), or the y* path.

Types of Variable Terminal Points

As the first type of variable terminal point, we may be given a fixed terminal time T, but have complete freedom to choose the terminal state (say, the terminal capital stock). While the planning horizon is fixed at time T, any point on the vertical line t = T is acceptable as a terminal point. This type of problem is commonly referred to in the literature as a fixed-time-horizon problem, or fixed-time problem, meaning that the terminal time of the problem is fixed rather than free. Though explicit about the time horizon, this name fails to give a complete description of the problem, for nothing is said about the terminal state; only by implication are we to understand that the terminal state is free. A more informative characterization of the problem is contained in the visual image of Fig. 1.5a. In line with this visual image, we shall alternatively refer to the fixed-time problem as the vertical-terminal-line problem.

To give an economic example of such a problem, suppose that a monopolistic firm is seeking to establish a (smooth) optimal price path, for purpose of profit maximization, over a given planning period, say, 12 months. The current price enters into the problem as the initial state, but the terminal price will be completely up to the firm to decide. Since negative prices are inadmissible, however, we must eliminate from consideration all P < 0. The result is a truncated vertical terminal line. If, in addition, an official price ceiling is expected to be in force at the terminal time t = T, then further truncation of the vertical terminal line is needed.
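The aside above can be sketched numerically. With a given MC curve, the competitive firm's optimal output Q* = Q*(P_0) is computed from a single number, while the imperfectly competitive firm's optimal output Q* = Q*[MR] must take the whole MR curve, passed as a function, as its argument. The linear MC and MR specifications below are hypothetical illustrations, not forms given in the text.

```python
# Function versus functional in the MC = MR rule. The linear marginal-cost
# and marginal-revenue curves here are hypothetical illustrations.

def MC(Q):
    return 2 + 0.5 * Q                  # hypothetical marginal cost curve

def competitive_output(P0):
    """Number -> number: solve MC(Q) = P0 for a horizontal MR at height P0."""
    return (P0 - 2) / 0.5

def monopoly_output(MR, lo=0.0, hi=1e6, tol=1e-9):
    """Curve -> number: solve MC(Q) = MR(Q) by bisection. The entire MR
    function, not any single number, is the argument of the mapping."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if MC(mid) < MR(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(competitive_output(6))               # 8.0, from the price alone
print(monopoly_output(lambda Q: 10 - Q))   # ≈ 5.3333, from the whole MR curve
```

Changing the slope of the MR curve while holding its price intercept fixed changes the monopoly output, which is exactly why no single number can stand in for the curve, and why no supply curve can be drawn.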

The second type of variable-terminal-point problem reverses the roles played by the terminal time and the terminal state. In the first type we face only a fixed terminal time; now the terminal state Z is rigidly stipulated (say, a target inflation rate), but the terminal time is free: we may select when to achieve the target. As illustrated in Fig. 1.5b, the horizontal line y = Z constitutes the set of admissible terminal points. Each of these may be associated with a different terminal time, as exemplified by T_1, T_2, and T_3, depending on the path chosen. This type of problem is commonly referred to as a fixed-endpoint problem. A possible confusion can arise from this name, because the word "endpoint" is used here to designate only the terminal state Z, not the entire endpoint in the sense of the ordered pair (T, Z). To take advantage of the visual image of the problem in Fig. 1.5b, we shall alternatively refer to the fixed-endpoint problem as the horizontal-terminal-line problem.

To give an economic example, the task of the planner might be that of producing a good with a particular quality characteristic (steel with a given tensile strength) at minimum cost, where the required quality is rigidly specified but there is complete discretion over the length of the production period. This permits the use of a lengthier, but less expensive, production method that might not be feasible under a rushed production schedule. Turning the problem around, it is also possible in a problem of this type to have minimum production time (rather than minimum cost) as the objective. This latter type of problem is called a time-optimal problem.

In the third type of variable-terminal-point problem, neither the terminal time T nor the terminal state Z is individually preset, but the two are tied together via a constraint equation of the form Z = φ(T). Such an equation plots as a terminal curve (or, in higher dimension, a terminal surface) that associates a particular terminal time (say, T_1) with a corresponding terminal state (say, Z_1). We shall call this type of problem the terminal-curve (or terminal-surface) problem. Even though the problem leaves both T and Z flexible, the planner actually has only one degree of freedom in the choice of the terminal point. Still, there is greater freedom of choice as compared with the case of a fully prescribed terminal point.

To give an economic example of a terminal curve, we may cite the case of a custom order for a product, for which the customer is interested in having both (1) an early date of completion and (2) a particular quality characteristic, say, low breakability. Being aware that both cannot be attained simultaneously, the customer accepts a tradeoff between the two considerations. Such a tradeoff may appear as the curve Z = φ(T) in Fig. 1.6, where y denotes breakability.

The preceding discussion pertains to problems with a finite planning horizon, where the terminal time T is a finite number. Later we shall also encounter problems in which the planning horizon is infinite (T → ∞).

Transversality Condition

The common feature of variable-terminal-point problems is that the planner has one more degree of freedom than in the fixed-terminal-point case. But this fact automatically implies that, in deriving the optimal solution, an extra condition is needed to pinpoint the exact path chosen. To see this, let us compare the boundary conditions for the optimal path in the fixed- versus the variable-terminal-point cases. In the former, the optimal path must satisfy the boundary (initial and terminal) conditions

    y(0) = A    and    y(T) = Z    (T, A, Z all given)

In the latter case, the initial condition y(0) = A still applies by assumption.

But since T and/or Z are now variable, the terminal condition y(T) = Z is no longer capable of pinpointing the optimal path for us: all admissible paths ending at Z, or at other possible terminal positions, equally satisfy that condition. What is needed, therefore, is a terminal condition that can conclusively distinguish the optimal path from the other admissible paths. Such a condition is referred to as a transversality condition, because it normally appears as a description of how the optimal path crosses the terminal line or the terminal curve (to "transverse" means "to go across").

Variable Initial Point

Although we have assumed that only the terminal point can vary, the discussion in this section can be adapted to a variable initial point as well. There may be a vertical initial line t = 0, indicating that the initial time 0 is given but the initial state is unrestricted; or there may be an initial curve depicting admissible combinations of the initial time and the initial state. If the initial point is variable, the characterization of the optimal path must also include another transversality condition in place of the equation y(0) = A, to describe how the optimal path crosses the initial line or initial curve. As an exercise, the reader is asked to sketch suitable diagrams similar to those in Fig. 1.5.

EXERCISE 1.2

1 Sketch suitable diagrams similar to those in Fig. 1.5 for the case of a variable initial point.
2 Suppose that y represents the price variable. How would an official price ceiling that is expected to take effect at time t = T affect the diagram?
3 Let y denote heat resistance in a product, a quality which takes longer production time to improve. How would you redraw the terminal curve in Fig. 1.5 to depict the tradeoff between an early completion date and high heat resistance?

1.3 THE OBJECTIVE FUNCTIONAL

The Integral Form of Functional

An optimal path is, by definition, one that maximizes or minimizes the path value V[y]. Inasmuch as any y path must perforce travel through an interval of time, the path value is the sum of the values of its component arcs. In the discrete-stage framework of Fig. 1.1, such a sum contains a finite number of terms; with continuous time, since each arc is infinitesimal in length, the continuous-time counterpart of such a sum is a definite integral, ∫₀ᵀ (arc value) dt.

But how do we express the "arc value" for the continuous case? To answer this, we must first be able to identify an "arc" on a continuous-time path. Fig. 1.1 suggests that three pieces of information are needed for arc identification: (1) the starting stage (time), (2) the starting state, and (3) the direction in which the arc proceeds. With continuous time, on a given path yI, the arc associated with a specific point of time t₀ is characterized by a unique height yI(t₀) and a unique slope yI'(t₀). In general, these three items are represented by, respectively: (1) t, (2) y(t), and (3) y'(t) = dy/dt. If there exists some function, F, that assigns arc values to arcs, then the value of the arc in question can be written as F[t₀, yI(t₀), yI'(t₀)]; the corresponding arc on another path yII takes a different value, F[t₀, yII(t₀), yII'(t₀)]. It follows that the general expression for arc values is F[t, y(t), y'(t)].

The definite integral sums the arc values on a y path into a path value, and the path-value functional—the sum of arc values—can generally be written as the definite integral

(1.1) V[y] = ∫₀ᵀ F[t, y(t), y'(t)] dt

Each different y path consists of a different set of arcs in the time interval [0, T], which, through the arc-value-assigning function F, takes a different set of arc values. It is thus the variation in the y path (yI versus yII) that alters the magnitude of V, as the symbol V[y] emphasizes. For simplicity, we shall often suppress the time argument (t) for the state variables and write the integrand function more concisely as F(t, y, y'). If there are two state variables, y and z, in the problem, the arc values on both the y and z paths must be taken into account, and the objective functional should then appear as

(1.2) V[y, z] = ∫₀ᵀ F(t, y, z, y', z') dt

A problem with an objective functional in the form of (1.1) or (1.2) constitutes the standard problem.

A Microeconomic Example

A functional of the standard form in (1.1) may arise, for instance, in the case of a profit-maximizing, long-range-planning monopolistic firm with a dynamic demand function Qd = D(P, P'), where P' = dP/dt.
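The path-value idea can be made concrete numerically: the functional assigns one number—the "sum" of infinitesimal arc values—to each admissible path. The arc-value function F below is an arbitrary illustration, not taken from the text.

```python
# A numerical rendering of the path value V[y] = integral of F[t, y(t), y'(t)] dt.

def F(t, y, dy):
    return t * y + dy ** 2        # hypothetical arc-value function F(t, y, y')

def path_value(y, dy, T=2.0, n=2000):
    """Trapezoidal approximation of V[y] over [0, T]."""
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * F(t, y(t), dy(t))
    return total * h

# Two admissible paths sharing the endpoints y(0) = 0 and y(2) = 2:
V_line = path_value(lambda t: t, lambda t: 1.0)                       # y = t
V_curve = path_value(lambda t: t ** 3 / 4, lambda t: 3 * t ** 2 / 4)  # y = t^3/4
print(V_line, V_curve)
```

Varying the path varies V: this is exactly the sense in which V[y] is a functional of the entire path rather than a function of finitely many variables.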

Assuming that the total-cost function depends only on the level of output, C = C(Q), and setting Q = Qd (to allow no inventory accumulation or decumulation), the firm's output should be Q = D(P, P'), so that we can write the composite function

C = C(Q) = C[D(P, P')]

and the total-revenue function is R = P·D(P, P') = R(P, P'). It follows that the total profit also depends on P and P':

π = R − C = R(P, P') − C[D(P, P')] = π(P, P')

Summing π over, say, a five-year period results in the objective functional

V[P] = ∫₀⁵ π(P, P') dt

This exemplifies the functional in (1.1), except that the argument t in the F function happens to be absent. To each price path in the time interval [0, 5], there must correspond a particular five-year profit figure, and the objective of the firm is to find the optimal price path P*[0, 5] that maximizes the five-year profit figure. Note, however, that if either the revenue function or the cost function can shift over time, then that function should contain t as a separate argument, and so will the π function. Moreover, the variable t can enter into the integrand via a discount factor e^(−ρt).

A Macroeconomic Example

Let the social welfare of an economy at any time be measured by the utility from consumption, U = U(C). Consumption is by definition that portion of output not saved (and not invested). If we adopt the production function Q = Q(K, L) and assume away depreciation, then C = Q(K, L) − K', where K' = I denotes net investment. This implies that the utility function can be rewritten as U(C) = U[Q(K, L) − K']. If the societal goal is to maximize the sum of utility over a period [0, T], then the corresponding objective functional would be exactly in the form of (1.2):

V[K, L] = ∫₀ᵀ U[Q(K, L) − K'] dt

where the two state variables y and z refer in the present example to K and L, and the F function contains only three arguments, F[K(t), L(t), K'(t)]—the t argument is absent. Note that while the integrand function of this example does contain both K and K' as arguments, the L variable appears only in its natural form, unaccompanied by L'.

Other Forms of Functional

Occasionally, the optimization criterion in a problem may not depend on any intermediate positions that the path goes through, but may rely exclusively on the position of the terminal point attained. In that event, since there is no need to sum the arc values over an interval, no definite integral arises. Rather, the objective functional appears as

(1.3) V[y] = G[T, y(T)]

where the G function is based on what happens at the terminal time T only. The G function may represent, say, the scrap value of some capital equipment. A problem with the type of objective functional in (1.3) is called a problem of Mayer; since only the terminal position matters in V, it is also known as a terminal-control problem.

As another possibility, it may happen that the definite integral in (1.1) and the terminal-point criterion in (1.3) enter simultaneously in the objective functional:

(1.4) V[y] = ∫₀ᵀ F(t, y, y') dt + G[T, y(T)]

Then we have a problem of Bolza. The functionals in (1.3) and (1.4) can again be expanded to include more than one state variable. Moreover, if the G function can shift over time, then it, too, would have t as an argument.
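For comparison, the three forms of objective functional introduced in this section can be collected in one display (numbering as in the text):

```latex
\begin{align*}
\text{standard problem:} \quad
  & V[y] = \int_0^T F\bigl[t, y(t), y'(t)\bigr]\,dt
  && (1.1)\\
\text{problem of Mayer:} \quad
  & V[y] = G\bigl[T, y(T)\bigr]
  && (1.3)\\
\text{problem of Bolza:} \quad
  & V[y] = \int_0^T F\bigl[t, y(t), y'(t)\bigr]\,dt + G\bigl[T, y(T)\bigr]
  && (1.4)
\end{align*}
```

Setting G ≡ 0 in (1.4) recovers the standard form, and setting F ≡ 0 recovers the problem of Mayer—a first hint of the convertibility of the three formulations.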

Although the problem of Bolza may seem to be the more general formulation, the truth is that the three types of problems—standard, Mayer, and Bolza—are all convertible into one another. A problem of Mayer can be transformed into a standard problem by defining a new variable

(1.6) z(t) ≡ G[t, y(t)] with initial condition z(0) = 0

It should be noted that it is "t" rather than "T" that appears in the G function in (1.6). Since z'(t) = dG[t, y(t)]/dt, we have

∫₀ᵀ z'(t) dt = z(t)|₀ᵀ = z(T) − z(0) = z(T) [by the initial condition] = G[T, y(T)]

so that the terminal-point criterion (1.3) can be replaced by the integral

(1.7) V[y] = ∫₀ᵀ z'(t) dt

Thus we have transformed a problem of Mayer into a standard problem. The integrand in (1.7), with the arguments t and z(t) absent, is easily recognized as a special case of the function F[t, z(t), z'(t)], so the functional (1.7) still falls into the general form of the objective functional (1.1). Once we have found the optimal z path, the optimal y path can be deduced through the relationship in (1.6). By a similar procedure, we can convert a problem of Bolza into a standard problem; this will be left to the reader. In view of this convertibility, we shall deem it sufficient to couch our discussion primarily in terms of the standard problem.

In a so-called "time-optimal problem," the objective is to move the state variable from a given initial value y(0) to a given terminal value y(T) in the least amount of time; that is, we wish to minimize the functional V[y] = ∫₀ᵀ dt = T. An economic example of this type of problem will be encountered later in the book.

EXERCISE 1.3

1 Suppose we are given a function D(t) which gives the desired level of the state variable at every point of time in the interval [0, T]. All deviations from D(t), positive or negative, are undesirable, because they inflict a negative payoff (cost, pain, or disappointment). To formulate an appropriate dynamic minimization problem, which of the following objective functionals would be acceptable? Why?
(a) ∫₀ᵀ [y(t) − D(t)] dt
(b) ∫₀ᵀ [y(t) − D(t)]² dt
(c) ∫₀ᵀ [D(t) − y(t)]³ dt
(d) ∫₀ᵀ |D(t) − y(t)| dt
2 (a) Taking it as a standard problem, write the specific form of the F function in (1.1) that will produce the preceding functional. (b) Taking it as a problem of Mayer, write the specific form of the G function in (1.3) that will produce the preceding functional.
3 Transform the objective functional (1.4) of the problem of Bolza into the format of the standard problem. [Hint: Introduce a new variable z(t) = G[t, y(t)], with initial condition z(0) = 0.]
4 Transform the objective functional (1.4) of the problem of Bolza into the format of the problem of Mayer. [Hint: Introduce a new variable z(t) characterized by z'(t) = F[t, y(t), y'(t)], with initial condition z(0) = 0.]

1.4 ALTERNATIVE APPROACHES TO DYNAMIC OPTIMIZATION

To tackle the previously stated problem of dynamic optimization, there are three major approaches. We have earlier mentioned the calculus of variations and dynamic programming; the remaining one, the powerful modern generalization of variational calculus, goes under the name of optimal control theory. We shall give a brief account of each.

The Calculus of Variations

Dating back to the late 17th century, the calculus of variations is the classical approach to the problem. One of the earliest problems posed is that of determining the shape of a surface of revolution that would encounter the least resistance when moving through some resisting medium. Isaac Newton solved this problem and stated his results in his Principia, published in 1687. Other mathematicians of that era (e.g., John and James Bernoulli) also studied problems of a similar nature. These problems can be represented by the following general formulation:

Maximize or minimize
(1.8) V[y] = ∫₀ᵀ F[t, y(t), y'(t)] dt
subject to y(0) = A (A given)
and y(T) = Z (T, Z given)

with the objective functional in the form of an integral.
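Exercise item 1 above turns on how a criterion treats deviations of opposite sign. A small numeric illustration (the desired path D(t) and the trial path are hypothetical choices):

```python
# With D(t) = 0 and a path that overshoots on [0, 1] and undershoots
# symmetrically on [1, 2], the signed criterion (a) nets out to zero even
# though the path is far from D(t), while the squared criterion (b)
# registers every deviation as a positive penalty.

import math

def integrate(g, a=0.0, b=2.0, n=2000):
    """Trapezoidal rule on [a, b]."""
    h = (b - a) / n
    return h * sum((0.5 if i in (0, n) else 1.0) * g(a + i * h)
                   for i in range(n + 1))

D = lambda t: 0.0                    # desired level of the state variable
y = lambda t: math.sin(math.pi * t)  # deviates in both directions

crit_a = integrate(lambda t: y(t) - D(t))         # form (a): signed deviations
crit_b = integrate(lambda t: (y(t) - D(t)) ** 2)  # form (b): squared deviations
print(crit_a, crit_b)
```

The same cancellation argument rules out the cubed criterion (c), while the absolute-value criterion (d) avoids it, at the cost of a kink at zero deviation.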

Such a problem, with an integral functional in a single state variable, with completely specified initial and terminal points, and with no constraints, is known as the fundamental problem (or simplest problem) of the calculus of variations. In order to make such problems meaningful, it is necessary that the functional be integrable (i.e., the integral must be convergent); we shall assume that this condition is met whenever we write an integral of the general form. Furthermore, we shall assume that all the functions that appear in the problem are continuous and continuously differentiable. This assumption is needed because the basic methodology underlying the calculus of variations closely parallels that of classical differential calculus. The main difference is that, instead of dealing with the differential dx that changes the value of y = f(x), we will now deal with the "variation" of an entire curve y(t) that affects the value of the functional V[y]. The study of variational calculus will occupy us in Part 2.

Optimal Control Theory

The continued study of variational problems has led to the development of the more modern method of optimal control theory. In optimal control theory, the dynamic optimization problem is viewed as consisting of three (rather than two) types of variables. Aside from the time variable t and the state variable y(t), consideration is given to a control variable u(t). Indeed, it is the latter type of variable that gives optimal control theory its name and occupies the center of stage in this new approach to dynamic optimization.

To focus attention on the control variable implies that the state variable is relegated to a secondary status. This would be acceptable only if the decision on a control path u(t) will, once given an initial condition on y, unambiguously determine a state-variable path y(t) as a by-product. For this reason, an optimal control problem must contain an equation that relates y to u:

y'(t) = f[t, y(t), u(t)]

Such an equation, called an equation of motion (or transition equation or state equation), shows how, at any moment of time, given the value of the state variable, the planner's choice of u will drive the state variable y over time. Once we have found the optimal control-variable path u*(t), the equation of motion would make it possible to construct the related optimal state-variable path y*(t).

The optimal control problem corresponding to the calculus-of-variations problem (1.8) is as follows:

Maximize or minimize
(1.9) V[u] = ∫₀ᵀ F[t, y(t), u(t)] dt
subject to y'(t) = f[t, y(t), u(t)]
y(0) = A (A given)
and y(T) = Z (T, Z given)

Note that not only does the objective functional contain u as an argument, but it has also been changed from V[y] to V[u]. This reflects the fact that u is now the ultimate instrument of optimization. In fact, this control problem is intimately related to the calculus-of-variations problem (1.8): by replacing y'(t) with u(t) in the integrand in (1.8), and adopting the differential equation y'(t) = u(t) as the equation of motion, we immediately obtain (1.9).

The single most significant development in optimal control theory is known as the maximum principle. This principle is commonly associated with the Russian mathematician L. S. Pontryagin and his associates, who were jointly awarded the 1962 Lenin Prize for Science and Technology.² The powerfulness of that principle lies in its ability to deal directly with certain constraints on the control variable u. Specifically, it allows the study of problems where the admissible values of u are confined to some closed, bounded convex set U. For instance, the set U may be the closed interval [0, 1], requiring 0 ≤ u(t) ≤ 1 throughout the planning period. If the marginal propensity to save is the control variable, then such a constraint, 0 ≤ s(t) ≤ 1, may very well be appropriate.

² The maximum principle is the product of the joint efforts of L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko. The English translation of their work, done by K. N. Trirogoff, is The Mathematical Theory of Optimal Processes, Interscience, New York, 1962. Some writers prefer to call the principle the minimum principle, which is the more appropriate name for it in a slightly modified formulation of the problem; we shall use the original name. Magnus R. Hestenes, an American mathematician, independently produced comparable work in a 1949 Rand Corporation report titled A General Problem in the Calculus of Variations with Applications to Paths of Least Time. But his work was not easily available until he published a paper that extends the results of Pontryagin: "On Variational Theory and Optimal Control Theory," Journal of SIAM, Series A: Control, Vol. 3, 1965, pp. 23–48. The expanded version of this work is contained in his book Calculus of Variations and Optimal Control Theory, Wiley, New York, 1966.
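The role of the equation of motion—turning a chosen control path u(t) into a state path y(t)—can be sketched numerically. The dynamics and the control path below are illustrative assumptions, with y'(t) = u(t) chosen to match the special case used in the text:

```python
# Forward-Euler integration of an equation of motion y'(t) = f(t, y, u(t)).
# Choosing u fixes the whole state path once y(0) is given, which is why
# optimization can be carried out over u rather than over y directly.

def state_path(f, u, y0, T=1.0, n=1000):
    """Return y(T) from y'(t) = f(t, y(t), u(t)), y(0) = y0."""
    h = T / n
    y = y0
    for i in range(n):
        t = i * h
        y += h * f(t, y, u(t))
    return y

f = lambda t, y, u: u                  # the special case y'(t) = u(t)
u = lambda t: 1.0 if t < 0.5 else 0.0  # a control confined to the closed set [0, 1]

yT = state_path(f, u, y0=0.0)
print(yT)  # y(1) equals the area under u(t), here 0.5
```

Under y' = u, replacing y' by u in the integrand converts the variational problem (1.8) into the control problem (1.9), as described in the text.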

In sum, the problem addressed by optimal control theory is (in its simple form) the same as (1.9), except that an additional constraint,

u(t) ∈ U for 0 ≤ t ≤ T

may be appended to it. In this light, the control problem (1.9) constitutes a special (unconstrained) case where the control set U is the entire real line. The detailed discussion of optimal control theory will be undertaken in Part 3.

Dynamic Programming

Pioneered by the American mathematician Richard Bellman,³ dynamic programming presents another approach to the control problem stated in (1.9). The most important distinguishing characteristics of this approach are two. First, it embeds the given control problem in a family of control problems, each of which is associated with a different initial point, with the consequence that in solving the given problem, we are actually solving the entire family of problems. Second, for each member of this family of problems, primary attention is focused on the optimal value of the functional, rather than on the properties of the optimal state path y*(t) (as in the calculus of variations) or the optimal control path u*(t) (as in optimal control theory). Since every component problem has a unique optimal path value, it is possible to write an optimal value function

V* = V*(i) (i = A, B, ..., Z)

which says that we can determine an optimal path value for every possible initial point. Such an optimal value function—assigning an optimal value to each individual member of this family of problems—is used as a characterization of the solution. From it, we can also construct an optimal policy function, which will tell us how best to proceed from any specific initial point i, that is, which sequence of arcs to select, leading from point i to the terminal point Z, in order to attain V*(i).

All this is best explained with a specific discrete illustration. Refer to Fig. 1.6.

FIGURE 1.6 A stage diagram of paths leading from an initial point A, through stages 1 to 5, to a terminal point Z (adapted from Fig. 1.1).

Given the original problem of finding the least-cost path from point A to point Z, we embed it by considering the larger problem of finding the least-cost path from each point in the set {A, B, C, ..., Z} to the terminal point Z. Here, aside from the genuine initial point A, we have adopted many pseudo initial points (B, C, etc.), thereby multiplying the task of solution. This is, however, not to be confused with the variable-initial-point problem, in which our task is to select one initial point as the best one; here, we are to consider every possible point as a legitimate initial point in its own right. There then exists a family of component problems. The problem involving the pseudo initial point Z is obviously trivial, for it does not permit any real choice or control; it is being included in the general problem for the sake of completeness and symmetry. But the component problems for the other pseudo initial points are not trivial. Our original problem has thus been "embedded" in a family of meaningful problems.

The purpose of the optimal value function and the optimal policy function is easy to grasp, but one may still wonder why we should go to the trouble of embedding the problem. The answer is that the embedding process is what leads to the development of a systematic iterative procedure for solving the original problem. To begin, imagine that our immediate problem is merely that of determining the optimal values for stage 5, associated with the three initial points I, J, and K. The answer is easily seen to be the respective values of the single remaining arcs IZ, JZ, and KZ, which we record as (1.10).

³ Richard Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, 1957.

Having found the optimal values for I, J, and K, we can move back to stage 4. Utilizing the previously obtained optimal-value information in (1.10), we can determine V*(G), as well as the optimal path GZ (from G to Z), as follows:

(1.11) V*(G) = min{value of arc GI + V*(I), value of arc GJ + V*(J)} = min{2 + 3, 6 + 2} = 5

[The optimal path GZ is GIZ.] In Fig. 1.6, the fact that the optimal path from G to Z should go through I is indicated by the arrow pointing away from G toward I, and the numeral on the arrow shows the optimal path value V*(G). By the same token, we find

(1.12) V*(H) = min{value of arc HJ + V*(J), value of arc HK + V*(K)}

[The optimal path HZ is HJZ.] Note again the arrow pointing from H toward J and the numeral on it. With the knowledge of V*(G) and V*(H), the task of finding the least-cost values V*(D), V*(E), and V*(F) becomes easier; we can move back one more stage and calculate them in a similar manner. And, with two more such steps, we will be back to stage 1, where we can determine V*(A) and the optimal path AZ, and thereby solve the original given problem. The set of all such arrows constitutes the optimal policy function, and the set of all the numerals on the arrows constitutes the optimal value function.

The essence of the iterative solution procedure is captured in Bellman's principle of optimality, which states, roughly, that if you chop off the first arc from an optimal sequence of arcs, the remaining abridged sequence must still be optimal in its own right—as an optimal path from its own initial point to the terminal point. If EHJZ is the optimal path from E to Z, for example, then HJZ must be the optimal path from H to Z. Conversely, if HJZ is already known to be the optimal path from H to Z, then a longer optimal path that passes through H must use the sequence HJZ at its tail end. This reasoning is behind the calculations in (1.11) and (1.12). But note that in order to apply the principle of optimality and the iterative procedure to delineate the optimal path from A to Z, we must find the optimal value associated with every possible point in Fig. 1.6. This explains why we must embed the original problem.

Even though the essence of dynamic programming is sufficiently clarified by the discrete example in Fig. 1.6, the full version of dynamic programming includes the continuous-time case. Unfortunately, the solution of continuous-time problems of dynamic programming involves the more advanced mathematical topic of partial differential equations; besides, partial differential equations often do not yield analytical solutions. Because of this, we shall not venture further into dynamic programming in this book. The rest of the book will be focused on the methods of the calculus of variations and optimal control theory, both of which require only ordinary differential equations for their solution.

EXERCISE 1.4

1 From Fig. 1.6, find V*(D), V*(E), and V*(F). Determine the optimal paths DZ, EZ, and FZ in a similar manner.
2 On the basis of the preceding problem, find V*(B) and V*(C). Determine the optimal paths BZ and CZ.
3 Verify the statement in Sec. 1.1 that the minimum cost of production for the example in Fig. 1.1 is $14, achieved on the path ACEHJZ.
4 Suppose that the arc values in Fig. 1.6 are profit (rather than cost) figures. For every point i in the set {A, B, ..., Z}, find (a) the optimal (maximum-profit) value V*(i), and (b) the optimal path from i to Z.
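The backward recursion sketched in (1.11) and (1.12) is mechanical enough to write out in a few lines. The arc costs below are hypothetical stand-ins (not the values of Fig. 1.6); the point is the embedding: an optimal value and an optimal policy are computed for every node, not just for the true initial point A.

```python
# Dynamic programming by backward induction on a small multistage graph.

arcs = {                      # node -> {successor: arc cost} (illustrative)
    "A": {"B": 2, "C": 4},
    "B": {"D": 7, "E": 3},
    "C": {"E": 2, "F": 6},
    "D": {"Z": 5},
    "E": {"Z": 1},
    "F": {"Z": 4},
    "Z": {},
}

def solve(arcs, terminal="Z"):
    """Return (V, policy): the optimal value function (least cost to the
    terminal node) and the optimal policy function (best successor),
    computed for every node."""
    V, policy = {terminal: 0}, {}
    pending = set(arcs) - {terminal}
    while pending:
        for node in list(pending):
            succ = arcs[node]
            if all(s in V for s in succ):    # all successor values known
                best = min(succ, key=lambda s: succ[s] + V[s])
                V[node], policy[node] = succ[best] + V[best], best
                pending.remove(node)
    return V, policy

V, policy = solve(arcs)
print(V, policy)  # e.g., V["A"] = 6, attained along A -> B -> E -> Z
```

Chopping the first arc off the optimal path A→B→E→Z leaves B→E→Z, which is itself optimal from B—Bellman's principle of optimality in action.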


CHAPTER 2
THE FUNDAMENTAL PROBLEM OF THE CALCULUS OF VARIATIONS

We shall begin the study of the calculus of variations with the fundamental problem:

Maximize or minimize
(2.1) V[y] = ∫₀ᵀ F[t, y(t), y'(t)] dt
subject to y(0) = A (A given)
and y(T) = Z (T, Z given)

Since the calculus of variations is based on the classical methods of calculus, requiring the use of first and second derivatives, we shall restrict the set of admissible paths to those continuous curves with continuous derivatives. We shall also assume that the integrand function F is twice differentiable. The task of variational calculus is to select, from a set of admissible y paths (or trajectories), the one that yields an extreme value of V[y]. A smooth y path that yields an extremum (maximum or minimum) of V[y] is called an extremal. The maximization and minimization problems differ from each other in the second-order conditions, but they share the same first-order condition.

2.1 THE EULER EQUATION

The basic first-order necessary condition in the calculus of variations is the Euler equation. Although it was formulated as early as 1744, it remains the most important result in this branch of mathematics. In view of its importance and the ingenuity of its approach, it is worthwhile to explain its rationale in some detail.

In locating an extremum of V[y], one may be thinking of either an absolute (global) extremum or a relative (local) extremum. Since the calculus of variations is based on classical calculus methods, it can directly deal only with relative extrema: an extremal yields an extreme value of V only in comparison with the immediately "neighboring" y paths.

Let the solid path y*(t) in Fig. 2.1 be a known extremal. We seek to find some property of the extremal that is absent in the (nonextremal) neighboring paths; such a property would constitute a necessary condition for an extremal. To do this, we need for comparison purposes a family of neighboring paths which, by specification in (2.1), must pass through the given endpoints (0, A) and (T, Z). A simple way of generating such neighboring paths is by using a perturbing curve p(t), chosen arbitrarily except for the restrictions that it be smooth and pass through the points 0 and T on the horizontal axis in Fig. 2.1, so that

(2.2) p(0) = p(T) = 0

We have chosen for illustration one with relatively small p values and small slopes throughout. By adding εp(t) to y*(t), where ε is a small number, and by varying the magnitude of ε, we can perturb the y*(t) path, displacing it to various neighboring positions, thereby generating the desired neighboring paths. The latter paths can be denoted generally as

(2.3) y(t) = y*(t) + εp(t) [implying y'(t) = y*'(t) + εp'(t)]

with the property that, as ε → 0, y(t) → y*(t). To avoid clutter, only one of these neighboring paths has been drawn in Fig. 2.1.

FIGURE 2.1 The extremal y*(t), one neighboring path y*(t) + εp(t), and the perturbing curve p(t).

The fact that both y*(t) and p(t) are given curves means that each value of ε will determine one particular neighboring y path, and hence one particular value of V[y]. Consequently, instead of considering V as a functional of the y path, we can now consider it as a function of the variable ε—V(ε). This change in viewpoint enables us to apply the familiar methods of classical calculus to the function V = V(ε). Since, by assumption, the curve y*(t)—which is associated with ε = 0—yields an extreme value of V, we must have

(2.4) dV/dε = 0

This constitutes a defining property of the extremal, so dV/dε = 0 is a necessary condition for the extremal. As written, however, condition (2.4) is not operational, because it involves the use of the arbitrary variable ε as well as the arbitrary perturbing function p(t). What the Euler equation accomplishes is to express this necessary condition in a convenient operational form. To transform (2.4) into an operational form, however, requires a knowledge of how to take the derivatives of a definite integral.

Differentiating a Definite Integral

Consider the definite integral

(2.5) I(x) = ∫ₐᵇ F(t, x) dt

where F(t, x) is assumed to have a continuous derivative Fx(t, x) in the time interval [a, b]. Since any change in x will affect the value of the F function and hence the definite integral, we may view the integral as a function of x—I(x). The effect of a change in x on the integral is given by the derivative formula

(2.6) dI/dx = ∫ₐᵇ Fx(t, x) dt [Leibniz's rule]

In words: to differentiate a definite integral with respect to a variable x which is neither the variable of integration (t) nor a limit of integration (a or b), one can simply differentiate through the integral sign with respect to x.

The intuition behind Leibniz's rule can be seen from Fig. 2.2a, where the solid curve represents F(t, x), and the dotted curve represents the displaced position of F(t, x) after the change in x. The vertical distance between the two curves (if the change is infinitesimal) measures the partial derivative Fx(t, x) at each value of t. It follows that the effect of the change in x on the entire integral, dI/dx, corresponds to the area between the two curves—or, equivalently, the definite integral of Fx(t, x) over the interval [a, b]. This explains the meaning of (2.6).

The value of the definite integral in (2.5) can also be affected by a change in a limit of integration. Defining the integral alternatively to be K = ∫ₐᵇ F(t, x) dt, we have the following pair of derivative formulas:

(2.7) dK/db = F(b, x) and (2.8) dK/da = −F(a, x)

In words, the derivative of a definite integral with respect to its upper limit of integration b is equal to the integrand evaluated at t = b, and the derivative with respect to its lower limit of integration a is the negative of the integrand evaluated at t = a. In Fig. 2.2b, an increase in b is reflected in a rightward displacement of the right-hand boundary of the area under the curve. When the displacement is infinitesimal, the effect on the definite integral is measured by the value of the F function at the right-hand boundary, F(b, x); this provides the intuition for (2.7). For an increase in the lower limit, on the other hand, as illustrated in Fig. 2.2c, the resulting displacement is a rightward movement of the left-hand boundary, which reduces the area under the curve. This is why there is a negative sign in (2.8).

FIGURE 2.2

EXAMPLE 1 For an integrand such as e^(−xt), Leibniz's rule (2.6) gives d/dx ∫ₐᵇ e^(−xt) dt = ∫ₐᵇ (−t)e^(−xt) dt.

EXAMPLE 2 Similarly, by (2.7), the derivative of a definite integral with respect to its upper limit is simply the integrand evaluated there; for instance, d/db ∫₀ᵇ 3t² dt = 3b².

EXAMPLE 3 To differentiate ∫₀^(x²) 3t² dt with respect to x, which appears in the upper limit of integration, we need the chain rule as well as (2.7). Writing the upper limit as b(x) = x², we have dK/dx = (dK/db)(db/dx) = 3[b(x)]²·b'(x) = 3(x²)²(2x) = 6x⁵.

The preceding derivative formulas can also be used in combination. If, for instance, the definite integral takes the form I(x) = ∫ₐ^(b(x)) F(t, x) dt, where x not only enters into the integrand function F but also affects the upper limit of integration, then we can apply both (2.6) and (2.7) to get the total derivative

dI/dx = ∫ₐ^(b(x)) Fx(t, x) dt + F[b(x), x]·b'(x)

The first term on the right follows from (2.6); the second term, representing the chain dK/db · db/dx, is based on (2.7).
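Example 3 can be checked numerically: the combined rule gives dI/dx = 3(x²)²·(2x) = 6x⁵, and here the integral happens to have the closed form I(x) = (x²)³ = x⁶, so a finite-difference comparison needs nothing beyond the standard library.

```python
# Checking Leibniz's rule on Example 3: I(x) = integral of 3t^2 dt
# from t = 0 to t = x^2, which equals x^6 in closed form.

def I(x):
    return x ** 6               # closed form of the definite integral

def dI_leibniz(x):
    return 6 * x ** 5           # F[b(x)] * b'(x) = 3(x^2)^2 * 2x

def dI_numeric(x, h=1e-6):
    return (I(x + h) - I(x - h)) / (2 * h)   # central difference

x = 1.3
print(dI_leibniz(x), dI_numeric(x))  # the two derivatives agree
```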

Development of the Euler Equation

For ease of understanding, the Euler equation will be developed in four steps.

Step i Let us first express V in terms of ε. Substituting (2.3) into the objective functional in (2.1), we have

V(ε) = ∫₀ᵀ F[t, y*(t) + εp(t), y*'(t) + εp'(t)] dt

To obtain the derivative dV/dε, Leibniz's rule tells us to differentiate through the integral sign; applying the chain rule to F—a function of the three arguments t, y, and y'—and setting dV/dε = 0, we obtain the necessary condition

(2.13) dV/dε = ∫₀ᵀ [Fy p(t) + Fy' p'(t)] dt = 0

Breaking the last integral in (2.13) into two separate integrals, we get a more specific form of the necessary condition for an extremal:

(2.14) ∫₀ᵀ Fy p(t) dt + ∫₀ᵀ Fy' p'(t) dt = 0

While this form of the condition is already free of the arbitrary variable ε, the arbitrary perturbing curve p(t) is still present along with its derivative p'(t). To make the necessary condition fully operational, we must also eliminate p(t) and p'(t).

Step ii To that end, we first integrate the second integral in (2.14) by parts, by using the formula

(2.15) ∫ᵃᵇ v du = uv|ᵃᵇ − ∫ᵃᵇ u dv

Let v = Fy' and u = p(t). Then we have dv = (dFy'/dt) dt and du = p'(t) dt, and—applying (2.15) with a = 0 and b = T—

(2.16) ∫₀ᵀ Fy' p'(t) dt = [Fy' p(t)]₀ᵀ − ∫₀ᵀ p(t)(dFy'/dt) dt = −∫₀ᵀ p(t)(dFy'/dt) dt

since the first term to the right of the first equals sign must vanish under assumption (2.2). Substituting (2.16) into (2.14) and combining the two integrals therein, we obtain another version of the necessary condition for the extremal:

(2.17) ∫₀ᵀ [Fy − dFy'/dt] p(t) dt = 0

Step iii Although p'(t) is no longer present in (2.17), the arbitrary p(t) still remains. However, precisely because p(t) enters in an arbitrary way, we may conclude that condition (2.17) can be satisfied only if the bracketed expression [Fy − dFy'/dt] is made to vanish for every value of t on the extremal; otherwise, the integral may not be equal to zero for some admissible perturbing curve p(t). Consequently, it is a necessary condition for an extremal that

(2.18) Fy − dFy'/dt = 0 for all t ∈ [0, T] [Euler equation]

Note that the Euler equation is completely free of arbitrary expressions, and can thus be applied as soon as one is given a differentiable F(t, y, y') function. The Euler equation is sometimes also presented in the form

Fy' = ∫ Fy dt + constant

which is the result of integrating (2.18) with respect to t.

Step iv The nature of the Euler equation (2.18) can be made clearer when we expand the derivative dFy'/dt into a more explicit form. Because F is a function with three arguments (t, y, y'), the partial derivative Fy' should also be a function of the same three arguments. The total derivative dFy'/dt therefore consists of three terms:

dFy'/dt = ∂Fy'/∂t + (∂Fy'/∂y)(dy/dt) + (∂Fy'/∂y')(dy'/dt) = Fty' + Fyy' y'(t) + Fy'y' y''(t)

Substituting these expressions into (2.18), multiplying through by −1, and rearranging, we arrive at a more explicit version of the Euler equation:

(2.19) Fy'y' y''(t) + Fyy' y'(t) + Fty' − Fy = 0 for all t ∈ [0, T] [Euler equation]

This expanded version reveals that the Euler equation is in general a second-order nonlinear differential equation. Its general solution will thus contain two arbitrary constants. Since our problem in (2.1) comes with two boundary conditions (one initial and one terminal), we should normally possess sufficient information to definitize the two arbitrary constants and obtain the definite solution.

PART 2: THE CALCULUS OF VARIATIONS / CHAPTER 2: THE FUNDAMENTAL PROBLEM OF THE CALCULUS OF VARIATIONS

EXAMPLE 5 Find the extremal of the functional

V[y] = ∫₁⁵ [3t + (y')^(1/2)] dt

with boundary conditions y(1) = 3 and y(5) = 7. Here we have F = 3t + (y')^(1/2), so that Fy = 0, Fyy' = Fty' = 0, and Fy'y' = −¼(y')^(−3/2). By (2.19), the Euler equation reduces to Fy'y' y''(t) = 0; in order that this be satisfied with Fy'y' ≠ 0, we must have y''(t) = 0, which, upon integration, yields y'(t) = c₁ and

y(t) = c₁t + c₂   [general solution]

Thus the extremal takes the form of the linear function. To definitize the arbitrary constants c₁ and c₂, we first set t = 1 to find y(1) = c₁ + c₂ = 3, and then set t = 5 to find y(5) = 5c₁ + c₂ = 7. These two equations give us c₁ = 1 and c₂ = 2. The extremal is therefore

y*(t) = t + 2   [definite solution]

EXAMPLE 6 Find the extremal of the functional

V[y] = ∫₀² (12ty + y'²) dt

with boundary conditions y(0) = 0 and y(2) = 8. Since F = 12ty + y'², we have the derivatives

Fy = 12t   Fy' = 2y'   Fy'y' = 2   Fyy' = Fty' = 0

By (2.19), the Euler equation is 2y''(t) − 12t = 0, or y''(t) = 6t, which, upon integration, yields y'(t) = 3t² + c₁, with solution

y(t) = t³ + c₁t + c₂   [general solution]

To definitize the arbitrary constants c₁ and c₂, we first set t = 0 in the general solution to get y(0) = c₂; from the initial condition, it follows that c₂ = 0. Next, setting t = 2 in the general solution, we get y(2) = 8 + 2c₁; from the terminal condition, it follows that c₁ = 0. The extremal is thus the cubic function

y*(t) = t³   [definite solution]

EXAMPLE 7 Find the extremal of the functional

V[y] = ∫₀⁵ (t + y² + 3y') dt

with boundary conditions y(0) = 0 and y(5) = 3. Since F = t + y² + 3y', we have Fy = 2y, while Fy' = 3 is a constant, so that dFy'/dt = 0. Thus we may write the Euler equation as 2y = 0, with solution y(t) = 0. Note, however, that although this solution is consistent with the initial condition y(0) = 0, it violates the terminal condition y(5) = 3. Therefore, we must conclude that there exists no extremal among the set of continuous curves that we consider to be admissible.

This last example is of interest because it serves to illustrate that certain variational problems with given endpoints may not have a solution. More specifically, it calls attention to one of two peculiar results that can arise when the integrand function F is linear in y'. One result, just illustrated, is that no solution exists, unless the solution path happens to pass through the fixed endpoints by coincidence. The other possibility is that the Euler equation is an identity, and since it is then automatically satisfied, any admissible path is optimal.
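The two definite solutions just found, y* = t³ on [0, 2] and y* = t + 2 on [1, 5], can be recovered by solving the reduced Euler equations as two-point boundary problems. A sympy sketch (an illustrative aid, not part of the text):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' = 6t with y(0) = 0, y(2) = 8 gives the cubic extremal y* = t^3
sol = sp.dsolve(y(t).diff(t, 2) - 6*t, y(t), ics={y(0): 0, y(2): 8})
assert sp.expand(sol.rhs - t**3) == 0

# y'' = 0 with y(1) = 3, y(5) = 7 gives the linear extremal y* = t + 2
sol2 = sp.dsolve(y(t).diff(t, 2), y(t), ics={y(1): 3, y(5): 7})
assert sp.expand(sol2.rhs - (t + 2)) == 0
```

Here `dsolve` supplies the two arbitrary constants of the general solution and the `ics` dictionary definitizes them, exactly mirroring the hand computation.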

EXERCISE 2.1
1 In discussing the differentiation of definite integrals, no mention was made of the derivative with respect to the variable t. Is that a justifiable omission?
2 Find the derivatives of the following definite integrals with respect to x:

Find the extremals, if any, of the following functionals:
6 V[y] = ∫₀¹ (ty + 2y'²) dt, with y(0) = 1 and y(1) = 2
7 V[y] = ∫₀¹ tyy' dt, with y(0) = 0 and y(1) = 1
8 V[y] = ∫₀² (2yeᵗ + y² + y'²) dt, with y(0) = 2 and y(2) = 2e² + e⁻²
9 V[y] = ∫ (y² + y'²) dt

2.2 SOME SPECIAL CASES

We have written the objective functional in the general form ∫₀ᵀ F(t, y, y') dt, in which the integrand function F has three arguments: t, y, and y'. For some problems, the integrand function may not contain all three arguments. For such special cases, we can derive special versions of the Euler equation which may often (though not always) prove easier to solve.

Special case I: F = F(t, y') In this special case, the F function is free of y, implying that Fy = 0. Hence, the Euler equation reduces to dFy'/dt = 0, with the solution

(2.20)  Fy' = constant

It may be noted that the example with F = 3t + (y')^(1/2) in the preceding section falls under this special case, although at that time we just used the regular Euler equation for its solution. It is easy to verify that the application of (2.20) will indeed lead to the same result.

The result in (2.20) also throws light on the peculiarities encountered earlier when F is linear in y'. In that event Fy'y' = 0, and the Euler equation loses its status as a second-order differential equation; it will not provide two arbitrary constants in its general solution to enable us to adapt the time path to the given boundary conditions. Unless the solution path happens to pass through the fixed endpoints by coincidence, it cannot qualify as an extremal. The only circumstance under which a solution can be guaranteed for such a problem (with F linear in y' and with fixed endpoints) is when Fy = 0 as well, for that condition, together with the fact that Fy' is then a constant (implying dFy'/dt = 0), would turn the Euler equation (2.18) into an identity, satisfied by any admissible path. Here is another example of Special Case I.

EXAMPLE 1 Find the extremal of the functional

V[y] = ∫₀¹ (ty' + y'²) dt

with boundary conditions y(0) = y(1) = 1. Since F = ty' + y'², we have Fy' = t + 2y'. By (2.20), t + 2y'(t) = constant, so that y'(t) = −½t + constant. Upon direct integration,

y(t) = −¼t² + c₁t + c₂   [general solution]

With the help of the boundary conditions y(0) = y(1) = 1, it is easily verified that c₁ = ¼ and c₂ = 1. The extremal is therefore the quadratic path

y*(t) = −¼t² + ¼t + 1   [definite solution]

Special case II: F = F(y, y') Since F is free of t in this case, we have Fty' = 0, and the Euler equation (2.19) simplifies to Fy'y' y''(t) + Fyy' y'(t) − Fy = 0. The solution to this equation is by no means obvious, but it turns out that if we multiply through by y', the left-hand-side expression in the resulting
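Formula (2.20) and the worked example can be cross-checked quickly. A sympy sketch (an illustrative aid, not part of the original text), using the integrand F = ty' + y'² of Example 1:

```python
import sympy as sp

t = sp.symbols('t')

# Definite solution of Example 1: y* = -t^2/4 + t/4 + 1
ystar = -t**2/4 + t/4 + 1

# Boundary conditions y(0) = y(1) = 1
assert ystar.subs(t, 0) == 1 and ystar.subs(t, 1) == 1

# For F = t*y' + y'^2 (free of y), (2.20) requires F_{y'} = t + 2y' = constant
Fyp = t + 2*ystar.diff(t)
assert sp.simplify(Fyp.diff(t)) == 0   # indeed constant (equal to 1/2)
```

The constancy of Fy' along the path is precisely what distinguishes Special Case I from the general second-order situation.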

new equation will be exactly the derivative d(y'Fy' − F)/dt, for

(d/dt)(y'Fy' − F) = y''Fy' + y'(dFy'/dt) − Fy y' − Fy' y'' = −y'[Fy − (d/dt)Fy']

Consequently, the Euler equation can be written as d(y'Fy' − F)/dt = 0, or, what amounts to the same thing, with the solution

(2.21)  y'Fy' − F = constant

This result, the simplified Euler equation already integrated once, is a first-order differential equation, which may under some circumstances be easier to handle than the original Euler equation (2.19). Moreover, in analytical (as against computational) applications, (2.21) may yield results that would not be discernible from (2.18) or (2.19), as will be illustrated in Sec. 2.4.

EXAMPLE 2 Find the extremal of the functional

V[y] = ∫₀^(π/2) (y² − y'²) dt

with boundary conditions y(0) = 1 and y(π/2) = 0. Since F is free of t, formula (2.21) is applicable. With Fy' = −2y', direct substitution in (2.21) gives us

−2y'² − y² + y'² = constant   or   y'² + y² = constant [= a², say]

This last constant is nonnegative because the left-hand-side terms are all squares; thus we can justifiably denote it by a², where a is a nonnegative real number.

The reader will recognize that the equation y'² + y² = a² can be plotted as a circle, with radius a and with center at the point of origin, as in Fig. 2.3. Since y' is plotted against y in the diagram, this circle constitutes the phase line for the differential equation.¹ The phase line here is similar to phase line C in Fig. 14.3 in that section, and the circular loop shape of this phase line suggests that it gives rise to a cyclical time path, with the y values bounded in the closed interval [−a, a], as in a cosine function with amplitude a and period 2π. Such a cosine function can be represented in general by the equation

y(t) = a cos(bt + c)

where, aside from the amplitude parameter a, we also have two other parameters b and c, which relate to the period and the phase of the function, respectively. Since the cosine function has a period of 2π/b (obtained by setting the bt term equal to 2π), and since the period here should be 2π, we infer that b = 1. But the values of a and c are still unknown; they must be determined from the given boundary conditions.

At t = 0, we have

y(0) = a cos c = 1   [by the initial condition]

When t = π/2, we have

y(π/2) = a cos(π/2 + c) = 0   [by the terminal condition]

To satisfy this last equation, we must have either a = 0 or cos(π/2 + c) = 0. But a cannot be zero, because otherwise a cos c cannot possibly be equal to 1. Hence, cos(π/2 + c) must vanish, implying two possibilities: either c = 0, or c = π. With c = 0, the equation a cos c = 1 becomes a cos 0 = 1, or a(1) = 1, yielding a = 1. With c = π, the equation a cos c = 1 becomes a(−1) = 1, giving us a = −1, which is inadmissible because we have restricted a to be a nonnegative number. Consequently, we conclude that the boundary conditions require a = 1 and c = 0, and that the time path of y which qualifies as the extremal is

y*(t) = cos t

Since y* is a cosine function, the time path it implies is cyclical, as shown in Fig. 2.4. The same solution can, however, also be obtained in a straightforward manner from the original Euler equation (2.18) or (2.19).

¹Phase diagrams are explained in Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, Sec. 14.6.
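As a check, the extremal y* = cos t should satisfy the normalized Euler equation y'' + y = 0, both boundary conditions, and the first integral y'² + y² = a² with a = 1. A sympy sketch (illustrative only, not part of the text):

```python
import sympy as sp

t = sp.symbols('t')
ystar = sp.cos(t)

# Euler equation for F = y^2 - y'^2 reduces, after normalizing, to y'' + y = 0
assert sp.simplify(ystar.diff(t, 2) + ystar) == 0

# Boundary conditions y(0) = 1 and y(pi/2) = 0
assert ystar.subs(t, 0) == 1
assert ystar.subs(t, sp.pi/2) == 0

# First integral from (2.21): y'^2 + y^2 = a^2, here with a = 1
assert sp.simplify(ystar.diff(t)**2 + ystar**2 - 1) == 0
```

The last assertion is the algebraic counterpart of the circular phase line in Fig. 2.3.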

After normalizing and rearranging, the Euler equation (2.19) for this problem can be expressed as the following homogeneous second-order linear differential equation:

y''(t) + y(t) = 0

Since this equation has complex characteristic roots r₁, r₂ = ±i (that is, h ± vi with h = 0 and v = 1, so that e^(ht) = 1 and vt = t), the general solution takes the form²

y*(t) = α cos t + β sin t

The boundary conditions fix the arbitrary constants α and β at the values of 1 and 0, respectively. Thus we end up with the same definite solution:

y*(t) = cos t

For this example, the original Euler equation turns out to be easier to use than the special-case formula (2.21). We illustrated the latter anyway, not only to present it as an alternative, but also to illustrate some other techniques (such as the circular phase diagram in Fig. 2.3). The reader will have to choose the appropriate version of the Euler equation to apply to any particular problem.

EXAMPLE 3 Among the curves that pass through two fixed points A and Z, which curve will generate the smallest surface of revolution when rotated about the horizontal axis? This is a problem in physics rather than economics, but it may be of interest because it is one of the earliest problems in the development of the calculus of variations.

To construct the objective functional, it may be helpful to consider the curve AZ in Fig. 2.5 as a candidate for the desired extremal. When rotated about the t axis in the stipulated manner, each point on curve AZ traces out a circle parallel to the xy plane, with its center on the t axis, and with radius R equal to the value of y at that point. Since the circumference of such a circle is 2πR (in our present case, 2πy), all we have to do to calculate the surface of revolution is to sum (integrate) the circumference over the entire length of the curve AZ.

An expression for the length of the curve AZ can be obtained as follows. Let us imagine that M and N represent two points located very close to each other on curve AZ. Because of the extreme proximity of M and N, arc MN can be approximated by a straight line, with its length equal to the differential ds. In order to express ds in terms of the variables y and t, we resort to Pythagoras' theorem to write (ds)² = (dy)² + (dt)². It follows that (ds)²/(dt)² = (dy)²/(dt)² + 1. Taking the square root on both sides, we get the desired expression for ds in terms of y and t:

(2.22)  ds = (1 + y'²)^(1/2) dt   [arc length]

The entire length of curve AZ must then be the integral of ds, namely ∫(1 + y'²)^(1/2) dt. To sum the circumference 2πy over the length of curve AZ will therefore result in the functional

V[y] = 2π ∫ y(1 + y'²)^(1/2) dt

²For an explanation of complex roots, see Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, Ch. 15.

For purposes of minimization, we may disregard the constant 2π and take F = y(1 + y'²)^(1/2) as the integrand. (While it is legitimate to "factor out" the constant part of the expression 2πy, the variable part y must remain within the integral sign.) Since this F is free of t, we may apply the special-case formula (2.21). With

Fy' = yy'(1 + y'²)^(−1/2)

(2.21) gives us

yy'²(1 + y'²)^(−1/2) − y(1 + y'²)^(1/2) = c

This equation can be simplified by taking the following steps: (1) multiply through by (1 + y'²)^(1/2); (2) cancel out yy'² and −yy'² on the left-hand side; (3) square both sides and solve for y'² in terms of y and c; and (4) take the square root. The result is

dy/dt = (y² − c²)^(1/2)/c   or   c dy/(y² − c²)^(1/2) = dt

In this last equation, we note that the variables y and t have been separated, so that each side can be integrated by itself. The right-hand side poses no problem, the integral of dt being in the simple form t + k, where k is an arbitrary constant. The left-hand side is more complicated, and to attempt to integrate it "in the raw" would take too much effort; at any rate, one should consult prepared tables of integration formulas to find the result:³

∫ c dy/(y² − c²)^(1/2) = c ln{[y + (y² − c²)^(1/2)]/c}

The c in the denominator of the log expression on the right may be omitted without affecting the validity of the formula, because its presence or absence merely makes a difference amounting to the constant −c ln c, which can be absorbed into the arbitrary constant of integration. However, by inserting the c in the denominator, our result will come out in a more symmetrical form. Equating the last result with t + k, dividing through by c, and applying the definition of the natural log, we have

e^((t+k)/c) = [y + (y² − c²)^(1/2)]/c   [by definition of natural log]

Multiplying both sides by c, subtracting y from both sides, squaring both sides, canceling out the y² term, and solving for y in terms of t, we finally find the desired extremal to be

y*(t) = (c/2)[e^((t+k)/c) + e^(−(t+k)/c)]   [general solution]

where the two constants c and k are to be definitized by using the boundary conditions.

The distinguishing characteristic of this solution is that it involves the average of two exponential terms, in which the exponent of one term is the exact negative of that of the other. Since the positive-exponent term gives rise to a curve that increases at an increasing rate, whereas the negative-exponent term produces an ever-decreasing curve, the average of the two has a general shape that portrays the way a flexible rope will hang on two fixed pegs. Our extremal is thus a modified form of the so-called catenary curve

(2.23)  y = ½(eᵗ + e⁻ᵗ)   [catenary]

(In fact, the name catenary comes from the Latin word catena, meaning "chain.")

Even though we have found our extremal in a curve of the catenary family, it is not certain whether the resulting surface of revolution (known as a catenoid) has been maximized or minimized. Geometrical and intuitive considerations, however, should make it clear that the surface is indeed minimized: if we replace the AZ curve already drawn by, say, a new AZ curve with the opposite curvature, then a larger surface of revolution can be generated, so the catenoid cannot possibly be a maximum.

Special case III: F = F(y') When the F function depends on y' alone, many of the derivatives in (2.19) will disappear, including Fyy', Fty', and Fy. The Euler equation then becomes

(2.24)  Fy'y' y''(t) = 0

To satisfy this equation, we must either have y''(t) = 0 or Fy'y' = 0. If y''(t) = 0, then obviously y'(t) = c₁ and y(t) = c₁t + c₂, so that the general solution is a two-parameter family of straight lines.

³See, for example, CRC Standard Mathematical Tables, 28th ed., by William H. Beyer, ed., CRC Press, Boca Raton, FL, 1987, Formula 157.
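The general solution just found can be checked against the first integral. Writing the catenary as y = c·cosh((t + k)/c), the quantity y/(1 + y'²)^(1/2), which by (2.21) must be constant along the extremal, should equal the integration constant up to sign. A sympy sketch (illustrative only, not part of the original text):

```python
import sympy as sp

t, c, k = sp.symbols('t c k', positive=True)

# General solution above: y = (c/2)[e^((t+k)/c) + e^(-(t+k)/c)] = c*cosh((t+k)/c)
y = c*sp.cosh((t + k)/c)
yp = y.diff(t)

# With F = y*(1 + y'^2)^(1/2), the combination F - y'F_{y'} = y/(1 + y'^2)^(1/2)
first_integral = y/sp.sqrt(1 + yp**2)

# Squaring sidesteps sign conventions: the constant equals c in absolute value
assert sp.simplify((first_integral**2 - c**2).rewrite(sp.exp)) == 0
```

Rewriting the hyperbolic functions in exponential form lets sympy cancel cosh²(u)/(1 + sinh²(u)) to 1 exactly.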

If, on the other hand, Fy'y' = 0, then, since Fy'y', like F itself, is a function of y' alone, the solutions of Fy'y' = 0 should appear as specific values of y'. Suppose there are one or more real solutions y' = kᵢ; then we can deduce that y = kᵢt + c, which again represents a family of straight lines. Thus, given an integrand function that depends on y' alone, we can always take its extremal to be a straight line.

EXAMPLE 4 Find the extremal of the functional

V[y] = ∫₀ᵀ (1 + y'²)^(1/2) dt

with boundary conditions y(0) = A and y(T) = Z. The astute reader will note that this functional has been encountered in a different guise in Example 3. Recalling the discussion of arc length leading to (2.22), we know that

(2.25)  V[y] = ∫₀ᵀ (1 + y'²)^(1/2) dt

measures the total length of a curve passing through two given points. The problem of finding the extremal of this functional is therefore that of finding the curve with the shortest distance between those two points.

The integrand function, F = (1 + y'²)^(1/2), is dependent on y' alone, so we can conclude immediately that the extremal is a straight line. But if it is desired to examine this particular example explicitly by the Euler equation, we can use (2.20), since F is also free of y. In view of the fact that

Fy' = y'(1 + y'²)^(−1/2)

the condition (2.20) tells us that y'/(1 + y'²)^(1/2) = c. Multiplying both sides by (1 + y'²)^(1/2), squaring, and collecting terms, we can express y' in terms of c as follows:

y'² = c²/(1 − c²)

Inasmuch as the right-hand side is a constant, y'(t), the slope of y(t), must also be a constant, and the desired extremal y*(t) must be a straight line. Strictly speaking, we have only found an "extremal," which may either maximize or minimize the given functional. However, it is intuitively obvious that the distance between the two given points is indeed minimized rather than maximized by the straight-line extremal, because there is no such thing as "the longest distance" between two given points.

Special case IV: F = F(t, y) In this special case, the argument y' is missing from the F function, so Fy' = 0 and the Euler equation is simply

Fy = 0

The fact that the derivative y' does not appear in this equation means that the Euler equation is not a differential equation. Since there are no arbitrary constants in its solution to be definitized in accordance with the given boundary conditions, the extremal may not satisfy the boundary conditions except by sheer coincidence. The problem is degenerate. This situation is very much akin to the case where the F function is linear in the argument y' (Sec. 2.1). The reason is that the function F(t, y) can be considered as a special case of F(t, y, y') with y' entering via a single additive term 0y', which is, in a special sense, "linear" in y', with a zero coefficient.

EXERCISE 2.2
Find the extremals, if any, of the following functionals:
1 V[y] = ∫₀¹ (t² + y'²) dt, with y(0) = 0 and y(1) = 2
2 V[y] = ∫₀² 27y'³ dt, with y(0) = 9 and y(2) = 11
3 V[y] = ∫₀^(π/2) (y² − y'²) dt, with y(0) = 0 and y(π/2) = 1
4 V[y] = ∫ t⁻³y'² dt (Find the general solution only.)
5 V[y] = ∫₀¹ (y² + 4yy' + 4y'²) dt, with y(0) = 2e^(1/2) and y(1) = 1 + e
6 V[y] = ∫₀¹ (y + yy' + y' + ½y'²) dt, with y(0) = 2 and y(1) = 5
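Example 4's conclusion can also be seen numerically: along any admissible path joining the endpoints, the arc-length functional evaluates to at least the straight-line distance. A numpy sketch (an illustration added here, not from the original; the endpoints y(0) = 0 and y(1) = 1 are chosen arbitrarily):

```python
import numpy as np

# Arc-length functional V[y] = integral of (1 + y'^2)^(1/2) dt on [0, 1]
t = np.linspace(0.0, 1.0, 20001)

def arc_length(y_vals):
    dy = np.gradient(y_vals, t)                  # numerical derivative y'(t)
    f = np.sqrt(1.0 + dy**2)
    return float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(t)))  # trapezoid rule

straight = arc_length(t)                         # the line y = t, length sqrt(2)
assert abs(straight - np.sqrt(2)) < 1e-6

# Perturbations vanishing at both endpoints always lengthen the path
for eps in (0.1, -0.2, 0.5):
    assert arc_length(t + eps*np.sin(np.pi*t)) > straight
```

Every perturbed admissible path comes out strictly longer than the straight line, in line with the intuition that no "longest distance" exists.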

2.3 TWO GENERALIZATIONS OF THE EULER EQUATION

The previous discussion of the Euler equation is based on an integral functional with the integrand F(t, y, y'). Simple generalizations can be made, however, to the case of several state variables and to the case where higher-order derivatives of y appear as arguments in the F function.

The Case of Several State Variables

With n > 1 state variables in a given problem, the functional becomes

(2.26)  V[y₁, ..., yₙ] = ∫₀ᵀ F(t, y₁, ..., yₙ, y₁', ..., yₙ') dt

and there will be a pair of initial condition and terminal condition for each of the n state variables, making a total of 2n boundary conditions. Any extremal yⱼ*(t), j = 1, ..., n, must by definition yield the extreme path value relative to all the neighboring paths. One type of neighboring path arises from varying only one of the functions yⱼ(t) at a time, while all the other yⱼ(t) functions are kept fixed. Then the functional depends only on the variation in the chosen yⱼ(t), as if we are dealing with the case of a single state variable. Consequently, the Euler equation (2.18) must still hold as a necessary condition, provided we change the y symbol to yⱼ. Moreover, this procedure can be similarly used to vary the other yⱼ functions, one at a time, to generate other Euler equations. Thus, for the case of n state variables, the single Euler equation (2.18) should be replaced by a set of n simultaneous Euler equations:

(2.27)  Fyⱼ − (d/dt)Fyⱼ' = 0   for all t ∈ [0, T]   (j = 1, ..., n)   [Euler equations]

These n equations will, along with the boundary conditions, enable us to determine the solutions y₁*(t), ..., yₙ*(t).

Although (2.27) is a straightforward generalization of (2.18), with the symbol y replaced by yⱼ, the same procedure cannot be used to generalize (2.19). To see why not, assume for simplicity that there are only two state variables, y and z, in our new problem. The F function will then have five arguments, F(t, y, z, y', z'), and so will the partial derivatives Fy' and Fz'. Therefore, the total derivative of Fy'(t, y, z, y', z') with respect to t will include five terms:

(d/dt)Fy'(t, y, z, y', z') = Fty' + Fyy' y'(t) + Fzy' z'(t) + Fy'y' y''(t) + Fz'y' z''(t)

with a similar five-term expression for dFz'/dt. The expanded version of the simultaneous Euler equations corresponding to (2.19) thus looks much more complicated than (2.19) itself:

(2.28)  Fy'y' y''(t) + Fz'y' z''(t) + Fyy' y'(t) + Fzy' z'(t) + Fty' − Fy = 0
        Fy'z' y''(t) + Fz'z' z''(t) + Fyz' y'(t) + Fzz' z'(t) + Ftz' − Fz = 0
        for all t ∈ [0, T]

EXAMPLE 1 Find the extremal of the functional

V[y, z] = ∫₀ᵀ (y + z + y'² + z'²) dt

Since Fy = Fz = 1, Fy' = 2y', and Fz' = 2z', we have, by (2.27), the two simultaneous Euler equations

1 − 2y''(t) = 0   and   1 − 2z''(t) = 0

In this particular case, it happens that each of the Euler equations contains one variable exclusively. Upon integration, the first yields y' = ½t + c₁, and hence

y*(t) = ¼t² + c₁t + c₂

Analogously, the extremal of z is

z*(t) = ¼t² + c₃t + c₄

The four arbitrary constants (c₁, ..., c₄) can be definitized with the help of four boundary conditions relating to y(0), y(T), z(0), and z(T).

The Case of Higher-Order Derivatives

As another generalization, consider a functional that contains higher-order derivatives of y(t):

(2.29)  V[y] = ∫₀ᵀ F(t, y, y', y'', ..., y⁽ⁿ⁾) dt

Generally, the boundary conditions in this case prescribe the initial and terminal values not only of y, but also of the derivatives y', y'', and so on, up to and including the derivative y⁽ⁿ⁻¹⁾.

To tackle this case, we first note that an F function with a single state variable y and derivatives of y up to the nth order can be transformed into an equivalent form containing n state variables and their first-order derivatives only, so that the functional in (2.29) can be transformed into the format of (2.26). If the highest derivative is y'', we only have to introduce a new variable

z ≡ y'   [implying z' = y'']

to achieve the transformation.

EXAMPLE 2 Transform the functional

V[y] = ∫₀ᵀ (ty² + yy' + y''²) dt

with boundary conditions y(0) = A, y(T) = Z, y'(0) = α, and y'(T) = β, into the format of (2.26). Using the definition z ≡ y', we can rewrite the integrand as

F = ty² + yz + z'²

which now contains two state variables, y and z, with no derivatives higher than the first order. Substituting the new F into the functional turns the latter into the format of (2.26). The transformation can automatically take care of the boundary conditions as well: the conditions y(0) = A and y(T) = Z can be kept intact, while the other two conditions, on y', can be directly rewritten as conditions for the new state variable z, namely z(0) = α and z(T) = β. This completes the transformation.
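Taking the integrand of Example 1 to be F = y + z + y'² + z'² (consistent with the derivatives Fy = Fz = 1, Fy' = 2y', Fz' = 2z' used there), the pair of Euler equations (2.27) can be generated mechanically. A sympy sketch (illustrative only, not part of the text):

```python
import sympy as sp

t = sp.symbols('t')
y, z = sp.Function('y'), sp.Function('z')
yp, zp = y(t).diff(t), z(t).diff(t)

# Integrand with two state variables, as in Example 1
F = y(t) + z(t) + yp**2 + zp**2

# One Euler equation (2.27) per state variable
eq_y = sp.diff(F, y(t)) - sp.diff(sp.diff(F, yp), t)   # gives 1 - 2y''
eq_z = sp.diff(F, z(t)) - sp.diff(sp.diff(F, zp), t)   # gives 1 - 2z''
assert sp.simplify(eq_y - (1 - 2*y(t).diff(t, 2))) == 0
assert sp.simplify(eq_z - (1 - 2*z(t).diff(t, 2))) == 0

# Each equation integrates to a quadratic extremal with y'' = 1/2
sol = sp.dsolve(sp.Eq(eq_y, 0), y(t))
assert sp.simplify(sol.rhs.diff(t, 2) - sp.Rational(1, 2)) == 0
```

Because each equation involves one variable exclusively, the system decouples, exactly as observed in the example.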

Although the transformation described in Example 2 always works, it is also possible to deal with (2.29) directly instead of transforming it into the format of (2.26). By a procedure similar to that used in deriving the Euler equation, a necessary condition for an extremal of (2.29) can be found. The condition, known as the Euler-Poisson equation, is

(2.30)  Fy − (d/dt)Fy' + (d²/dt²)Fy'' − ... + (−1)ⁿ(dⁿ/dtⁿ)Fy⁽ⁿ⁾ = 0   for all t ∈ [0, T]   [Euler-Poisson equation]

This equation is in general a differential equation of order 2n. Thus its solution will involve 2n arbitrary constants, which can be definitized with the help of the 2n boundary conditions.

EXAMPLE 3 Find an extremal of the functional in Example 2. Applying the Euler-Poisson equation directly, we have

Fy = 2ty + y'   Fy' = y   Fy'' = 2y''

so the Euler-Poisson equation is

(2ty + y') − (d/dt)y + (d²/dt²)(2y'') = 0   or   2ty + 2y⁽⁴⁾(t) = 0

which is a fourth-order differential equation.

EXERCISE 2.3
1 Find the extremal of V[y] = ∫₀¹ (1 + y''²) dt, with y(0) = 0 and y'(0) = y(1) = y'(1) = 1.
2 Find the extremal of V[y, z] = ∫₀^(π/2) (2yz + y'² + z'²) dt, with y(0) = z(0) = 0 and y(π/2) = z(π/2) = 1.
3 Find the extremal of V[y, z] = ∫₀ᵇ (y'² + z'² + y'z') dt (general solution only).
4 Example 3 of this section shows that, for the problem stated in Example 2, a necessary condition for the extremal is 2ty + 2y⁽⁴⁾ = 0. Derive the same result by applying the definition z ≡ y' and the Euler equations (2.27) to that problem.

2.4 DYNAMIC OPTIMIZATION OF A MONOPOLIST

Let us turn now to economic applications of the Euler equation. As the first illustration, we shall discuss the classic Evans model of a monopolistic firm, one of the earliest applications of the calculus of variations to economics.⁴

A Dynamic Profit Function

Consider a monopolistic firm that produces a single commodity with a quadratic total cost function⁵

(2.31)  C = αQ² + βQ + γ   (α, β, γ > 0)

Since no inventory is considered, the output Q is always set equal to the quantity demanded, and we shall use the single symbol Q(t) to denote both quantities. The quantity demanded is assumed to depend not only on price P(t), but also on the rate of change of price P'(t):

(2.32)  Q = a − bP(t) + hP'(t)   (a, b > 0; h ≠ 0)

The firm's profit is thus

π = PQ − C = P(a − bP + hP') − α(a − bP + hP')² − β(a − bP + hP') − γ

Multiplying out the above expression and collecting terms, we have the dynamic profit function

(2.33)  π(P, P') = −b(1 + αb)P² + (a + 2αab + βb)P + h(1 + 2αb)PP' − αh²P'² − h(2αa + β)P' − (αa² + βa + γ)   [profit function]

⁴G. C. Evans, "The Dynamics of Monopoly," American Mathematical Monthly, February 1924, pp. 77-83. Essentially the same material appears in Chapters 14 and 15 of a book by the same author, Mathematical Introduction to Economics, McGraw-Hill, New York, 1930, pp. 143-153.

⁵With α, β, γ > 0, this quadratic cost function plots as a U-shaped curve; but with β positive, only the right-hand segment of the U appears in the first quadrant, giving us an upward-sloping total-cost curve for Q ≥ 0.
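The collection of terms leading to (2.33) is tedious but easy to verify symbolically. A sympy sketch (an illustrative aid, not part of the original text):

```python
import sympy as sp

P, Pp, a, b, h, al, be, ga = sp.symbols("P P' a b h alpha beta gamma")

Q = a - b*P + h*Pp                       # demand (2.32)
profit = P*Q - (al*Q**2 + be*Q + ga)     # pi = P*Q - C, with C from (2.31)

# Collected form reported in (2.33)
collected = (-b*(1 + al*b)*P**2 + (a + 2*al*a*b + be*b)*P
             + h*(1 + 2*al*b)*P*Pp - al*h**2*Pp**2
             - h*(2*al*a + be)*Pp - (al*a**2 + be*a + ga))

assert sp.expand(profit - collected) == 0
```

Expanding the difference to zero confirms that every coefficient in (2.33) is correct.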

The Problem

The objective of the firm is to find an optimal path of price P that maximizes the total profit Π over a finite time period [0, T]. This period is assumed to be sufficiently short to justify the assumption of fixed demand and cost functions, as well as the omission of a discount factor. Besides, both the initial price P₀ and the terminal price P_T are assumed to be given. The objective of the monopolist is therefore to

(2.34)  Maximize  Π[P] = ∫₀ᵀ π(P, P') dt
        subject to  P(0) = P₀   (P₀ given)
        and  P(T) = P_T   (T, P_T given)

with the profit function π(P, P') given in (2.33).

Since π is free of t, the integrand in (2.34) pertains to Special Case II, and formula (2.21) could be applied. But it turns out that for computation with specific functions it is simpler to use the original Euler equation (2.19), where we should obviously substitute π for F, and P for y. From the profit function (2.33), it is easily found that

πP'P' = −2αh²
πPP' = h(1 + 2αb)
πtP' = 0
πP = −2b(1 + αb)P + (a + 2αab + βb) + h(1 + 2αb)P'

These expressions turn (2.19), after normalizing, into the specific form

(2.35)  P''(t) = [b(1 + αb)/(αh²)] P − (a + 2αab + βb)/(2αh²)   [Euler equation]

This is a second-order linear differential equation with constant coefficients and a constant term. Its general solution is known to be⁶

(2.36)  P*(t) = A₁e^(r₁t) + A₂e^(r₂t) + P̄

where the characteristic roots r₁ and r₂ take the values

r₁, r₂ = ±[b(1 + αb)/(αh²)]^(1/2)   [characteristic roots]

and the particular integral P̄ is

P̄ = (a + 2αab + βb)/[2b(1 + αb)]   [particular integral]

Note that the two characteristic roots are real and distinct under our sign specifications. Moreover, they are the exact negatives of each other. We may therefore let r denote the common absolute value of the two roots, take r₁ = r and r₂ = −r, and rewrite the solution as

(2.36')  P*(t) = A₁eʳᵗ + A₂e⁻ʳᵗ + P̄   [general solution]

Of the parameters, a, b, α, and β all have specified signs, but h does not. Inasmuch as h enters into (2.36') only through r, and only in the form of the squared term h², its algebraic sign will not affect our results, although its numerical value will.

The two arbitrary constants A₁ and A₂ in (2.36') can be definitized via the boundary conditions P(0) = P₀ and P(T) = P_T. When we set t = 0 and t = T, successively, in (2.36'), we get two simultaneous equations in the two variables A₁ and A₂:

P₀ = A₁ + A₂ + P̄
P_T = A₁eʳᵀ + A₂e⁻ʳᵀ + P̄

with solution values

(2.37)  A₁ = [(P_T − P̄) − (P₀ − P̄)e⁻ʳᵀ]/(eʳᵀ − e⁻ʳᵀ)
        A₂ = [(P₀ − P̄)eʳᵀ − (P_T − P̄)]/(eʳᵀ − e⁻ʳᵀ)

The determination of these two constants completes the solution of the problem, for the entire price path P*(t) is now specified in terms of the known parameters T, P₀, P_T, a, b, h, α, and β.

⁶This type of differential equation is discussed in Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, Sec. 15.1.
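With (2.35) through (2.37) in hand, the whole price path can be computed. The sketch below (Python; the parameter values are hypothetical, chosen purely for illustration) builds r, P̄, A₁, and A₂ and confirms that the resulting P*(t) meets both boundary conditions:

```python
import math

# Illustrative (hypothetical) parameter values for the Evans model
a, b, h = 20.0, 2.0, 1.0           # demand: Q = a - b*P + h*P'
alpha, beta = 0.5, 1.0             # cost: C = alpha*Q^2 + beta*Q + gamma
P0, PT, T = 10.0, 12.0, 5.0        # boundary conditions

r = math.sqrt(b*(1 + alpha*b)/(alpha*h**2))             # characteristic root
Pbar = (a + 2*alpha*a*b + beta*b)/(2*b*(1 + alpha*b))   # particular integral

# Constants from (2.37)
denom = math.exp(r*T) - math.exp(-r*T)
A1 = ((PT - Pbar) - (P0 - Pbar)*math.exp(-r*T))/denom
A2 = ((P0 - Pbar)*math.exp(r*T) - (PT - Pbar))/denom

def P(t):
    return A1*math.exp(r*t) + A2*math.exp(-r*t) + Pbar

assert abs(P(0) - P0) < 1e-6 and abs(P(T) - PT) < 1e-6
```

Note that gamma never enters the path: as an additive constant in profit, it drops out of the Euler equation.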

The Optimal Price Path

Although (2.36'), with A₁ and A₂ given by (2.37), constitutes a complete solution, it is natural to ask what can be said about the general configuration of the price path P*(t). Unfortunately, no simple answer can be given; the shape of the path depends on parameter values. If P_T > P₀, the monopolist's price may optimally rise steadily in the interval [0, T] from P₀ to P_T, or it may have to be lowered slightly before rising to the level of P_T at the end of the period; in the opposite case of P₀ > P_T, the optimal price path is characterized by a similar indeterminacy. Specific parameter values are needed before more definite ideas about the optimal price path can be formed. Although this aspect of the problem can be pursued a little further (see Exercise 2.4, Probs. 3 and 4), we shall not do so here. At this moment, moreover, we are not equipped to discuss whether the solution path indeed maximizes (rather than minimizes) profit.

A More General View of the Problem

Evans' formulation specifies a quadratic cost function and a linear demand function. In a more general study of the dynamic-monopolist problem by Tintner, these functions are left unspecified, and the profit function is simply written as π = π(P, P').⁷ In such a general formulation, it turns out that formula (2.21) for Special Case II can be used to good advantage. It directly yields the simple necessary condition

(2.38)  π − P'πP' = c

which can be given a clear-cut economic interpretation. First look into the economic meaning of the constant c. If profit π does not depend on the rate of change of price P', that is, if we are dealing with the static monopoly problem as a special case of the dynamic model, then πP' = 0, and (2.38) reduces to π = c. So the constant c represents the static monopoly profit. Let us therefore denote it by πₛ (subscript s for static).

Next, we note that if (2.38) is divided through by π, the equation can be rearranged into the optimization rule

ε(π, P') = 1 − πₛ/π

where ε(π, P') denotes (∂π/∂P')(P'/π), the partial elasticity of π with respect to P'. This rule states that the monopolistic firm should always select the price in such a way that the elasticity of profit with respect to the rate of change of price be equal to 1 − πₛ/π.

It is of interest to compare this elasticity rule with a corresponding rule for the static monopolist. Since in the latter case profit depends only on P, the first-order condition for profit maximization is simply dπ/dP = 0. If we multiply through by P/π, the rule becomes couched in terms of elasticity as follows: ε(π, P) = 0. Thus, while the static monopolist watches the elasticity of profit with respect to price and sets it equal to zero, the dynamic monopolist must instead watch the elasticity of profit with respect to the rate of change of price and set it equal to 1 − πₛ/π. The reader will note that this analytical result would not have emerged so readily if we had not resorted to the special-case formula (2.21).

Tintner also tried to generalize the Evans model to the case where π depends on higher-order derivatives of P. In such a general formulation, the second term on the left-hand side of (2.38) will involve additional expressions, but the economic meaning of the result is difficult to interpret.

The Matter of the Terminal Price

The foregoing discussion is based on the assumption that the terminal price P(T) is given. In reality, however, the firm is likely to have discretionary control over P(T) even though the terminal time T has been preset. If so, it will face the variable-terminal-point situation depicted in Fig. 1.5a, where the boundary condition P(T) = P_T must be replaced by an appropriate transversality condition. We shall develop such transversality conditions in the next chapter.

⁷Gerhard Tintner, "Monopoly over Time," Econometrica, 1937, pp. 160-170.
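Condition (2.38) can be checked numerically along the Evans solution path: since π is free of t, the quantity π − P'πP' should stay constant over time. A self-contained sketch (Python; the parameter values are hypothetical, chosen for illustration only):

```python
import math

# Illustrative (hypothetical) parameter values for the Evans model
a, b, h, alpha, beta = 20.0, 2.0, 1.0, 0.5, 1.0
P0, PT, T = 10.0, 12.0, 5.0

r = math.sqrt(b*(1 + alpha*b)/(alpha*h**2))
Pbar = (a + 2*alpha*a*b + beta*b)/(2*b*(1 + alpha*b))
denom = math.exp(r*T) - math.exp(-r*T)
A1 = ((PT - Pbar) - (P0 - Pbar)*math.exp(-r*T))/denom
A2 = ((P0 - Pbar)*math.exp(r*T) - (PT - Pbar))/denom

def P(t):  return A1*math.exp(r*t) + A2*math.exp(-r*t) + Pbar
def Pp(t): return r*A1*math.exp(r*t) - r*A2*math.exp(-r*t)

def profit(p, pp):
    # pi(P, P') built from (2.31)-(2.32); gamma omitted (additive constant)
    q = a - b*p + h*pp
    return p*q - (alpha*q**2 + beta*q)

def first_integral(t, eps=1e-6):
    # pi - P'*pi_{P'}, with pi_{P'} by central difference (exact for a
    # quadratic-in-P' profit function, up to rounding)
    p, pp = P(t), Pp(t)
    dpi_dPp = (profit(p, pp + eps) - profit(p, pp - eps))/(2*eps)
    return profit(p, pp) - pp*dpi_dPp

vals = [first_integral(t) for t in (0.5, 2.0, 4.0)]
assert max(vals) - min(vals) < 1e-3   # constant along the extremal
```

The common value of the first integral is the constant c of (2.38), interpreted above as the static monopoly profit.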
where the boundary condition P ( T ) = P. to P.21) [for Special Case I11 can be used to good advantage. It is of interest to compare this elasticity rule with a corresponding elasticity rule for the static monopolist. Probs. 1937.~ Thus. no simple answer can be given. In the opposite case of Po > P.5a. what price will the firm charge for profit maximization? Call that price P. satisfying the condition that the first two terms on the right-hand side of (2.4 1 If the monopolistic firm in the Evans model faces a static linear demand ( h = O). 3 Show that the extremal P*(t) will not involve a reversal in the direction of price movement in the time interval (0.rrs/rr. and P .e. If so. the profit function is simply written as d P .

2.5 TRADING OFF INFLATION AND UNEMPLOYMENT

The economic maladies of both inflation and unemployment inflict social losses. When a Phillips tradeoff exists between the two, what would be the best combination of inflation and unemployment over time? The answer to this question may be sought through the calculus of variations. In this section we present a simple formulation of such a problem adapted from a paper by Taylor.* In our adapted version, we have altered the planning horizon from infinity to a finite T.

*Dean Taylor, "Stopping Inflation in the Dornbusch Model: Optimal Monetary Policies with Alternate Price-Adjustment Equations," Journal of Macroeconomics, Spring 1989, pp. 199-216.

The Social Loss Function

Let the economic ideal consist of the income level Yf coupled with the inflation rate 0. Any deviation, positive or negative, of actual income Y from Yf is considered undesirable, and so is any deviation of the actual rate of inflation p from zero. Then we can write the social loss function as follows:

(2.39)    lambda = (Yf - Y)^2 + alpha p^2    (alpha > 0)

Because the deviation expressions are squared, positive and negative deviations are counted the same way. Y deviations and p deviations do, however, enter the loss function with different weights, in the ratio of 1 to alpha, reflecting the different degrees of distaste for the two types of deviations. Note that the unemployment variable as such is not included in the loss function; instead, it is proxied by (Yf - Y), the shortfall of current national income Y from its full-employment level Yf.

The expectations-augmented Phillips tradeoff between (Yf - Y) and p is captured in the equation

(2.40)    p = -beta(Yf - Y) + pi    (beta > 0)

where pi, unlike its usage in the preceding section, now means the expected rate of inflation. The formation of inflation expectations is assumed to be adaptive:

(2.41)    pi' = j(p - pi)    (0 < j <= 1)

If the actual rate of inflation p exceeds the expected rate pi (proving pi to be an underestimate), then pi' > 0, and pi will be revised upward; if, on the other hand, the actual rate p falls short of the expected rate pi (proving pi to be an overestimate), then pi' < 0, and pi will be revised downward. The last two equations together imply that

(2.42)    pi' = -beta j (Yf - Y)

which can be rearranged as

(2.43)    Yf - Y = -pi'/(beta j)

Note also that (2.41) implies p = pi + pi'/j. Thus, by plugging (2.43) and this expression for p into (2.39), we are able to express the loss function entirely in terms of pi and pi':

(2.44)    lambda(pi, pi') = (pi'/(beta j))^2 + alpha(pi + pi'/j)^2    [loss function]

The Problem

To recognize the importance of the present over the future, all social losses are discounted to their present values via a positive discount rate rho. The problem for the government is then to find an optimal path of pi that minimizes the total social loss over the time interval [0, T]. The initial (current) value of pi is given at pi0, and the terminal value of pi, a policy target, is assumed to be set at 0:

(2.45)    Minimize  Lambda[pi] = integral from 0 to T of lambda(pi, pi') e^{-rho t} dt
          subject to  pi(0) = pi0    (pi0 > 0 given)
          and  pi(T) = 0    (T given)
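The chain of substitutions that produces (2.44) can be checked numerically. The sketch below uses arbitrary illustrative values for the parameters and for (pi, pi'); they are assumptions chosen only to confirm that the reduced-form loss coincides with the original loss:

```python
# Check that lambda = (Yf - Y)^2 + alpha*p^2, rewritten via the Phillips
# relation and adaptive expectations, equals the reduced form (2.44).
# All numerical values below are illustrative assumptions, not from the text.
alpha, beta, j = 0.5, 2.0, 0.75      # loss weight and slope parameters (assumed)
pi, pi_prime = 0.04, -0.01           # expected inflation and its rate of change

p = pi + pi_prime / j                # actual inflation, from pi' = j*(p - pi)
income_gap = -pi_prime / (beta * j)  # Yf - Y, from pi' = -beta*j*(Yf - Y)

loss_direct = income_gap ** 2 + alpha * p ** 2
loss_reduced = (pi_prime / (beta * j)) ** 2 + alpha * (pi + pi_prime / j) ** 2
assert abs(loss_direct - loss_reduced) < 1e-15
```

The check works for any admissible parameter values, since the two expressions are algebraically identical once (2.40) and (2.41) are imposed.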

The Solution Path

On the basis of the loss function (2.44), the integrand function F = lambda(pi, pi')e^{-rho t} yields the first derivatives

    F_pi = 2 alpha (pi + pi'/j) e^{-rho t}
    F_pi' = [2 pi'/(beta^2 j^2) + (2 alpha/j)(pi + pi'/j)] e^{-rho t}

with second derivatives F_pi'pi' = [2(1 + alpha beta^2)/(beta^2 j^2)] e^{-rho t} and F_pi'pi = (2 alpha/j) e^{-rho t}. In view of these considerations, formula (2.19) gives us (after simplification) the specific necessary condition

(2.46)    pi'' - rho pi' - Omega pi = 0,   where Omega = alpha beta^2 j (rho + j)/(1 + alpha beta^2) > 0    [Euler equation]

Since this differential equation is homogeneous, its particular integral is zero, and its general solution is simply its complementary function:

(2.47)    pi*(t) = A1 e^{r1 t} + A2 e^{r2 t}    [general solution]

where

(2.48)    r1, r2 = (1/2)[rho +/- sqrt(rho^2 + 4 Omega)],   with r1 > 0 and r2 < 0

The characteristic roots are real and distinct. Moreover, r1 is positive and r2 is negative, inasmuch as the square root has a numerical value greater than rho.

To definitize the arbitrary constants A1 and A2, we successively set t = 0 and t = T in (2.47), and use the boundary conditions to get the pair of relations

    A1 + A2 = pi0
    A1 e^{r1 T} + A2 e^{r2 T} = 0

Solved simultaneously, these relations give us

(2.49)    A1 = -pi0 e^{r2 T}/(e^{r1 T} - e^{r2 T}) < 0
(2.50)    A2 = pi0 e^{r1 T}/(e^{r1 T} - e^{r2 T}) > 0

From the sign information in (2.48), (2.49), and (2.50), we can deduce that the pi*(t) path should follow the general configuration shown in Fig. 2.6. Because of the negative A1 and positive r1, the A1 e^{r1 t} component of the path emerges as the mirror image (with reference to the horizontal axis) of an exponential growth curve. On the other hand, in view of the positive A2 and negative r2, the A2 e^{r2 t} component is just a regular exponential decay curve. The pi* path, obtained by the vertical summation of the two component curves, starts from the point (0, pi0) and descends steadily toward the point (T, 0).
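The configuration just described can be verified numerically. The sketch below uses illustrative (assumed) parameter values to compute Omega, the roots r1 and r2, and the constants A1 and A2, and confirms that the path runs from pi0 down to 0 with a negative slope throughout:

```python
import math

# Solution path of pi'' - rho*pi' - Omega*pi = 0, with
# Omega = alpha*beta^2*j*(rho + j)/(1 + alpha*beta^2).
# All parameter values below are illustrative assumptions.
alpha, beta, j, rho = 0.5, 2.0, 0.75, 0.05
pi0, T = 0.10, 10.0                      # initial expected inflation; horizon

Omega = alpha * beta**2 * j * (rho + j) / (1 + alpha * beta**2)
root = math.sqrt(rho**2 + 4 * Omega)     # numerically larger than rho
r1 = (rho + root) / 2                    # positive characteristic root
r2 = (rho - root) / 2                    # negative characteristic root

# Definitize A1, A2 from pi(0) = pi0 and pi(T) = 0, as in (2.49)-(2.50):
denom = math.exp(r1 * T) - math.exp(r2 * T)
A1 = -pi0 * math.exp(r2 * T) / denom     # negative
A2 = pi0 * math.exp(r1 * T) / denom      # positive

pi_star = lambda t: A1 * math.exp(r1 * t) + A2 * math.exp(r2 * t)
pi_slope = lambda t: r1 * A1 * math.exp(r1 * t) + r2 * A2 * math.exp(r2 * t)

assert abs(pi_star(0) - pi0) < 1e-12 and abs(pi_star(T)) < 1e-12
# Both components of pi*' are negative, so the path descends throughout:
assert all(pi_slope(T * k / 10) < 0 for k in range(11))
```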

The fact that the pi* path shows a steady downward movement can also be verified from the derivative

    pi*'(t) = r1 A1 e^{r1 t} + r2 A2 e^{r2 t} < 0

since r1 A1 and r2 A2 are both negative. Having found pi*(t) and pi*'(t), we can also derive some conclusions regarding p and (Yf - Y); for these, we can simply use the relations in (2.42) and (2.43).

EXERCISE 2.5

1 Verify the result in (2.46) by using Euler equation (2.18).
2 Let the objective functional in problem (2.45) be changed to the integral of (1/2) lambda(pi, pi') e^{-rho t} dt over [0, T]. (a) Do you think the solution of the problem will be different? (b) Can you think of any advantage in including the coefficient 1/2 in the integrand?
3 Let the terminal condition in problem (2.45) be changed to pi(T) = pi_T, where 0 < pi_T < pi0. (a) What would be the values of A1 and A2? (b) Can you unambiguously evaluate the signs of A1 and A2?

CHAPTER 3
TRANSVERSALITY CONDITIONS FOR VARIABLE-ENDPOINT PROBLEMS

The Euler equation, the basic necessary condition in the calculus of variations, is normally a second-order differential equation containing two arbitrary constants. For problems with fixed initial and terminal points, the two given boundary conditions provide sufficient information to definitize the two arbitrary constants. But if the initial or terminal point is variable (subject to discretionary choice), then a boundary condition will be missing, and the void must be filled by a transversality condition. Such will be the case, for instance, if the dynamic monopolist of the preceding chapter faces no externally imposed price at time T and can treat the PT choice as an integral part of the optimization problem: the boundary condition P(T) = PT will then no longer be available. In this chapter we shall develop the transversality conditions appropriate to various types of variable terminal points.

3.1 THE GENERAL TRANSVERSALITY CONDITION

For expository convenience, we shall assume that only the terminal point is variable. Once we learn to deal with that, the technique is easily extended to the case of a variable initial point.

The Variable-Terminal-Point Problem

Our new objective is to

(3.1)    Maximize or minimize  V[y] = integral from 0 to T of F(t, y, y') dt
         subject to  y(0) = A    (A given)
         and  y(T) = yT    (T, yT free)

This differs from the previous version of the problem in that the terminal time T and terminal state yT are now "free" in the sense that they have become a part of the optimal choice process. It is to be understood that although T is free, only positive values of T are relevant to the problem.

Deriving the General Transversality Condition

To develop the necessary conditions for an extremal, we shall, as before, employ a perturbing curve p(t) and use the variable epsilon to generate neighboring paths for comparison with the extremal. The p(t) curve must still satisfy the condition p(0) = 0, to force the neighboring paths to pass through the fixed initial point; but the other condition, p(T) = 0, should now be dropped, since yT is free.

There is one further complication. Suppose that T* is the known optimal terminal time. Then any value of T in the immediate neighborhood of T* can be expressed as

(3.3)    T = T* + epsilon DT

where epsilon represents a small number, and DT represents an arbitrarily chosen (and fixed) small change in T. Note that since T* is known and DT is a prechosen magnitude, T can be considered as a function of epsilon, T(epsilon), with derivative dT/d epsilon = DT. The same epsilon is used in conjunction with the perturbing curve p(t) to generate neighboring paths of the extremal y*(t):

(3.4)    y(t) = y*(t) + epsilon p(t)

By substituting (3.4) into the functional V[y], and noting that the upper limit of integration will also vary with epsilon, we get a function V(epsilon):

(3.5)    V(epsilon) = integral from 0 to T(epsilon) of F[t, y*(t) + epsilon p(t), y*'(t) + epsilon p'(t)] dt

Our problem is to optimize this V function with respect to epsilon, and the first-order necessary condition is simply dV/d epsilon = 0. This derivative is a total derivative, taking into account the direct effect of epsilon on V as well as the indirect effect of epsilon on V through the upper limit of integration T.

Step i  Differentiating the definite integral (3.5) with respect to epsilon, and allowing for the variable upper limit, we have

(3.6)    dV/d epsilon = integral from 0 to T of [F_y p(t) + F_y' p'(t)] dt + [F]_{t=T} DT

The integral on the right closely resembles the one encountered in (2.13) during our earlier development of the Euler equation. In fact, much of the derivation process leading to (2.17) still applies here, with one exception: since we have assumed that p(0) = 0 but p(T) is not zero, the term [F_y' p(t)]_{t=T} does not vanish in the present problem, but takes the value [F_y']_{t=T} p(T). With this amendment to the earlier result, and setting dV/d epsilon = 0, we obtain the new condition

(3.7)    integral from 0 to T of p(t)[F_y - (d/dt)F_y'] dt + [F_y']_{t=T} p(T) + [F]_{t=T} DT = 0

Of the three terms on the left-hand side of (3.7), each contains its own independent arbitrary element: p(t) (the entire perturbing curve) in the first term, p(T) (the terminal value on the perturbing curve) in the second, and DT (the arbitrarily chosen change in T) in the third. Thus we cannot presume any offsetting or cancellation of terms. Consequently, in order to satisfy the condition (3.7), each of the three terms must individually be set equal to zero. It is a familiar story that when the first term in (3.7) is set equal to zero, the Euler equation emerges. This establishes the fact that the Euler equation remains valid as a necessary condition in the variable-endpoint problem. The other two terms, which relate only to the terminal time T, are where we should look for the transversality condition.

Step ii  To this end, we first get rid of the arbitrary quantity p(T) by transforming it into terms of DT and DyT, the changes in the two principal variables of the variable-terminal-point problem. This can be done with the help of Fig. 3.1. The AZ' curve there represents a neighboring path obtained by perturbing the AZ path by epsilon p(t), with epsilon set equal to one for convenience.(1) Note that while the two curves share the same initial point, because p(0) = 0, they have different heights at t = T, because p(T) is nonzero by construction of the perturbing curve. The magnitude of p(T), shown by the vertical distance ZZ', measures the direct change in yT resulting from the perturbation. But inasmuch as T itself can be altered by the amount epsilon DT (= DT, since epsilon = 1), the AZ' curve should, if DT > 0, be extended out to Z''.(2) As a result, yT is further pushed up by the vertical distance between Z' and Z''; for a small DT, this second change in yT can be approximated by y'(T) DT. Thus the total change in yT, from point Z to point Z'', is approximately

(3.8)    DyT = p(T) + y'(T) DT

Rearranging this relation allows us to express p(T) in terms of DyT and DT:(3)

    p(T) = DyT - y'(T) DT

Step iii  Using this expression to eliminate p(T) in (3.7), and dropping the first term of (3.7), which merely reproduces the Euler equation, we finally arrive at the desired general transversality condition

(3.9)    [F - y'F_y']_{t=T} DT + [F_y']_{t=T} DyT = 0

This condition, unlike the Euler equation, is relevant only to one point of time, T. Its role is to take the place of the missing terminal condition in the present problem. Depending on the exact specification of the terminal line or curve, the general condition (3.9) can be written in various specialized forms.

(1) Technically, the point T on the horizontal axis in Fig. 3.1 should be labeled T*. We are omitting the * for simplicity.
(2) Although we are initially interested only in the solid portion of the AZ path, the equation for that path should enable us also to plot the broken portion as an extension of AZ. The perturbing curve is then applied to the extended version of AZ.
(3) The result in (3.8) is valid even if epsilon is not set equal to one as in Fig. 3.1. This is justifiable because the result of this discussion is a transversality condition which, unlike the Euler equation, is relevant only to one point of time, T, and is stated in terms of T (without the *) anyway.

3.2 SPECIALIZED TRANSVERSALITY CONDITIONS

In this section we consider five types of variable terminal points: vertical terminal line, horizontal terminal line, terminal curve, truncated vertical terminal line, and truncated horizontal terminal line.

Vertical Terminal Line (Fixed-Time-Horizon Problem)

The vertical-terminal-line case involves a fixed T. Thus DT = 0, and the first term in (3.9) drops out. But since DyT is arbitrary and can take either sign, the only way to make the second term in (3.9) vanish for sure is to have F_y' = 0 at t = T. This gives rise to the transversality condition

(3.10)    [F_y']_{t=T} = 0

which is sometimes referred to as the natural boundary condition.

Horizontal Terminal Line (Fixed-Endpoint Problem)

For the horizontal-terminal-line case, the situation is exactly reversed: DyT = 0, but DT is arbitrary. So the second term in (3.9) automatically drops out, but the first does not. Since DT is arbitrary, the only way to make the first term vanish for sure is to have the bracketed expression equal to zero. Thus the transversality condition is

(3.11)    [F - y'F_y']_{t=T} = 0

It might be useful to give an economic interpretation to (3.10) and (3.11).
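Before turning to interpretation, the natural boundary condition (3.10) can be seen at work in a small illustration of my own construction (not a problem from the text): minimize V[y] = integral over [0, 1] of (y'^2/2 + y) dt with y(0) = 0 and y(1) free. The Euler equation gives y'' = 1, and (3.10) supplies the missing condition y'(1) = 0, so y*(t) = t^2/2 - t. The sketch compares V along y* with V along neighboring paths that share the initial point but end anywhere:

```python
import math

# Minimize V[y] = integral_0^1 (0.5*y'^2 + y) dt, y(0) = 0, y(1) free.
# Euler equation y'' = 1 plus natural boundary condition y'(1) = 0 give
#   y*(t) = t^2/2 - t   (a toy problem constructed for illustration).
N = 2000
h = 1.0 / N
ts = [k * h for k in range(N + 1)]

def V(y):
    """Midpoint/trapezoid approximation of the functional."""
    total = 0.0
    for k in range(N):
        yp = (y[k + 1] - y[k]) / h       # forward-difference slope
        ym = 0.5 * (y[k] + y[k + 1])     # interval-midpoint value of y
        total += h * (0.5 * yp * yp + ym)
    return total

y_star = [t * t / 2 - t for t in ts]
v_star = V(y_star)                       # analytic value is -1/6

# Perturbations keep y(0) = 0 but are free to move the endpoint y(1):
for eps in (-0.3, -0.1, 0.1, 0.3):
    for p in ([t for t in ts], [t * t for t in ts], [math.sin(t) for t in ts]):
        neighbor = [ys + eps * pk for ys, pk in zip(y_star, p)]
        assert V(neighbor) > v_star - 1e-6   # y* is (numerically) the minimum
```

Because the integrand is convex, the extremal plus the natural boundary condition here pick out the true minimum, which is what the comparison confirms.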

To fix ideas, let us interpret F(t, y, y') as a profit function, where y represents capital stock and y' represents net investment, the rate of capital accumulation. Net investment entails taking resources away from the current profit-making business operation, so as to build up capital which will enhance future profit. Hence there exists a tradeoff between current profit and future profit. At any time t, with a given capital stock y, a decision to select a specific investment rate, say y'0, will result in the current profit F(t, y, y'0). The effect of the investment decision on future profits enters through the intermediary of capital, and the imputed (or shadow) value to the firm of a unit of capital is measured by the derivative F_y'. Thus, if we decide to leave (not use up) a unit of capital at the terminal time, the decision entails a negative value equal to -F_y'; and by adding such value measures to the current profit, we can see the overall (current as well as future) profit implication of an investment decision.

Under this interpretation, the transversality condition (3.10), which can equivalently be written as [-F_y']_{t=T} = 0, instructs the firm, in a free-terminal-state problem, to avoid any sacrifice of profit that would be incurred by leaving a positive terminal capital. In order to maximize profit in the interval [0, T] but not beyond, the firm should, at time T, use up all the capital it ever accumulated. Similarly, since the value measure of the investment rate y'0 is -y'0 F_y', the overall profit implication of that rate is F - y'0 F_y', and the transversality condition (3.11) instructs the firm, in a problem with a free terminal time, to select a T such that a decision to invest and accumulate capital will, at t = T, no longer yield any overall (current and future) profit. In other words, all the profit opportunities should have been fully taken advantage of by the optimally chosen terminal time.

Terminal Curve

With a terminal curve yT = phi(T), neither DyT nor DT is assigned a zero value, so neither term in (3.9) drops out. However, for a small DT, the terminal curve implies that DyT = phi'(T) DT. So it is possible to eliminate DyT in (3.9) and combine the two terms into the form

(3.12)    [F - y'F_y' + phi'F_y']_{t=T} DT = 0

Since DT is arbitrary, we can deduce the transversality condition

(3.13)    [F - y'F_y' + phi'F_y']_{t=T} = 0

Unlike the last two cases, which involve a single unknown in the terminal point (either yT or T), the terminal-curve case requires us to determine both yT and T. Thus two relations are needed for this purpose. Condition (3.13) only provides one; the other is supplied by the equation yT = phi(T).

Truncated Vertical Terminal Line

The usual case of vertical terminal line involves a fixed T. When the line is truncated, restricted by the terminal condition yT >= ymin, where ymin is a minimum permissible level of y, the optimal solution can have two possible types of outcome: yT* > ymin or yT* = ymin. If yT* > ymin, the terminal restriction is automatically satisfied; it is nonbinding. Then there are admissible neighboring paths with terminal values both above and below yT*, so that DyT can take either sign, and the transversality condition is in that event the same as (3.10):

(3.14)    [F_y']_{t=T} = 0    for yT* > ymin

The other outcome, yT* = ymin, only admits neighboring paths with terminal values yT >= yT*. This means that DyT is no longer completely arbitrary (positive or negative), but is restricted to be nonnegative. Assuming the perturbing curve to have terminal value p(T) > 0, DyT >= 0 would mean that epsilon >= 0 (epsilon = 1 in Fig. 3.1). The nonnegativity of epsilon, in turn, means that the first-order condition dV/d epsilon = 0 must be changed to an inequality, as in the Kuhn-Tucker conditions.(4) For a maximization problem, the <= type of inequality is called for; and since the first term of (3.9) is out of the picture (DT = 0), (3.9) should become

(3.15)    [F_y']_{t=T} DyT <= 0    for yT* = ymin

And since DyT >= 0, (3.15) implies the condition

(3.16)    [F_y']_{t=T} <= 0    for yT* = ymin

Combining (3.14) and (3.16), we may write the following summary statement of the transversality condition for a maximization problem:

(3.17)    [F_y']_{t=T} <= 0,   yT* >= ymin,   (yT* - ymin)[F_y']_{t=T} = 0    [for maximization of V]

(4) For an explanation of the Kuhn-Tucker conditions, see Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984.
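The two possible outcomes can be illustrated with the same kind of toy problem used earlier (my own construction, not from the text): minimize V[y] = integral over [0, 1] of (y'^2/2 + y) dt with y(0) = 0 and y(1) >= ymin. Extremals are y(t) = t^2/2 + c1*t, and F_y' = y', so [F_y']_{t=1} = 1 + c1. The sketch tries the natural boundary condition first, and falls back on a fixed endpoint when the truncation binds (using the minimization version of the condition, with the inequality reversed):

```python
# Two-outcome procedure for a truncated vertical terminal line (minimization).
# Toy problem: V[y] = integral_0^1 (0.5*y'^2 + y) dt, y(0) = 0, y(1) >= y_min.
# Extremals: y(t) = t^2/2 + c1*t, so y(1) = 0.5 + c1 and [F_y']_{t=1} = 1 + c1.

def solve(y_min):
    c1 = -1.0                      # step 1: try the natural BC, y'(1) = 0
    yT = 0.5 + c1                  # resulting terminal state y(1) = -0.5
    if yT >= y_min:                # truncation not binding: done
        return c1, yT
    c1 = y_min - 0.5               # step 2: treat y(1) = y_min as fixed
    assert 1.0 + c1 >= 0.0         # minimization version: [F_y']_{t=1} >= 0
    return c1, y_min

assert solve(-1.0) == (-1.0, -0.5)   # interior outcome: natural BC applies
assert solve(0.0) == (-0.5, 0.0)     # binding outcome: endpoint forced to y_min
```

In the binding case the complementary-slackness product is zero because yT* = ymin, even though [F_y']_{t=1} = 0.5 is strictly positive.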
If the problem is instead one of minimizing V, the inequality sign in (3.16) must be reversed, and the transversality condition becomes

(3.17')    [F_y']_{t=T} >= 0,   yT* >= ymin,   (yT* - ymin)[F_y']_{t=T} = 0    [for minimization of V]

In practical problem solving, we can try the [F_y']_{t=T} = 0 condition in (3.14) first, and check the resulting yT* value. If yT* >= ymin, the terminal restriction is satisfied and the problem solved. If yT* < ymin, the unconstrained terminal value lies below the permissible range, and [F_y']_{t=T} fails to reach zero on the truncated terminal line. Then, in order to satisfy the complementary-slackness condition in (3.17) or (3.17'), we just set yT* = ymin, treating the problem as one with a fixed terminal point.

Truncated Horizontal Terminal Line

The horizontal terminal line may be truncated by the restriction T <= Tmax, where Tmax represents a maximum permissible time for completing a task, a deadline. The analysis of such a situation is very similar to that of the truncated vertical terminal line just discussed. By analogous reasoning, we can derive the following transversality condition for a maximization problem:

(3.18)    [F - y'F_y']_{t=T} >= 0,   T* <= Tmax,   (T* - Tmax)[F - y'F_y']_{t=T} = 0    [for maximization of V]

If the problem is instead to minimize V, then the first inequality in (3.18) must be reversed:

(3.18')    [F - y'F_y']_{t=T} <= 0,   T* <= Tmax,   (T* - Tmax)[F - y'F_y']_{t=T} = 0    [for minimization of V]

The tripartite conditions (3.17) and (3.18) may seem complicated, but their application is not.

EXAMPLE 1  Find the curve with the shortest distance between the point (0, 1) and the line y = 2 - t. Referring to Fig. 3.2, we see that this is a problem with a terminal curve yT = phi(T) = 2 - T, but otherwise it is similar to the shortest-distance problem encountered earlier. Here, the problem is to

    Minimize  V[y] = integral from 0 to T of (1 + y'^2)^{1/2} dt
    subject to  y(0) = 1
    and  y(T) = 2 - T

It has previously been shown that with this integrand the Euler equation yields a straight-line extremal, y* = at + b, with two arbitrary constants a and b. Since phi' = -1, the transversality condition (3.13) requires

    (1 + y'^2)^{1/2} - y'^2(1 + y'^2)^{-1/2} - y'(1 + y'^2)^{-1/2} = 0    (at t = T)

Multiplying through by (1 + y'^2)^{1/2} and simplifying, we can reduce this equation to the form y' = 1 (at t = T). But the extremal actually has a constant slope, y*' = a, at all values of t. Thus we must have a = 1. From the initial condition y(0) = 1, we can readily determine that b = 1, and the extremal is therefore

    y*(t) = t + 1

As shown in Fig. 3.2, the extremal is a straight line that meets the terminal line at the point (1/2, 3/2). What the transversality condition does in this case is to require the extremal to be orthogonal to the terminal line: the slopes of the terminal line and the extremal are, respectively, -1 and +1, so the two lines are perpendicular to each other. For this integrand, then, the terminal-curve condition translates into an orthogonality requirement.
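Since the extremals of the arc-length integrand are straight lines, Example 1 reduces to a one-parameter search over the slope a of the candidate path y = a*t + 1, and the orthogonality conclusion can be checked numerically:

```python
import math

# Example 1 check: shortest path from (0, 1) to the line y = 2 - t.
# A straight path y = a*t + 1 meets the terminal line where a*T + 1 = 2 - T,
# i.e. T = 1/(a + 1); its length is T*sqrt(1 + a^2) = sqrt(1 + a^2)/(1 + a).

def length(a):
    return math.sqrt(1 + a * a) / (1 + a)

# The transversality condition selects slope a = 1; it beats nearby slopes:
assert all(length(1.0) <= length(1.0 + d) for d in (-0.5, -0.1, 0.1, 0.5))

# Meeting point (1/2, 3/2), and orthogonality (product of slopes = -1):
T = 1 / (1 + 1)
assert (T, 1 * T + 1) == (0.5, 1.5)
slope_extremal, slope_terminal = 1.0, -1.0
assert slope_extremal * slope_terminal == -1.0
```

The minimized length sqrt(2)/2 is, as expected, the perpendicular distance from (0, 1) to the line.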

EXAMPLE 2  Find the extremal of the functional

    V[y] = integral from 0 to T of (ty' + y'^2) dt

with boundary conditions y(0) = 1, y(T) = 10, and T free. Here we have a horizontal terminal line, as depicted in Fig. 3.3. Since F = ty' + y'^2, we have F_y' = t + 2y', and the transversality condition (3.11) becomes

    ty' + y'^2 - y'(t + 2y') = 0    (at t = T)

which reduces to -y'^2 = 0, or y' = 0, at t = T. Unlike in Example 1, where the transversality condition translates into an orthogonality requirement, the transversality condition in the present example dictates that the extremal share a common slope with the given horizontal terminal line at t = T; that is, the extremal is required to have a zero slope at the terminal time.

The general solution of the Euler equation for this problem has previously been found to be a quadratic path,

    y*(t) = -t^2/4 + c1 t + c2

Since y(0) = 1, we have c2 = 1. To meet the zero-slope condition, we differentiate the general solution to get y*'(t) = -t/2 + c1, and let y*'(T) = 0; it then follows that c1 = T/2. However, as in the fixed-terminal-point problem, we still cannot determine the specific value of c1 without knowing the value of T. To find T, we make use of the information that yT = 10 (horizontal terminal line). Substituting c1 = T/2 and c2 = 1 into the general solution, setting t = T, and letting the resulting expression take the stipulated value 10, we obtain the equation

    -T^2/4 + T^2/2 + 1 = 10,   that is,   T^2/4 = 9

The solution values of T are therefore +6 and -6. Rejecting the negative value, we end up with T* = 6, which implies c1 = 3. The extremal is then

    y*(t) = -t^2/4 + 3t + 1

This time path is shown in Fig. 3.3. As required by the transversality condition, the extremal indeed attains a zero slope at the terminal point (6, 10).

Application to the Dynamic Monopoly Model

Let us now consider the Evans model of a dynamic monopolist as a problem with a fixed terminal time T but a variable terminal state PT. For simplicity, we shall in this discussion assign specific numerical values to the parameters, for otherwise the solution expressions will become too unwieldy. Let the cost function be quadratic with fixed cost 1000, the demand function linear with the coefficients postulated earlier, and the terminal time T = 2. The postulated parameter values yield the characteristic roots r1, r2 = +0.12, -0.12 and a particular integral of 14 1/3, so that the general solution of the Euler equation is

(3.20)    P*(t) = A1 e^{0.12 t} + A2 e^{-0.12 t} + 14 1/3

The constants A1 and A2 can no longer be definitized by a pair of given boundary values; the initial condition must now be coupled with an appropriate transversality condition.
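Before solving for the monopolist's constants, note that the free-terminal-time arithmetic of Example 2 above can be verified in a few lines:

```python
# Example 2 check: V[y] = integral_0^T (t*y' + y'^2) dt, y(0) = 1,
# y(T) = 10, T free.  Extremals are quadratic: y(t) = -t^2/4 + c1*t + c2,
# and the horizontal-terminal-line condition reduces to y'(T) = 0.

c2 = 1.0                 # from y(0) = 1
# y'(t) = -t/2 + c1, so y'(T) = 0 gives c1 = T/2; then y(T) = 10 gives
#   -T^2/4 + T^2/2 + 1 = 10  =>  T^2 = 36  =>  T* = 6 (rejecting T = -6)
T = 6.0
c1 = T / 2

y = lambda t: -t * t / 4 + c1 * t + c2
dy = lambda t: -t / 2 + c1

assert y(0) == 1.0       # initial condition
assert y(T) == 10.0      # hits the terminal line y = 10
assert dy(T) == 0.0      # with zero slope, as transversality requires
```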

Now adopt a variable terminal state with the truncated vertical terminal line PT >= 10 at T = 2. With the vertical terminal line thus truncated, the appropriate transversality condition is (3.17), and, following the procedure outlined earlier, the condition [F_P']_{t=T} = 0 is to be tried first. Setting the F_P' expression equal to zero at t = T = 2 yields one linear equation in A1 and A2; the P(T) term in that equation refers to the general solution P*(t) in (3.20) evaluated at t = T = 2, and P'(T) is the derivative of (3.20) evaluated at the same point of time. We need to couple this equation with the condition that governs the initial point, obtained from (3.20) by setting t = 0 and equating the result to P0 = 11 1/3:

(3.23)    A1 + A2 = -3

When the two equations are solved simultaneously, the constants A1 and A2 turn out to be (after rounding) -7.716 and 4.716, giving us the definite solution. It remains to check whether this solution satisfies the terminal specification PT >= 10. Setting t = T = 2 in the definite solution, we find that P*(2) lies well above 10. Since this does meet the stipulated restriction, the problem is solved.(5)

(5) While Evans' original treatment does include a discussion of the variable terminal price, it neither makes use of the transversality condition nor gives a completely determined price path for that case. For his alternative method of treatment, see his Mathematical Introduction to Economics, McGraw-Hill, New York, 1930, pp. 150-153.

The Inflation-Unemployment Tradeoff Again

The inflation-unemployment problem in Sec. 2.5 has a fixed terminal point that requires the expected rate of inflation pi to attain a value of zero at the given terminal time T: pi(T) = 0. In view of the given loss function, this requirement, which drives the rate of expected inflation down to zero at the terminal time, is not surprising. But it may be interesting to ask: What will happen if the terminal condition comes as a vertical terminal line instead?

With a vertical terminal line at the given T, the condition governing the initial point, still assumed to be pi(0) = pi0, gives (upon setting t = 0 in the general solution)

(3.24)    A1 + A2 = pi0

and in place of the terminal condition pi(T) = 0 we now must use the transversality condition [F_pi']_{t=T} = 0. In view of the loss function (2.44), this takes the form

(3.25)    F_pi' = 2[((1 + alpha beta^2)/(beta^2 j^2)) pi' + (alpha/j) pi] e^{-rho t} = 0    (at t = T)

which, since e^{-rho t} never vanishes, can be satisfied if and only if the expression in the brackets is zero. Upon normalizing, we can thus give condition (3.25) the specific form

(3.25')    pi'(T) + [alpha beta^2 j/(1 + alpha beta^2)] pi(T) = 0

The pi(T) and pi'(T) expressions are, of course, to be obtained from the general solution pi*(t) in (2.47) and its derivative, both evaluated at t = T; condition (3.25') thus becomes one more linear equation in the constants A1 and A2. When (3.24) and (3.25') are solved simultaneously, we finally find the definite values of A1 and A2, which turn the general solution into the definite solution. Now that the terminal state is endogenously determined, it may be interesting to ask what terminal value pi*(T) emerges from the solution.
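In all of these variable-terminal-state applications, definitizing A1 and A2 comes down to a 2-by-2 linear system: the initial condition paired with the transversality relation. The sketch below first checks the initial-condition arithmetic for the monopoly constants reported above, and then shows a generic Cramer's-rule solver; the second system's coefficients (2, -1, 0) are purely hypothetical stand-ins for a transversality relation, since the exact coefficients depend on the demand and cost parameters:

```python
# Initial-condition check: P*(0) = A1 + A2 + P_bar must equal P0.
P_bar, P0 = 14 + 1 / 3, 11 + 1 / 3        # particular integral and P(0), as in (3.20)
A = (-7.716, 4.716)                       # reported constants (after rounding)
assert abs(sum(A) - (P0 - P_bar)) < 1e-9  # i.e. A1 + A2 = -3

def definitize(e11, e12, b1, e21, e22, b2):
    """Solve e11*A1 + e12*A2 = b1, e21*A1 + e22*A2 = b2 by Cramer's rule."""
    det = e11 * e22 - e12 * e21
    return (b1 * e22 - b2 * e12) / det, (e11 * b2 - e21 * b1) / det

# Hypothetical transversality relation 2*A1 - A2 = 0 paired with A1 + A2 = -3:
A1, A2 = definitize(1.0, 1.0, P0 - P_bar, 2.0, -1.0, 0.0)
assert abs(A1 + A2 - (P0 - P_bar)) < 1e-12 and abs(2 * A1 - A2) < 1e-12
```

The same `definitize` helper applies to the tradeoff model, with (3.24) and (3.25') supplying the two rows of the system.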

EXERCISE 3.2
1 For the functional V[y] = integral from 0 to T of (t^2 + y'^2) dt, the general solution to the Euler equation is y*(t) = c1*t + c2 (see Exercise 2.2, Prob. 1). (a) Find the extremal if the initial condition is y(0) = 4 and the terminal condition is T = 2, y_T free. (b) Sketch a diagram showing the initial point, the terminal line, and the extremal.
2 How will the answer to the preceding problem change, if the terminal condition is altered to: T = 2, y_T >= 3?
3 Let the terminal condition in Prob. 1 be changed to: y_T = 5, T free. (a) Find the new extremal. What is the optimal terminal time T*? (b) Sketch a diagram showing the initial point, the terminal point, and the extremal.
4 For the functional V[y] = integral from 0 to T of (y'^2/t^3) dt, the general solution to the Euler equation is y*(t) = c1*t^2 + c2 (see Exercise 2.2, Prob. 4). (a) Find the extremal(s) if the initial condition is y(0) = 0 and the terminal condition is y_T = 3 - T. (b) Sketch a diagram to show the terminal curve and the extremal(s).
5 For the truncated-horizontal-terminal-line problem, use the same line of reasoning employed for the truncated vertical terminal line to derive the transversality condition.
6 The discussion leading to condition (3.16) for a truncated vertical terminal line is based on the assumption that p(T) > 0. Show that if the perturbing curve is such that p(T) < 0 instead, the same condition (3.16) will emerge.

3.3 THREE GENERALIZATIONS
What we have learned about the variable terminal point can be generalized in three directions.

A Variable Initial Point
If the initial point is variable, then the boundary condition y(0) = A no longer holds, and an initial transversality condition is needed to fill the void. Since the transversality conditions developed in the preceding section can be applied, mutatis mutandis, to the case of variable initial points, there is no need for further discussion. If a problem has both the initial and terminal points variable, then two transversality conditions have to be used to definitize the two arbitrary constants that arise from the Euler equation.

The Case of Several State Variables
When n state variables appear in the objective functional, the general (terminal) transversality condition (3.9) must be modified into

(3.27)  [F - sum over j of y_j'F_{y_j'}]_{t=T} dT + sum over j of [F_{y_j'}]_{t=T} dy_{jT} = 0

It should be clear that (3.9) constitutes a special case of (3.27) with n = 1. With the two state variables denoted by y and z, so that the integrand is F(t, y, z, y', z'), the general transversality condition for n = 2 is

(3.27')  [F - y'F_y' - z'F_z']_{t=T} dT + [F_y']_{t=T} dy_T + [F_z']_{t=T} dz_T = 0

Since the various increments in (3.27) represent independently determined arbitrary quantities, there cannot be any presumption that the terms in (3.27) can cancel out one another. Consequently, each term that remains will give rise to a separate transversality condition. If T is fixed, then dT = 0, and the first term in (3.27) drops out. Similarly, if any variable y_j has a fixed terminal value, then dy_{jT} = 0 and the jth term in (3.27) drops out.

EXAMPLE 1 Assume that T is fixed, but y_T and z_T are both free. Then we have dT = 0, and the first term in (3.27') drops out. Eliminating the first term in (3.27') and setting the other two terms individually equal to zero, we get the transversality conditions

[F_y']_{t=T} = 0  and  [F_z']_{t=T} = 0

EXAMPLE 2 Suppose that the terminal values of y and z satisfy the restrictions

y_T = phi(T)  and  z_T = psi(T)

Then we have a pair of terminal curves. For a small dT, we may expect the following to hold:

dy_T = phi'(T) dT  and  dz_T = psi'(T) dT

Using these to eliminate dy_T and dz_T in (3.27'), we obtain the transversality condition

[F + (phi' - y')F_y' + (psi' - z')F_z']_{t=T} = 0

which the reader should compare with (3.10). With this transversality condition, plus the two initial conditions and the two terminal curves, we now have five equations to determine T* as well as the four arbitrary constants in the two Euler equations.

The Case of Higher Derivatives
When the functional V[y] has the integrand F(t, y, y', ..., y^(n)), the (terminal) transversality condition again requires modification. In view of the rarity of high-order derivatives in economic applications, we shall discuss here only the case of F(t, y, y', y''); besides the dT and dy_T terms, the general transversality condition then contains an additional term [F_y'']_{t=T} dy'_T. The new symbol appearing in this condition, dy'_T, means the change in the terminal slope of the y path when it is perturbed; it would mean the difference between the slope of the perturbed path at its terminal point and the slope of the original path at Z. If the problem specifies that the terminal slope must remain unchanged, then dy'_T = 0, and the last term drops out. If the terminal slope is free to vary, then the last term will call for the condition F_y'' = 0 at t = T.

EXERCISE 3.3
1 For the functional V[y] = integral from 0 to T of (y + yy' + y' + (1/2)y'^2) dt, with y(0) = y_0 and T free: (a) How many transversality condition(s) does the problem require? Why? (b) Write out the transversality condition(s).
2 If we have a vertical initial line at t = 0 and a vertical terminal line at t = 1, write out the transversality conditions, and use them to definitize the constants in the general solution.
3 In a problem with the functional integral from 0 to T of F(t, y, z, y', z') dt, the initial conditions y(0) = A and z(0) = B, and the terminal conditions y_T = C, z_T = D, T free (A, B, C, D are constants): (a) How many transversality condition(s) does the problem require? Why? (b) Write out the transversality condition(s).
4 In the preceding problem, suppose that the terminal conditions on y and z are replaced by the terminal curves y_T = phi(T) and z_T = psi(T). (a) How many transversality condition(s) does the problem require? Why? (b) Write out the transversality condition(s).

3.4 THE OPTIMAL ADJUSTMENT OF LABOR DEMAND
Consider a firm that has decided to raise its labor input from L_0 to a yet undetermined level L_T, after encountering a wage reduction at t = 0. The adjustment of labor input is assumed to entail a cost that varies with L'(t), the rate of change of L. Thus the firm has to decide on the best speed of adjustment toward L_T, as well as the magnitude of L_T itself, as illustrated in Fig. 3.4. This is the essence of the labor adjustment problem discussed in a paper by Hamermesh.
Another feature to note about the problem is that the objective functional should include not only the net profit from t = 0 to t = T* (a definite integral), but also the capitalized value of the profit in the post-T* period, which is affected by the choice of L_T and T.

The Problem
For simplicity, let the profit of the firm be expressed by the general function pi(L), with pi''(L) < 0. The labor input is taken to be the sole determinant of profit because we have subsumed all aspects of production and demand in the profit function. The cost of adjusting L is assumed to be

(3.29)  C(L') = bL'^2 + k    (b > 0; k > 0 when L' is nonzero)

Thus the net profit at any time is pi(L) - C(L'). The problem of the firm is to maximize the total net profit over time during the process of changing the labor input. Inasmuch as it must choose not only the optimal L_T, but also an optimal time T* for completing the adjustment, we have both the terminal state and the terminal time free. Since the profit rate at time T is pi(L_T), and it will remain at that level thereafter, the capitalized value of the post-T* profit stream is pi(L_T)/rho by the standard capitalization formula, and its present value is (1/rho)pi(L_T)e^{-rho T}, where rho is the given discount rate and T is to be set equal to T*. The statement of the problem is therefore

(3.30)  Maximize  Pi[L] = integral from 0 to T of [pi(L) - bL'^2 - k] e^{-rho t} dt + (1/rho)pi(L_T)e^{-rho T}
        subject to  L(0) = L_0    (L_0 given)
        and  L_T > L_0, T free

1 Daniel S. Hamermesh, "Labor Demand and the Structure of Adjustment Costs," American Economic Review, September 1989, pp. 674-689.

FIGURE 3.4

While the functional in (3.30) contains an extra term outside of the integral, note first that if that term were a constant, we could ignore it in the optimization process, because the same solution path would emerge either way; it would affect only the optimal value of Pi, but not the optimal L path. But since that term, call it z(T), varies with the optimal choice of L_T and T, we must explicitly take it into account. From our earlier discussion of the problem of Mayer and the problem of Bolza, we have learned that such a term can be brought into the integral. By defining

(3.31)  z(t) = (1/rho)pi(L)e^{-rho t}    so that    z'(t) = [(1/rho)pi'(L)L' - pi(L)]e^{-rho t}

we can write the post-T* term in (3.30) as

(3.31')  (1/rho)pi(L_T)e^{-rho T} = z(T) = z(0) + integral from 0 to T of z'(t) dt

Substitution of (3.31') into the functional in (3.30) yields, after combining the two integrals, the new but equivalent functional

(3.32)  Pi[L] = (1/rho)pi(L_0) + integral from 0 to T of [(1/rho)pi'(L)L' - bL'^2 - k] e^{-rho t} dt

While this functional still contains an extra term outside of the integral, that term is a constant. So it affects only the optimal value of Pi, but not the optimal L path.

The Solution
To find the Euler equation, we apply formula (2.18) to the integrand of (3.32); after simplification, we obtain

(3.33)  L'' - rho L' + pi'(L)/(2b) = 0    [Euler equation]

The transversality condition in this problem is twofold. Both L_T and T being free, we must satisfy two transversality conditions.2 The condition [F_L']_{t=T} = 0 means that

(3.34)  pi'(L_T) = 2 rho b L'(T)

And the condition [F - L'F_L']_{t=T} = 0 means (after simplification) that

(3.35)  L'(T) = (k/b)^{1/2}

where we take the positive square root because L is supposed to increase from L_0 to L_T. The two transversality conditions (3.34) and (3.35), plus the initial condition L(0) = L_0, can provide the information needed for definitizing the arbitrary constants in the solution path, as well as for determining the optimal values of L_T and T. To obtain a specific quantitative solution, however, it is necessary to specify the form of the profit function pi(L). Since our primary purpose of citing this example is to illustrate a case with both the terminal state and the terminal

2 Technically, since L_T is restricted to L_T > L_0, the condition for a truncated terminal line would apply; but because L_T should be strictly greater than L_0, the complementary-slackness requirement reduces it to the free-terminal-state condition. It is also understood that the L' symbol in the transversality conditions refers to the derivative of the general solution of the Euler equation.



time free, and to illustrate the problem of Bolza, we shall not delve into a specific quantitative solution.
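The Euler equation (3.33) can nevertheless be checked numerically once a profit function is specified. The sketch below assumes a hypothetical quadratic pi(L) = alpha*L - beta*L^2 (our own specification; the text leaves pi general), under which (3.33) becomes linear with constant coefficients and its general solution can be written in exponentials; the finite-difference residual of (3.33) along such a solution should vanish:

```python
import math

# Residual check of the Euler equation (3.33), L'' - rho*L' + pi'(L)/(2b) = 0,
# for a hypothetical quadratic profit function pi(L) = alpha*L - beta*L**2
# (our own choice; then pi'(L) = alpha - 2*beta*L, and (3.33) is linear).

rho, b, alpha, beta = 0.05, 1.0, 10.0, 0.5
Lbar = alpha / (2 * beta)                      # stationary solution of (3.33)
disc = math.sqrt(rho**2 + 4 * beta / b)
r1, r2 = (rho + disc) / 2, (rho - disc) / 2    # roots of r^2 - rho*r - beta/b = 0

def L(t, A=0.3, B=-0.2):        # A, B: arbitrary constants of the general solution
    return Lbar + A * math.exp(r1 * t) + B * math.exp(r2 * t)

def residual(t, h=1e-4):
    Lpp = (L(t + h) - 2 * L(t) + L(t - h)) / h**2        # L'' by central difference
    Lp  = (L(t + h) - L(t - h)) / (2 * h)                # L'
    return Lpp - rho * Lp + (alpha - 2 * beta * L(t)) / (2 * b)

print(max(abs(residual(t)) for t in (0.0, 0.5, 1.0, 2.0)))   # ~0 up to O(h^2)
```

The constants A and B would, in a full solution, be definitized jointly with T* by L(0) = L_0 and the transversality conditions (3.34) and (3.35).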


EXERCISE 3.4
1 (a) From the two transversality conditions (3.34) and (3.35), deduce the location of the optimal terminal state L_T* with reference to Fig. 3.4. (b) How would an increase in rho affect the location of L_T*? How would an increase in b or k affect L_T*? (c) Interpret the economic meaning of your result in (b).
2 In the preceding problem, let the profit function be


(a) Find the value of L_T*. (b) At what L value does pi reach a peak? (c) In light of (b), what can you say about the location of L_T* in relation to the pi(L) curve?
3 (a) From the transversality condition (3.35), deduce the location of the optimal terminal time T* with reference to a graph of the solution path L*(t). (b) How would an increase in k affect the location of T*? How would an increase in b affect T*?

Our discussion has so far concentrated on the identification of the extremals of a problem, without attention to whether they maximize or minimize the functional V[y]. The time has come to look at the latter aspect of the problem. This involves checking the second-order conditions. But we shall also discuss a sufficient condition based on the concept of concavity and convexity. A simple but useful test known as the Legendre necessary condition for a maximum or a minimum will also be introduced.

4.1 SECOND-ORDER CONDITIONS
By treating the functional V[y] as a function V(eps) and setting the first derivative dV/deps equal to zero, we have derived the Euler equation and transversality conditions as first-order necessary conditions for an extremal. To distinguish between maximization and minimization problems, we can take the second derivative d2V/deps2, and use the following standard second-order conditions from calculus:
Second-order necessary conditions:

    d2V/deps2 <= 0    [for maximization of V]
    d2V/deps2 >= 0    [for minimization of V]

Second-order sufficient conditions:

    d2V/deps2 < 0    [for maximization of V]
    d2V/deps2 > 0    [for minimization of V]
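A finite-difference version of these conditions can be run on a concrete family of paths. The example below (our own, not from the text) embeds the straight line joining (0, 0) and (1, 1) in the one-parameter family y = t + eps*t(1 - t) under the arc-length integrand F = (1 + y'^2)^{1/2}, and checks that dV/deps is approximately 0 and d2V/deps2 > 0 at eps = 0:

```python
import math

# Finite-difference illustration of the second-order conditions (our own toy
# example): treat V as a function of eps along the family y = t + eps*t*(1-t)
# joining (0,0) and (1,1), with arc-length integrand F = sqrt(1 + y'^2).
# The straight line (eps = 0) should give dV/deps = 0 and d2V/deps2 > 0.

def V(eps, n=4000):
    ts = [i / n for i in range(n + 1)]
    vals = [math.sqrt(1 + (1 + eps * (1 - 2 * t))**2) for t in ts]  # y' = 1 + eps(1-2t)
    return (sum(vals) - 0.5 * (vals[0] + vals[-1])) / n             # trapezoid rule

h = 1e-3
dV  = (V(h) - V(-h)) / (2 * h)
d2V = (V(h) - 2 * V(0) + V(-h)) / h**2
print(dV, d2V)   # dV ~ 0 (first-order condition), d2V > 0 (minimum)
```

Analytically, d2V/deps2 at eps = 0 equals the integral of F_y'y' p'^2 with F_y'y' = 2^{-3/2} and p' = 1 - 2t, about 0.118, which the finite difference reproduces.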

The Second Derivative of V
To find d2V/deps2, we differentiate the dV/deps expression in (2.13) with respect to eps, bearing in mind that (1) all the partial derivatives of F(t, y, y') are, like F itself, functions of t, y, and y', and (2) y and y' are, in turn, both functions of eps, with derivatives

(4.1)  dy/deps = p(t)

As will be seen below, the second derivative d2V/deps2 is a definite integral whose integrand is a quadratic form in p(t) and p'(t). If it can be established that this quadratic form, with F_yy, F_yy', and F_y'y' evaluated on the extremal, is negative definite for every t, then d2V/deps2 < 0, and the extremal maximizes V. Similarly, positive definiteness of the quadratic form for every t is sufficient for minimization of V. Even if we can only establish sign semidefiniteness, we can at least have the second-order necessary conditions checked. For some reason, however, the quadratic-form test was totally ignored in the historical development of the classical calculus of variations. In a more recent development (see the next section), the concavity/convexity of the integrand function F is used in a sufficient condition. While concavity/convexity does not per se require differentiability, it is true that if the F function does possess continuous second derivatives, then its concavity/convexity can be checked by means of the sign semidefiniteness of the second-order total differential of F. So the quadratic-form test definitely has a role to play in the calculus of variations.

and  dy'/deps = p'(t)    [by (2.3)]

Thus, we have


4.2 THE CONCAVITY/CONVEXITY SUFFICIENT CONDITION

A Sufficiency Theorem for Fixed-Endpoint Problems
Just as a concave (convex) objective function in a static optimization problem is sufficient to identify an extremum as an absolute maximum (minimum), a similar sufficiency theorem holds in the calculus of variations:

(4.3)  For the fixed-endpoint problem (2.1), if the integrand function F(t, y, y') is concave in the variables (y, y'), then the Euler equation is sufficient for an absolute maximum of V[y]. Similarly, if F(t, y, y') is convex in (y, y'), then the Euler equation is sufficient for an absolute minimum of V[y].

It should be pointed out that concavity/convexity in (y, y') means concavity/convexity in the two variables y and y' jointly, not in each variable separately. We shall give here the proof of this theorem for the concave case.1 Central to the proof is the defining property of a differentiable concave function.


(4.2)  d2V/deps2 = integral from 0 to T of [ p(t) (d/deps)F_y + p'(t) (d/deps)F_y' ] dt    [by Leibniz's rule]

In view of the fact that

(d/deps)F_y = F_yy (dy/deps) + F_yy' (dy'/deps) = F_yy p(t) + F_yy' p'(t)    [by (4.1)]

and, similarly,

(d/deps)F_y' = F_y'y p(t) + F_y'y' p'(t)

the second derivative (4.2) emerges (after simplification) as

(4.2')  d2V/deps2 = integral from 0 to T of [ F_yy p^2(t) + 2F_yy' p(t)p'(t) + F_y'y' p'^2(t) ] dt
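Formula (4.2') can be verified numerically. The sketch below (our own example) uses F = y^2 + yy' + y'^2, for which F_yy = 2, F_yy' = 1, and F_y'y' = 2, and compares the quadratic-form integral against a finite-difference second derivative of V(eps):

```python
# Numerical check of formula (4.2') on a toy example of our own:
# F(t, y, y') = y^2 + y*y' + y'^2, so F_yy = 2, F_yy' = 1, F_y'y' = 2.
# Base path y*(t) = t on [0, 1]; perturbing curve p(t) = t(1 - t).

n = 4000
ts = [i / n for i in range(n + 1)]

def trap(vals):                                  # trapezoid rule on the grid ts
    return (sum(vals) - 0.5 * (vals[0] + vals[-1])) / n

p  = lambda t: t * (1 - t)
pp = lambda t: 1 - 2 * t
y, yp = (lambda t: t), (lambda t: 1.0)

def V(eps):
    return trap([(y(t) + eps * p(t))**2
                 + (y(t) + eps * p(t)) * (yp(t) + eps * pp(t))
                 + (yp(t) + eps * pp(t))**2 for t in ts])

h = 1e-3
fd = (V(h) - 2 * V(0) + V(-h)) / h**2            # finite-difference d2V/deps2
qf = trap([2 * p(t)**2 + 2 * p(t) * pp(t) + 2 * pp(t)**2 for t in ts])  # (4.2')
print(fd, qf)   # both approximately 11/15
```

Because this F is quadratic, (4.2') holds exactly, and both numbers agree with the analytic value 2/30 + 0 + 2/3 = 11/15.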

The Quadratic-Form Test
The second derivative in (4.2') is a definite integral with a quadratic form in p(t) and p'(t) as its integrand. Since t spans the interval [0, T], we have, of course, not one, but an infinite number of quadratic forms in the integral, one for each value of t. The sign of d2V/deps2 therefore hinges on the signs of these quadratic forms taken together.

1 This proof is due to Akira Takayama. See his Mathematical Economics, 2d ed., Cambridge University Press, Cambridge, 1985, pp. 429-430.




The function F(t, y, y') is concave in (y, y') if, and only if, for any pair of distinct points (t, y*, y*') and (t, y, y') in the domain, we have

(4.4)  F(t, y, y') - F(t, y*, y*') <= F_y(t, y*, y*')(y - y*) + F_y'(t, y*, y*')(y' - y*')

Here y*(t) denotes the optimal path, and y(t) denotes any other path. By integrating both sides of (4.4) with respect to t over the interval [0, T], we obtain

(4.5)  V[y] - V[y*] <= integral from 0 to T of [ F_y(t, y*, y*')(y - y*) + F_y'(t, y*, y*')(y' - y*') ] dt
       = integral from 0 to T of (y - y*)[ F_y(t, y*, y*') - (d/dt)F_y'(t, y*, y*') ] dt
         [integration of the F_y'(t, y*, y*')(y' - y*') term by parts, as in (2.16)]
       = 0    [since y*(t) satisfies the Euler equation (2.18)]

In other words, V[y] <= V[y*], where y(t) can refer to any other path. We have thus identified y*(t) as a V-maximizing path, and at the same time demonstrated that the Euler equation is a sufficient condition, given the assumption of a concave F function. The opposite case of a convex F function for minimizing V can be proved analogously. Note that if the F function is strictly concave in (y, y'), then the weak inequality <= in (4.4) and (4.5) will become the strict inequality <. The result, V[y] < V[y*], will then establish V[y*] to be a unique absolute maximum of V. By the same token, a strictly convex F will make V[y*] a unique absolute minimum.
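The defining inequality (4.4) can be spot-checked by sampling. The sketch below (our own example) uses the concave integrand F = -(y^2 + y'^2), whose partial derivatives are F_y = -2y and F_y' = -2y':

```python
import random

# Spot-check of the defining property (4.4) for a concave integrand (our own
# example): F(t, y, y') = -(y^2 + y'^2), with F_y = -2y and F_y' = -2y'.
random.seed(0)
F = lambda y, v: -(y**2 + v**2)

for _ in range(1000):
    ys, vs = random.uniform(-5, 5), random.uniform(-5, 5)   # (y*, y*')
    y,  v  = random.uniform(-5, 5), random.uniform(-5, 5)   # any other point
    lhs = F(y, v) - F(ys, vs)
    rhs = -2 * ys * (y - ys) - 2 * vs * (v - vs)
    assert lhs <= rhs + 1e-12
print("concavity inequality (4.4) holds at all sampled points")
```

For this F the gap rhs - lhs equals (y - y*)^2 + (y' - y*')^2, which is why the inequality can never fail.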

where F_y' is to be evaluated along the optimal path, and (y - y*) represents the deviation of any admissible neighboring path y(t) from the optimal path y*(t). If the last term in the last inequality is zero, then obviously the original conclusion, that V[y*] is an absolute maximum, still stands. Moreover, if the said term is negative, we can again be sure that V[y*] constitutes an absolute maximum. It is only when [F_y'(y - y*)]_{t=T} is positive that we are thrown into doubt. In short, the concavity condition on F(t, y, y') in (4.3) only needs to be supplemented in the present case by a nonpositivity condition on the expression [F_y'(y - y*)]_{t=T}. But this supplementary condition is automatically met when the transversality condition is satisfied for the vertical-terminal-line problem, namely, [F_y']_{t=T} = 0. As for the truncated case, the transversality condition calls for either [F_y']_{t=T} = 0 (when the minimum acceptable terminal value y_min is nonbinding), or y*_T = y_min (when that terminal value is binding, thereby in effect turning the problem into one with a fixed terminal point). Either way, the supplementary condition is met. Thus, if the integrand function F is concave (convex) in the variables (y, y') in a problem with a vertical terminal line or a truncated vertical terminal line, then the Euler equation plus the transversality condition are sufficient for an absolute maximum (minimum) of V[y].

Checking Concavity/Convexity
For any function f, whether differentiable or not, the concavity feature can be checked via the basic defining property that, for any two distinct points u and v in the domain, and for 0 < theta < 1, f is concave if and only if

theta f(u) + (1 - theta) f(v) <= f(theta u + (1 - theta) v)

Generalization to Variable Terminal Point
The proof of the sufficiency theorem (4.3) is based on the assumption of fixed endpoints. But it can easily be generalized to problems with a vertical terminal line or a truncated vertical terminal line. To show this, first recall that the integration-by-parts process in (2.16) originally produced an extra term [F_y' p(t)] evaluated between 0 and T, which later drops out because it reduces to zero. The reason is that, with fixed endpoints, the perturbing curve p(t) is characterized by p(0) = p(T) = 0. When we switch to the problem with a variable terminal point, with T fixed but y(T) free, p(T) is no longer required to be zero. For this reason, we must admit an extra term on the right-hand side of the second and the third lines of (4.5); upon rearrangement, (4.5) then carries the additional term [F_y'(y - y*)]_{t=T}

For convexity, the inequality <= in the defining property is reversed to >=. Checking this property can, however, be very involved and tedious. For our purposes, since F(t, y, y') is already assumed to have continuous second derivatives, we can take the simpler alternative of checking the sign definiteness or semidefiniteness of the quadratic form

(4.6)  q = F_yy dy^2 + 2F_yy' dy dy' + F_y'y' dy'^2

This quadratic form can, of course, be equivalently rewritten as

(4.6')  q = F_y'y' dy'^2 + 2F_y'y dy' dy + F_yy dy^2

which would fit in better with the ensuing discussion of the Legendre condition.

Once the sign of the quadratic form q is ascertained, inferences can readily be drawn regarding concavity/convexity as follows: the F(t, y, y') function is concave (convex) in (y, y') if, and only if, the quadratic form q is everywhere negative (positive) semidefinite; and the F function is strictly concave (strictly convex) if (but not only if) q is everywhere negative (positive) definite. It should be noted that concavity/convexity is a global concept, meaning that the sign semidefiniteness of q should hold for all points in the domain (in the yy' space) and for all t. This is why it is related to absolute extrema. It is also a stronger requirement than the earlier-mentioned quadratic-form criterion relating to (4.2'), because in the latter the second derivatives of F are to be evaluated on the extremal only. From another point of view, however, the concavity/convexity sufficient condition is less strong, because it calls only for sign semidefiniteness, whereas the quadratic-form criterion requires a definite sign.

The sign definiteness and semidefiniteness of a quadratic form can be checked with determinantal and characteristic-root tests. Since these tests are applicable not only in the present context of the calculus of variations, but also in that of optimal control theory later, it may prove worthwhile to present here a self-contained explanation of these tests.

The Determinantal Test for Sign Definiteness
The determinantal test for the sign definiteness of the quadratic form q is the simplest to use. We first write the discriminant of q,

(4.7)  |D| = | F_y'y'  F_y'y |
             | F_yy'   F_yy  |

whose principal minors are

(4.8)  |D1| = F_y'y'    and    |D2| = |D|

Then the test is

(4.9)  Negative definiteness of q  <=>  |D1| < 0, |D2| > 0
       Positive definiteness of q  <=>  |D1| > 0, |D2| > 0
       (everywhere in the domain)

Though easy to apply, this test may be overly stringent for our purposes, since it is intended for strict concavity/convexity, whereas the sufficient condition (4.3) merely stipulates weak concavity/convexity, for which sign semidefiniteness suffices.

The Determinantal Test for Sign Semidefiniteness
The determinantal test for the sign semidefiniteness of q requires a larger number of determinants to be examined, because under this test we must consider all the possible ordering sequences in which the variables of the quadratic form can be arranged. For the present two-variable case, there are two possible ordering sequences for the two variables, (y', y) and (y, y'). Therefore, we need to consider only one additional discriminant besides (4.7), namely, the one with F_yy and F_y'y' interchanged on the principal diagonal. Let us refer to the two 1 x 1 principal minors (F_y'y' and F_yy) collectively as |D1bar|, and the 2 x 2 principal minors collectively as |D2bar|. Further, let us use the notation |Dibar| >= 0 to mean that each member of |Dibar| is >= 0. Then the test for sign semidefiniteness is as follows:

(4.12)  Negative semidefiniteness of q  <=>  |D1bar| <= 0, |D2bar| >= 0
        Positive semidefiniteness of q  <=>  |D1bar| >= 0, |D2bar| >= 0
        (everywhere in the domain)

The Characteristic-Root Test
The alternative to the use of determinants is to apply the characteristic-root test, which can reveal sign semidefiniteness as well as sign definiteness all at once. To use this test, we first write the characteristic equation

|D - rI| = 0

and then solve for the characteristic roots r1 and r2.
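Both tests can be mechanized for the 2 x 2 case. The routine below (our own sketch; the sample matrices are hypothetical) classifies q from the principal minors and also computes the characteristic roots directly from the trace and determinant:

```python
# Both tests for a 2x2 symmetric discriminant [[a, b], [b, d]]
# (sample values are our own).  Determinantal test: leading/principal minors.
# Characteristic-root test: signs of the roots of r^2 - (a+d)r + (ad - b^2) = 0.

def classify(a, b, d):
    """Classify q = a*x^2 + 2b*x*y + d*y^2 via principal minors."""
    det = a * d - b * b
    if a > 0 and det > 0:
        return "positive definite"
    if a < 0 and det > 0:
        return "negative definite"
    if det < 0:
        return "indefinite"
    # semidefinite cases: check BOTH 1x1 principal minors (both orderings)
    if a >= 0 and d >= 0:
        return "positive semidefinite"
    if a <= 0 and d <= 0:
        return "negative semidefinite"
    return "indefinite"

def roots(a, b, d):
    """Characteristic roots, from trace and determinant."""
    tr, det = a + d, a * d - b * b
    disc = (tr * tr - 4 * det) ** 0.5
    return (tr + disc) / 2, (tr - disc) / 2

print(classify(2, 4, 8), roots(2, 4, 8))     # det = 0: semidefinite, roots 10 and 0
print(classify(3, 1, 2), roots(3, 1, 2))
```

Note how the two approaches agree: a zero determinant forces a zero root, which is exactly the borderline between definiteness and semidefiniteness.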

The sign of q can then be inferred from the signs of these roots as follows:

(4.13)  Negative definiteness of q  <=>  r1, r2 < 0
        Positive definiteness of q  <=>  r1, r2 > 0
(4.14)  Negative semidefiniteness of q  <=>  r1, r2 <= 0  (at least one root = 0)
        Positive semidefiniteness of q  <=>  r1, r2 >= 0  (at least one root = 0)

In solving for the roots, it is useful to remember that they are tied to the determinant |D2| in (4.8) by the following two relations:2

(4.15)  r1 + r2 = sum of principal-diagonal elements = F_y'y' + F_yy
(4.16)  r1 r2 = |D2|

For one thing, these two relations provide a means of double-checking the calculation of r1 and r2. More importantly, they allow us to infer that, if |D2| < 0, then r1 and r2 must be opposite in sign, so that q must be indefinite in sign. Thus, once |D2| is found to be negative, there is no need to go through the steps in (4.13) and (4.14).

EXAMPLE 1 The shortest-distance problem (Example 4 of Sec. 2.2) has the integrand function F = (1 + y'^2)^{1/2}. Since there is only one variable, y', in this function, it is easy to check the second-order condition. The first derivative of F is, by the chain rule,

F_y' = y'(1 + y'^2)^{-1/2}

In writing this, bear in mind that the square root in the F expression is a positive square root, because it represents a distance. Further differentiation yields

F_y'y' = -(1 + y'^2)^{-3/2} y'^2 + (1 + y'^2)^{-1/2} = (1 + y'^2)^{-3/2} [-y'^2 + (1 + y'^2)] = (1 + y'^2)^{-3/2} > 0

The positive sign for F_y'y', for all y' values and for all t, means that F is strictly convex in the only variable y' everywhere. It follows that the F function is convex, and the Euler equation is indeed sufficient for the minimization of V[y]. We can thus conclude that the extremal found from the Euler equation yields a unique minimum distance between the two given points.

EXAMPLE 2 Is the Euler equation sufficient for maximization or minimization if the integrand of the objective functional is F(t, y, y') = 4y^2 + 4yy' + y'^2? To check the concavity/convexity of F, we first find the first derivatives

F_y = 8y + 4y'    and    F_y' = 4y + 2y'

so that the required second derivatives are

F_yy = 8    F_yy' = 4    F_y'y' = 2

To illustrate the characteristic-root test, we write the characteristic equation

| 2 - r    4     |
| 4        8 - r | = 0    or    r^2 - 10r = 0

Its two roots are r1 = 10 and r2 = 0. For double-checking: r1 + r2 = 10 = F_y'y' + F_yy, and r1 r2 = 0 = |D2|, confirming (4.15) and (4.16). Since the roots are unconditionally nonnegative, with at least one root equal to zero, q is everywhere positive semidefinite according to (4.14). It follows that the F function is convex, and the Euler equation is sufficient for an absolute minimum of V[y].

EXAMPLE 3 Let us now check whether the Euler equation is sufficient for profit maximization in the dynamic monopoly problem of Sec. 2.4. The profit function (2.33) gives rise to the second derivatives

pi_PP = -2ah^2    pi_PP' = h(1 + 2ab)    pi_P'P' = -2b(1 + ab)

Using the determinantal test (4.9), we find that

|D1| = pi_P'P' = -2b(1 + ab) < 0

but

|D2| = pi_P'P' pi_PP - (pi_PP')^2 = 4abh^2(1 + ab) - h^2(1 + 2ab)^2 = -h^2 < 0

Thus, even though |D1| is negative as required for negative definiteness, |D2| is also negative. Since |D2| < 0, the characteristic roots r1 and r2 must have opposite signs, so that q is indefinite in sign. The profit function is not concave, and the Euler equation is not sufficient for profit maximization.

2 See the discussion relating to local stability analysis in Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984.
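The indefiniteness found in Example 3 can be confirmed numerically: with the second derivatives quoted above, |D2| works out to -h^2 exactly, for any parameter values (the sample values below are our own):

```python
# The second derivatives quoted in Example 3 imply |D2| = -h^2 < 0 for any
# nonzero h, so q is indefinite regardless of the (positive) parameters a, b, h.
# The sample parameter triples below are our own.

def D2(a, b, h):
    pi_PP   = -2 * a * h**2
    pi_PPp  = h * (1 + 2 * a * b)
    pi_PpPp = -2 * b * (1 + a * b)
    return pi_PpPp * pi_PP - pi_PPp**2

for a, b, h in [(1.0, 2.0, 0.5), (0.3, 0.7, 2.0), (5.0, 0.1, 1.0)]:
    assert abs(D2(a, b, h) + h**2) < 1e-9    # |D2| = -h^2 exactly
print("|D2| = -h^2 < 0: roots of opposite sign, q indefinite")
```

By (4.16), a negative |D2| forces r1 and r2 to have opposite signs, which is the quickest route to the indefiniteness conclusion.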

Extension to n-Variable Problems
When the problem contains n state variables, the concavity/convexity sufficiency condition is still applicable; in that case, the F function must be concave/convex (as the case may be) in all the n state variables and their derivatives jointly. Even with only two state variables, say y and z, concavity/convexity must now be checked with regard to as many as four variables, (y, z, y', z'). For notational convenience, let us number these four variables 1, 2, 3, and 4, as the first, second, third, and fourth, respectively. Then we can arrange the second-order derivatives of F into the following 4 x 4 discriminant:

(4.17)  |D| = | F11 F12 F13 F14 |
              | F21 F22 F23 F24 |
              | F31 F32 F33 F34 |
              | F41 F42 F43 F44 |

With regard to the determinantal test for sign semidefiniteness, however, the procedure for the n-variable case becomes more complex, because the number of principal minors generated by different ordering sequences of the variables will quickly multiply. One question naturally arises: how does one ascertain the number of distinct values that can arise from principal minors of various dimensions? The answer is that we can just apply the formula for the number of combinations of n objects taken r at a time:

(4.21)  C(n, r) = n! / [r!(n - r)!]

Since each of the four variables can be taken as the "first" variable in a different ordering sequence, there exist four possible 1 x 1 principal minors; we shall denote these collectively by |D1bar|. The total number of possible 2 x 2 principal minors is 12, but only six distinct values can emerge from them, since C(4, 2) = 6. The reason why the other six can be ignored is that the principal minor with (Fii, Fjj) in the diagonal is always equal in value to the one with the same two elements arranged in the diagonal in reverse order. These six values can be calculated from six principal minors, collectively |D2bar|. As to 3 x 3 principal minors, the total number available is 24, but only four distinct values can arise, since C(4, 3) = 4; they can be calculated from four principal minors, collectively |D3bar|. Finally, for 4 x 4 principal minors, we need to consider only one, since C(4, 4) = 1; it is identical in value with |D| in (4.17), and we denote it by |D4bar|.

For the characteristic-root test, the n-variable case would mean a higher-degree polynomial equation to solve, with characteristic roots r1, ..., rn. Once the roots are found, we need only subject all of them to sign restrictions similar to those in (4.13) and (4.14). The technique of solving for the roots is well known.3

3 See, for example, Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984.
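The counting argument can be checked with a few lines of code. The sketch below (our own; the symmetric array is a hypothetical example) enumerates the index sets of principal minors of each order with itertools.combinations, confirming the counts C(4, r) = 4, 6, 4, 1:

```python
from itertools import combinations

# Counting distinct principal minors for the 4-variable case via C(4, r), and
# computing them for a sample symmetric 4x4 array (values our own).

M = [[4, 1, 0, 1],
     [1, 3, 1, 0],
     [0, 1, 5, 2],
     [1, 0, 2, 6]]

def det(A):                      # cofactor expansion, fine for n <= 4
    if len(A) == 1:
        return A[0][0]
    return sum((-1)**j * A[0][j] * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

for r in range(1, 5):
    subs = list(combinations(range(4), r))
    minors = [det([[M[i][j] for j in s] for i in s]) for s in subs]
    print(r, len(subs), minors)  # C(4, r) = 4, 6, 4, 1 distinct principal minors
```

Each index subset picks out one principal submatrix; reorderings of the same subset would give the same determinant, which is exactly why only C(4, r) distinct values arise.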

23). 1 of Exercise 2. The Rationale To understand the rationale behind (4. Though not as powerful as a sufficient condition.18). unfortunately. dt and du = 2 p ( t ) p f ( t )d t .. for 4 x 4 principal minors. Let v = Fyyp u = p2(t).V.I 0 to mean 2 that each member of ID. 1 0 Minimization of V [ y] = . ( b ) What specifically can you conclude from that fact? For 2 x 2 principal minors.23) 1 ~ 1 2 1 =s (4.[ is 2 0. 1 above on Prob. EXERCISE 4. 8 Perform the tests mentioned in Prob. a unique n&w h u m The concavity feature described in (4. The Legendre Condition Once all the principal minors have been calculated. we have 4. check for concavity/convexity by the determinantal test (4. 1 above on Prob./5 0.2 1 For Prob. let us first transform the middle term in the integrand of (4.9). SO that and dv dFyy' = -d t (c) Is the sufficient condition for a maximum/minimum satisfied? 2 Perform the tests mentioned in Prob. T ] 1 ~ 2 ~0. the test for sign semidefiniteness is similar to (4.12). In this section. 1B41 0 Fycyr0 5 Fytyt 0 2 for all t E [O. When that feature is present in the integrand function F. the mathematician. as may often happen .22) If'. derivative is to be evaluated along the extremal...T ] for all t E [0. 3 of Exercise 2.p 90 PART 2: THE CALCULUS OF VARIATIONS ' CHAPTER 4 SECOND-ORDER CONDITIONS 91 For (4. 1B41 1 SB312 0. But. 1 ~ 2 ~0.. > 0 for a minimum of . thus the number of combinations is 4 ( a ) Show that. 2. ( b ) If that test fails.4) is a global concept.2: ( a ) Check whether the F function is strictly concave/convex in ( y . it is very useful and indeed is used frequently. 5 of Exercise 2. ID. the integrand function is strictly convex. Using the notation IB. the extremal is guaranteed to maximize V [ y ] But when F is not globally concave. thought at one time that he had discovered a remarkably neat sufficient condition: F./ 5 0.. for it involves nothing but the sign of FytYr. the formula yielcls Finally. 
(everywhere in the domain) [Legendre necessary cond@bn] The FyFy..12) or the characteristic-roottest. Positive semidefiniteness of 0.2.3 THE LEGENDRE NECESSARY CONDITION A d for 3 x 3 ppirxipd minors. The same is true for convexity. Legendre. < 0 for a maximum of V. we have four variables taken one a t a time. which is based on local concavity/convexity. (such as in the dynamic monopoly problem). the weak-inequality version of this condition is indeed correct as a necessary condition: Maximization of V [ y] (4.5.2. we introduce a second-order necessary condition known as the Legendre condition.we have the following: Negative semidefiniteness of q The great merit of the Legendre condition lies in its extreme simplicity. in the inflation-unemploymenttradeoff model of Sec.2') into a squared term by integration by parts. he was mistaken.y') by the determinantal test (4. and F. we would have to settle for some weaker conditions. Nonetheless.

Recall that, in deriving the second-order condition, we have made the assumption that p(0) = p(T) = 0 on the perturbing curve. This assumption allows us to transform the middle term of the quadratic expression (4.2') by integration by parts:

(4.24)  int_0^T 2 F_yy' p p' dt = [F_yy' p^2]_0^T - int_0^T p^2 (d/dt)F_yy' dt
                                = - int_0^T p^2 (d/dt)F_yy' dt    [p(0) = p(T) = 0 assumed]

where all the limits of integration refer to values of t. Plugging this into (4.2'), we find that the integrand now consists of two squared terms, p^2(t) and p'^2(t). It turns out that the p'^2(t) term will dominate the other one over the interval [0, T], and thus the nonpositivity (nonnegativity) of its coefficient F_y'y' is necessary for the nonpositivity (nonnegativity) of d^2V/de^2, that is, for the maximization (minimization) of V. This is the reasoning behind the result in (4.23).

That the p'^2(t) term will dominate can be given an intuitive explanation with the help of the illustrative perturbing curves in Fig. 4.1. Figure 4.1a shows that p'(t), the slope of p(t), can take large absolute values even if p(t) maintains small values throughout. Figure 4.1b shows, conversely, that in order to attain large p(t) values, p'(t) would in general have to take large absolute values. These considerations suggest that the p'^2(t) term tends to dominate.

What will happen if the terminal point is variable and p(T) is not required to be zero? The answer is that the zero term on the last line of (4.24) will then be replaced by the new expression [F_yy' p^2]_(t=T). But even then, it is possible for some admissible neighboring path to have its terminal point located exactly where y*(T) is, reducing that new expression to zero and thereby duplicating the result of p(T) = 0. Hence, the Legendre necessary condition is valid whether the terminal point is fixed or variable.

EXAMPLE 1  In the shortest-distance problem (Example 1 of Sec. 2.2), we have F_y'y' = (1 + y'^2)^(-3/2). This expression is unconditionally positive for all t, so the Legendre necessary condition for a minimum is satisfied. This is only to be expected, since that problem has earlier been shown to satisfy the second-order sufficient condition for minimization.

EXAMPLE 2  In Example 3 of Sec. 2.2, we seek to find a curve between two fixed points that will generate the smallest surface of revolution. Does the extremal found there indeed minimize the surface of revolution? For this problem, after differentiating and simplifying, we obtain F_y'y' = y/(1 + y'^2)^(3/2), so the Legendre condition hinges on the sign of y along the extremal. To make certain that y(t) is indeed positive for all t, we see from the general solution of the problem

    y*(t) = (c/2)[e^((t+k)/c) + e^(-(t+k)/c)]

that y*(t) takes the sign of the constant c, because the two terms in parentheses are always positive. This means that y*(t) can only have a single sign throughout. Given that the endpoints A and Z show positive y values, y*(t), represented by the height of the AZ curve, must be positive for all t in the interval [a, z]. Hence F_y'y' > 0 along the entire path, and the Legendre necessary condition for a minimum is satisfied. Unlike the shortest-distance problem, however, the minimum-surface-of-revolution problem does not satisfy the concavity/convexity sufficient condition, so the Legendre condition is still needed here.
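The F_y'y' computations in the two examples above can be double-checked symbolically. A minimal sketch using the sympy library (the symbol and variable names here are my own, not the book's):

```python
import sympy as sp

y, yp = sp.symbols("y y_prime")

# Example 1: shortest-distance problem, F = sqrt(1 + y'^2)
F1 = sp.sqrt(1 + yp**2)
F1_ypyp = sp.simplify(sp.diff(F1, yp, 2))   # expect (1 + y'^2)^(-3/2), always positive

# Example 2: minimum surface of revolution, F = y * sqrt(1 + y'^2)
F2 = y * sp.sqrt(1 + yp**2)
F2_ypyp = sp.simplify(sp.diff(F2, yp, 2))   # expect y * (1 + y'^2)^(-3/2)

print(F1_ypyp)
print(F2_ypyp)
```

The second result confirms that the sign of F_y'y' in Example 2 hinges entirely on the sign of y along the extremal.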

EXAMPLE 3  For the dynamic monopoly problem, we have found in the preceding section that the integrand function pi(P, P') is not concave, so it does not satisfy the sufficient condition. Its second derivative with respect to P', however, is negative for all t, so the Legendre necessary condition for a maximum is indeed satisfied.

EXAMPLE 4  Consider a two-variable problem in which two state variables appear in the integrand function F = y'^2 + z'^2 + y'z'. Since F_y' = 2y' + z' and F_z' = 2z' + y', we find that

    F_y'y' = 2        F_y'z' = F_z'y' = 1        F_z'z' = 2

Substitution of these into (4.26) below results in the matrix

    | 2  1 |
    | 1  2 |

In this case, the 1 x 1 principal minors are both equal to 2 > 0, and the 2 x 2 principal minor is 4 - 1 = 3 > 0. Thus the Legendre condition for a minimum is satisfied.

The n-Variable Case

The Legendre necessary condition can, with proper modification, also be applied to the problem with n state variables (y_1, ..., y_n). But instead of checking the sign of F_y'y', we must now check whether the n x n matrix

(4.26)  [F_y_i'y_j']        (i, j = 1, ..., n)

whose elements are the second-order derivatives with respect to the y_i' variables, evaluated along the extremals, is negative semidefinite (for maximization of V) or positive semidefinite (for minimization of V). The Legendre second-order necessary condition is

(4.27)  Maximization of V  =>  [F_y_i'y_j'] negative semidefinite for all t in [0, T]
        Minimization of V  =>  [F_y_i'y_j'] positive semidefinite for all t in [0, T]
        [Legendre necessary condition]

It is clear that (4.27) includes (4.23) as a special case.

Note that the sign-semidefiniteness test in (4.27), though apparently similar to the determinantal test for concavity/convexity in (4.12), differs from the latter in two essential respects. First, all the principal minors, not merely the leading ones, must be checked: first write all the 1 x 1 principal minors |d_1|, then all the 2 x 2 principal minors |d_2|, and so forth. Positive semidefiniteness requires |d_1| >= 0, |d_2| >= 0, ..., |d_n| >= 0, whereas negative semidefiniteness requires |d_1| <= 0, |d_2| >= 0, ..., (-1)^n |d_n| >= 0. This is why we employ the symbol |d| in connection with (4.27), so as to keep it distinct from the |D| symbol used earlier in (4.12). Second, whereas concavity/convexity is a global characterization involving derivatives of the F_yy and F_yy' types as well as F_y'y', the Legendre condition is local in nature: the derivatives in (4.26) are exclusively of the F_y'y' type, and they are to be evaluated along the extremals only.

EXERCISE 4.3
1  Check whether the extremals obtained in Probs. 6, 8, and 9 of Exercise 2.2 satisfy the Legendre necessary condition for a maximum/minimum.
2  Check whether the extremal obtained in Prob. 3 of Exercise 2.1 satisfies the Legendre necessary condition for a maximum/minimum.
3  Does the inflation-unemployment problem of Sec. 2.5 satisfy the Legendre condition for minimization of total social loss?

4.4 FIRST AND SECOND VARIATIONS

Our discussion of first-order and second-order conditions has hitherto been based on the concepts of the first derivative dV/de and the second derivative d^2V/de^2. An alternative way of viewing the problem centers around the concepts of first variation and second variation, which are directly tied to the name "calculus of variations" itself. We present it here for the sake of completeness. The calculus of variations involves a comparison of path values V[y] and V[y*]. The deviation of V[y] from V[y*] is

(4.28)  V[y] - V[y*] = int_0^T F(t, y, y') dt - int_0^T F(t, y*, y*') dt
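The all-principal-minors test in (4.27) is mechanical enough to automate. Below is a small numeric sketch (the helper name `principal_minors` is my own), applied to the matrix of Example 4:

```python
import numpy as np
from itertools import combinations

def principal_minors(A):
    """All principal minors of a square matrix, grouped by order k."""
    n = A.shape[0]
    return {k: [float(np.linalg.det(A[np.ix_(idx, idx)]))
                for idx in combinations(range(n), k)]
            for k in range(1, n + 1)}

# Example 4: F = y'^2 + z'^2 + y'z'  gives the matrix of F_y'y' derivatives
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
minors = principal_minors(A)
# positive semidefiniteness: every principal minor of every order is >= 0
# (for negative semidefiniteness one would require (-1)**k * minor >= 0 instead)
pos_semidef = all(m >= 0 for k in minors for m in minors[k])
print(minors, pos_semidef)
```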

When the integrand of the first integral in (4.28) is expanded into a Taylor series around the point (t, y*, y*'), we get

(4.29)  F(t, y, y') = F(t, y*, y*')
                      + [F_y (y - y*) + F_y' (y' - y*')]
                      + (1/2)[F_yy (y - y*)^2 + 2 F_yy' (y - y*)(y' - y*') + F_y'y' (y' - y*')^2]
                      + R        (higher-order terms)

where all the partial derivatives of F are to be evaluated at (t, y*, y*'). Had t itself been perturbed, the expansion would also have contained terms such as F_t (t - t) and F_tt (t - t)^2; since t is not varied, these would all drop out. Recalling that y - y* = ep and y' - y*' = ep', we can substitute these into (4.29) to get

(4.29')  F(t, y, y') = F(t, y*, y*') + [F_y (ep) + F_y' (ep')]
                       + (1/2)[F_yy (ep)^2 + 2 F_yy' (ep)(ep') + F_y'y' (ep')^2] + R

Using this result in (4.28), which allows us to cancel out the second integral in (4.28), we can then transform the latter into

(4.30)  V[y] - V[y*] = e int_0^T (F_y p + F_y' p') dt
                       + (e^2/2) int_0^T (F_yy p^2 + 2 F_yy' pp' + F_y'y' p'^2) dt
                       + higher-order terms

In the calculus-of-variations literature, the first integral in (4.30) is referred to as the first variation, denoted by dV; the second integral is known as the second variation, denoted by d2V.

In a maximization problem, where DV <= 0 is required, it is clear from (4.30) that it is necessary to have dV = 0, since e can take either sign at every point of time. This is equivalent to the first-order condition dV/de = 0, which in our earlier discussion has led to the Euler equation. Once the dV = 0 condition is met, it is further necessary to have d2V <= 0, since the coefficient e^2/2 is nonnegative; this condition is equivalent to d^2V/de^2 <= 0, the second-order necessary condition of Legendre. Analogous reasoning would show that, for a minimization problem, it is necessary to have dV = 0 and d2V >= 0.

In sum, the first-order and second-order necessary conditions can be derived from either the derivative route or the variation route. We have chosen to follow the derivative route because it provides more of an intuitive grasp of the underlying reasoning process.
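The vanishing of the first variation and the positivity of the second variation can also be exhibited numerically. The sketch below uses the shortest-distance problem, with extremal y*(t) = t on [0, 1] and perturbing curve p(t) = sin(pi t), both choices being mine for illustration:

```python
import numpy as np

# F = sqrt(1 + y'^2), extremal y*(t) = t, perturbing curve p(t) = sin(pi t),
# so that p(0) = p(1) = 0 as required of an admissible perturbation.
t = np.linspace(0.0, 1.0, 20001)

def V(eps):
    yprime = 1.0 + eps * np.pi * np.cos(np.pi * t)          # (y* + eps p)'
    integrand = np.sqrt(1.0 + yprime ** 2)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

h = 1e-4
dV = (V(h) - V(-h)) / (2.0 * h)                 # ~ dV/de at e = 0
d2V = (V(h) - 2.0 * V(0.0) + V(-h)) / h ** 2    # ~ d^2V/de^2 at e = 0
print(dV, d2V)
```

The first difference quotient is numerically zero (the extremal satisfies the Euler equation), while the second is positive, consistent with the Legendre condition for a minimum.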

CHAPTER 5
INFINITE PLANNING HORIZON

For an individual, it is generally adequate to plan for a finite time interval [0, T], for even the most far-sighted person probably would not plan too far beyond his or her expected lifetime. But for society as a whole, or even for a corporation, there may be good reasons to expect or assume its existence to be permanent. It may therefore be desirable to extend the planning horizon indefinitely into the future, and change the interval of integration in the objective functional from [0, T] to [0, infinity). Such an extension of horizon has the advantage of rendering the optimization framework more comprehensive. Unfortunately, it also has the offsetting drawback of compromising the plausibility of the often-made assumption that all parameter values in a model will remain constant throughout the planning period. More importantly, the infinite planning horizon entails some methodological complications.

5.1 METHODOLOGICAL ISSUES OF INFINITE HORIZON

With an infinite horizon, at least two major methodological issues must be addressed. One has to do with the convergence of the objective functional, and the other concerns the matter of transversality conditions.

The Convergence of the Objective Functional

The convergence problem arises because the objective functional, now in the form of int_0^inf F(t, y, y') dt, is an improper integral which may or may not have a finite value. In the case where the integral diverges, there may exist more than one y(t) path that yields an infinite value for the objective functional, and it would be difficult to determine which among these paths is optimal.¹ It is true that, even in the divergent situation, various ways have been proposed to make an appropriate selection of a time path from among all the paths with an infinite integral value.² But that topic is complicated and, besides, it does not pertain to the calculus of variations as such, so we shall not delve into it here. Instead, as a preliminary to the discussion of economic applications in the two ensuing sections, we shall make some comments on certain conditions that are (or allegedly are) sufficient for convergence.

¹For a detailed discussion of the problem of convergence, see S. Chakravarty, "The Existence of an Optimum Savings Program," Econometrica, January 1962, pp. 178-187.
²The various optimality criteria in the divergent case are concisely summarized in Atle Seierstad and Knut Sydsaeter, Optimal Control Theory with Economic Applications, Elsevier, New York, 1987, pp. 231-237.

Condition I  Given the improper integral int_0^inf F(t, y, y') dt, if the integrand F is finite throughout the interval of integration, and if F attains a zero value at some finite point of time t_0 and remains at zero for all t > t_0, then the integral will converge. This is in the nature of a sufficient condition. Although the integral nominally has an infinite horizon, the effective upper limit of integration is the finite value t_0, so the given improper integral reduces in effect to a proper one.

Condition II (False)  Given the improper integral int_0^inf F(t, y, y') dt, if F -> 0 as t -> inf, then the integral will converge. This condition is often taken to be a sufficient condition, but it is not. To see this, consider the two integrals

    I_1 = int_0^inf 1/(1 + t) dt        I_2 = int_0^inf 1/(1 + t)^2 dt

Each of these has an integrand that tends to zero as t -> inf. But while I_2 converges, I_1 does not:

    I_1 = lim (b -> inf) [ln(t + 1)] from 0 to b = inf

What accounts for this sharp difference in results? The answer lies in the speed at which the integrand falls toward zero. In the case of I_1, the speed of decline is not great enough, and the integral diverges. In the case of I_2, where the denominator of the integrand is a squared term, the fraction falls with a sufficiently high speed as t takes increasingly larger values, resulting in convergence. More formally, for an integral int_0^inf F(t) dt whose integrand maintains a single sign (say, positive), the following criterion can be used: The integral converges if F(t) vanishes at infinity to a higher order than the first, that is, if there exists a number a > 1 such that, for all values of t (no matter how large), 0 < F(t) <= M/t^a holds, where M is a constant. The integral diverges if F(t) vanishes at infinity to an order not higher than the first, that is, if there is a positive constant N such that tF(t) >= N > 0.³

It is of interest also to ask the converse question: Is the condition "F -> 0 as t -> inf" a necessary condition for int_0^inf F(t, y, y') dt to converge? Such a necessary condition seems intuitively plausible, and it is in fact commonly accepted as such.⁴ But counterexamples can be produced to show that an improper integral can converge even though F does not tend to zero. For instance, the integral int_0^inf sin(t^2) dt can be shown to converge to the value (1/2)sqrt(pi/2), even though the sine-function integrand does not have zero as its limit.⁵ As another example, if

    F(t) = 1  if n <= t <= n + 1/n^2  (n = 1, 2, 3, ...),  and 0 otherwise

then F(t) has no limit as t becomes infinite, yet the integral int_0^inf F(t) dt converges to the magnitude sum over n of (1/n^2). Note, however, that these counterexamples involve either an integrand that periodically changes sign, thereby allowing the contributions to the integral from neighboring time intervals to cancel out one another, or an unusual type of discontinuous integrand. Such functions are not usually used in the objective functionals of economic models. Typically, the integrand in an economic model, representing a profit function, utility function, and the like, is assumed to be continuous and nonnegative, and for such functions the counterexamples would be irrelevant.

Condition III  In the integral int_0^inf F(t, y, y') dt, if the integrand takes the form G(t, y, y')e^(-pt), where p is a positive rate of discount and the G function is bounded, then the integral will converge. A distinguishing feature of this integral is the presence of the discount factor e^(-pt), which, in and of itself, provides a dynamic force to drive the integrand down toward zero over time at a good speed. When the G(t, y, y') component of the integrand is positive (as in most economic applications) and has an upper bound G*, the value of the G function can never exceed the constant G*, so that

    int_0^inf G e^(-pt) dt <= int_0^inf G* e^(-pt) dt = G*/p

The last equality, based on the formula for the present value of a perpetual constant flow, shows that the second integral is convergent. It follows that the first integral, whose integrand is G <= G*, must also be convergent. We have here another sufficient condition for convergence. There also exist other sufficient conditions, but they are not simple to apply, and will not be discussed here. In the following, when general functions are used, we shall always assume that the integral is convergent; when specific functions are used, convergence is something to be explicitly checked.

³See R. Courant, Differential and Integral Calculus, 2d ed., translated from German by E. J. McShane, Interscience, New York, 1937, Vol. 1, pp. 249-250.
⁴See, for example, G. Hadley and M. C. Kemp, Variational Methods in Economics, North-Holland, Amsterdam, 1971, p. 52.
⁵See Watson Fulks, Advanced Calculus: An Introduction to Analysis, 3d ed., Wiley, New York, 1978, pp. 570-571.

The Matter of Transversality Conditions

Transversality conditions enter into the picture when either the terminal time or the terminal state, or both, are variable. In the finite-horizon problem of Chap. 3, these conditions emerge naturally as a by-product of the process of deriving the Euler equation. When the planning horizon is infinite, there is no longer a specific terminal value of T for us to adhere to, and the terminal state may also be left open. Thus transversality conditions are again needed. To develop them, we can use the same procedure as in the finite-horizon problem of Chap. 3.
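Conditions II and III are easy to illustrate numerically. A quick sketch (all specific functions and parameter values here are my own, chosen only for illustration):

```python
import math

def integrate(f, upper, n=100000):
    # simple composite trapezoidal rule on [0, upper]
    h = upper / n
    s = 0.5 * (f(0.0) + f(upper))
    for i in range(1, n):
        s += f(i * h)
    return s * h

# Condition II is false: both integrands tend to zero, but only the second converges.
for T in (10.0, 100.0, 1000.0):
    I1 = integrate(lambda t: 1.0 / (1.0 + t), T)        # ~ ln(1 + T): keeps growing
    I2 = integrate(lambda t: 1.0 / (1.0 + t) ** 2, T)   # ~ 1 - 1/(1 + T): settles near 1
    print(T, I1, I2)

# Condition III: a bounded G times the discount factor e^(-p t) converges,
# and can never exceed the perpetual-flow bound G*/p.
rho, Gbar = 0.05, 3.0
I3 = integrate(lambda t: (2.0 + math.sin(t)) * math.exp(-rho * t), 400.0)
print(I3, Gbar / rho)
```

The slowly vanishing integrand 1/(1 + t) grows without bound as the horizon is extended, while 1/(1 + t)^2 and the discounted bounded integrand settle down to finite values.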

In line with our earlier discussion of variable endpoints, the general transversality condition must now be written in limit form:

(5.4)  lim (t -> inf) (F - y'F_y') DT + lim (t -> inf) F_y' Dy_T = 0

where each of the two terms must individually vanish. Since there is no fixed T in the present context, DT is perforce nonzero, and this necessitates the condition

(5.5)  lim (t -> inf) (F - y'F_y') = 0        [transversality condition for infinite horizon]

This condition has the same economic interpretation as its finite-horizon counterpart in Chap. 3. As to the second term in (5.4), two cases must be distinguished. If an asymptotic terminal state is specified in the problem,

(5.8)  lim (t -> inf) y(t) = y_inf = a given constant

[which is the infinite-horizon counterpart of y(T) = Z in the finite-horizon problem], then the second term in (5.4) will vanish on its own (Dy_T = 0), and no further transversality condition is needed. But if the terminal state is free, we must impose

(5.7)  lim (t -> inf) F_y' = 0        [transversality condition for free terminal state]

This condition can be given the same economic interpretation as its finite-horizon counterpart: it requires the firm to take full advantage of all profit opportunities. If the problem is that of a profit-maximizing firm with the state variable y representing capital stock, the message of (5.7) is for the firm to use up all its capital as t -> inf. It may be added that if the free terminal state is subject to a restriction such as y_inf >= y_min, then the Kuhn-Tucker conditions should be used, as in the finite-horizon case. In practical applications, however, we can always apply (5.7) first; if the restriction y_inf >= y_min is satisfied by the solution, then we are done with the problem. Otherwise, we have to use y_min as a given terminal state.

Although these transversality conditions are intuitively reasonable, their validity is sometimes called into question. In the field of optimal control theory, to be discussed in Part 3, alleged counterexamples have been presented to show that standard transversality conditions are not necessarily applicable in infinite-horizon problems, and the doubts have spread to the calculus of variations. When we come to the topic of optimal control, we shall argue that these counterexamples are not really valid counterexamples. Nevertheless, it is only fair to warn the reader at this point that there exists a controversy surrounding this aspect of infinite-horizon problems.

One way out of the dilemma is to avoid the transversality conditions altogether and use plain economic reasoning to determine what the terminal state should be. In a problem of this type, there is usually an implicit terminal state y_inf, a steady state, toward which the system would gravitate in order to fulfill the objective of the problem. Such an approach is especially feasible in problems where the integrand function F either does not contain an explicit t argument (an "autonomous" problem in the mathematical sense) or contains the t argument explicitly only in the discount factor e^(-pt) (an "autonomous" problem in economists' usage). If we can determine y_inf, then we can simply use the terminal condition (5.8) in place of a transversality condition.

5.2 THE OPTIMAL INVESTMENT PATH OF A FIRM

The gross investment of a firm, I_g, has two components: net investment, I = dK/dt, and replacement investment, assumed equal to dK with d being the depreciation rate of capital K. Since both components of investment are intimately tied to capital, the determination of an optimal investment path, I_g*(t), is understandably contingent upon the determination of an optimal capital path, K*(t). Assuming that investment plans are always realizable, the optimal investment path, once K*(t) has been found, is simply

    I_g*(t) = K*'(t) + dK*(t)

If, however, there exist obstacles to the execution of investment plans, then some other criterion must be used to determine I_g*(t) from K*(t). In the present section, we present two models of investment that illustrate both of these eventualities.

The Jorgenson Model⁷

In the Jorgenson neoclassical theory of investment, the firm is assumed to produce its output with capital K and labor L via the "neoclassical production function" Q = Q(K, L), which allows substitution between the two inputs. This feature distinguishes it from the acceleration theory of investment, in which capital is tied to output in a fixed ratio. The neoclassical production function is usually accompanied by the assumptions of positive but diminishing marginal products (Q_K, Q_L > 0; Q_KK, Q_LL < 0) and constant returns to scale (linear homogeneity).

⁷Dale W. Jorgenson, "Capital Theory and Investment Behavior," American Economic Review, May 1963, pp. 247-259. The version given here omits certain less essential features of the original model.

The firm's cash revenue at any point of time is PQ, where P is the given product price. Its cash outlay at any time consists of its wage bill, WL (W denotes the money wage rate), and its expenditure on new capital, mI_g (m denotes the price of a "machine"). Thus the net revenue at any point of time is

    PQ(K, L) - WL - mI_g = PQ(K, L) - WL - m(K' + dK)

Applying the discount factor e^(-pt) to this expression and summing over time, we can express the present-value net worth N of the firm as

(5.9)  N = int_0^inf [PQ(K, L) - WL - m(K' + dK)] e^(-pt) dt

The firm's objective is to maximize its net worth N by choosing an optimal K path and an optimal L path. There are two state variables in this problem, K and L, so there will be two Euler equations, yielding two optimal paths, K*(t) and L*(t). We may expect the firm to have a given initial capital, K_0, but the terminal capital is left open.

The objective functional in (5.9) is an improper integral. In view of the presence of the discount factor e^(-pt), the integral will converge, according to Condition III of the preceding section, if the bracketed net-revenue expression has an upper bound. Such would be the case if we can rule out an infinite value for K'.

Optimal Capital Stock

In applying the Euler equations to the present model, we first observe that the integrand in (5.9) is linear in K', and that the L' term is altogether absent. The partial derivatives of the integrand are

    F_K = (PQ_K - md)e^(-pt)        F_K' = -m e^(-pt)
    F_L = (PQ_L - W)e^(-pt)         F_L' = 0

Since F_K' and F_L' contain no derivatives of the state variables, the Euler equations will not be differential equations. They indeed emerge as the twin conditions

(5.10)  Q_K = m(d + p)/P   and   Q_L = W/P        for all t >= 0

Here, md means the marginal depreciation cost, and mp means the interest-earning opportunity cost of holding capital; their sum constitutes the marginal user cost of capital. The two conditions in (5.10) are thus nothing but the standard "marginal physical product = real marginal resource cost" conditions that arise in static contexts, but they are now required to hold at every t.

The important point about (5.10) is that it implies constant K* and L* solutions. Taking a generalized Cobb-Douglas production function Q = K^a L^b (a + b differing from 1), we have Q_K = aK^(a-1)L^b and Q_L = bK^a L^(b-1). When these are substituted into (5.10), we obtain (after eliminating L)

(5.11)  K* = [m(d + p)/(aP)]^((b-1)/(1-a-b)) [W/(bP)]^(-b/(1-a-b)) = constant

And substitution of this K* back into (5.10) can give us a similar constant expression for L*.⁸ The constant K* value represents a first-order necessary condition to be satisfied at all points of time, including t = 0. Therefore, unless the firm's initial capital, K_0, happens to be equal to K*, the condition cannot be satisfied; that is, if K is not allowed to make a vertical jump from K_0 to K*, the problem is degenerate.

The Flexible Accelerator

If it is not feasible to force a jump in the state variable K from K_0 to K*, the alternative is a gradual move to K*. Jorgenson adopts what amounts to the so-called flexible-accelerator mechanism to effect a gradual adjustment in net investment. The essence of the flexible accelerator is to eliminate in a systematic way the existing discrepancy between a target capital, K^, and the actual capital at time t, K(t):

(5.12)  I(t) = K'(t) = j[K^ - K(t)]        (j > 0)

where j denotes the speed of adjustment.⁹ The gradual nature of the adjustment in the capital level would presumably enable the firm to alleviate the difficulties that might be associated with abrupt and wholesale changes. While the particular construct in (5.12) was originally introduced by economists merely as a convenient postulate, a theoretical justification has been provided for it in a classic paper by Eisner and Strotz.

⁸This fact is immediately obvious from the second equation in (5.10).
⁹The correct application of the flexible accelerator calls for setting the target K^ at the level in (5.11), but the Jorgenson paper actually uses a substitute target that varies with time. For a discussion of this, see John P. Gould, "The Use of Endogenous Variables in Dynamic Models of Investment," Quarterly Journal of Economics, November 1969, pp. 580-599.
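Under the Cobb-Douglas specification, the twin conditions (5.10) are log-linear in K and L, so the constant K* and L* can be computed by solving a 2 x 2 linear system in (ln K, ln L). A sketch with hypothetical parameter values (all numbers are mine, chosen only for illustration):

```python
import numpy as np

# Hypothetical parameter values
alpha, beta = 0.3, 0.5                      # Cobb-Douglas exponents, alpha + beta != 1
P, W, m, delta, rho = 2.0, 1.0, 1.5, 0.1, 0.05

# Taking logs of Q_K = m(delta+rho)/P and Q_L = W/P gives a linear system
# in (ln K, ln L):  (alpha-1)k + beta*l = c1,  alpha*k + (beta-1)*l = c2
c1 = np.log(m * (delta + rho) / (alpha * P))
c2 = np.log(W / (beta * P))
A = np.array([[alpha - 1.0, beta],
              [alpha, beta - 1.0]])
k, l = np.linalg.solve(A, np.array([c1, c2]))
K_star, L_star = np.exp(k), np.exp(l)

# Verify the twin marginal-product conditions (5.10)
Q_K = alpha * K_star ** (alpha - 1.0) * L_star ** beta
Q_L = beta * K_star ** alpha * L_star ** (beta - 1.0)
print(K_star, L_star, Q_K, Q_L)
```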

The Eisner-Strotz Model¹⁰

The Eisner-Strotz model focuses on net investment as a process that expands a firm's plant size, as measured by its capital stock K; replacement investment is ignored. To expand the plant, an adjustment cost C must be incurred, whose magnitude varies positively with the speed of expansion K'(t). So we have an increasing function C = C(K'), drawn in Fig. 5.1 only in the first quadrant (where K' > 0) because we are confining our attention to the case of plant expansion. Through the C function, the firm's internal difficulties of plant adjustment as well as the external obstacles to investment (such as pressure on the capital-goods industry's supply) can be explicitly taken into account in the optimization problem of the firm.

Assuming that the firm has knowledge of the profit rate associated with each plant size, we have a profit function pi(K), which, incidentally, provides an interesting contrast to the profit function pi(L) in the labor-adjustment model of Sec. 3.4. The objective of the firm is to choose a K*(t) path that maximizes the total present value of its net profit over time:

(5.13)  Maximize  P[K] = int_0^inf [pi(K) - C(K')] e^(-pt) dt
        subject to  K(0) = K_0        (K_0 given)

Note that although we have specified a fixed initial capital stock, the terminal capital stock is left open. Note also that this problem is autonomous. The functional P[K] is again an improper integral, but inasmuch as the net return pi(K) - C(K') can be expected to be bounded from above, convergence should not be a problem by Condition III of the preceding section.

The Quadratic Case

It is possible to apply the Euler equation to the general-function formulation in (5.13). For more specific results, however, let us assume that both the pi and C functions are quadratic:

(5.14)  pi = aK - bK^2        (a, b > 0)
(5.15)  C = aK'^2 + bK'       (a, b > 0; these coefficients are distinct from those of (5.14))

The profit rate (5.14) has a maximum, attained at the plant size K = a/2b, as graphed in Fig. 5.1; considered together, the pi and C curves provide an upper bound for the net return pi - C. In this quadratic model, the derivatives of the integrand are

(5.16)  F_K = (a - 2bK)e^(-pt)        F_K' = -(2aK' + b)e^(-pt)

so that the Euler equation emerges as

(5.17)  K'' - pK' - (b/a)K = (bp - a)/2a

with general solution¹¹

(5.18)  K*(t) = A_1 e^(r_1 t) + A_2 e^(r_2 t) + K^

¹⁰Robert Eisner and Robert H. Strotz, "Determinants of Business Investment," in Impacts of Monetary Policy, A Series of Research Studies Prepared for the Commission on Money and Credit, Prentice-Hall, Englewood Cliffs, NJ, 1963, pp. 60-233. The theoretical model discussed here is contained in Sec. 1 of their paper; see also their App. A.
¹¹The Euler equation given by Eisner and Strotz contains a minor error: the coefficient of the K term in (5.17) is shown as -b/2a instead of -b/a. As a result, their characteristic roots and particular integral do not coincide with ours. This fact, however, does not affect their qualitative conclusions.

where

(5.19)  r_1, r_2 = (1/2)[p + sqrt(p^2 + 4b/a)], (1/2)[p - sqrt(p^2 + 4b/a)]    [characteristic roots]
        K^ = (a - bp)/2b                                                        [particular integral]

The characteristic roots r_1 and r_2 are both real, because the expression under the square-root sign is positive. Moreover, since the square root itself exceeds p, we know that r_1 > p > 0 > r_2, and, from (5.19), r_1 + r_2 = p.

As to the particular integral K^, a few things may be noted. First, it is the logical candidate for the ultimate plant size, because it represents the intertemporal equilibrium level of K. Even though no explicit terminal state is given in the model, the autonomous nature of the problem suggests that it has an implicit terminal state: the ultimate plant size (capital stock) to which the firm is to expand. For the problem to be sensible, economic considerations would dictate that K^ be positive; given that b and p are positive, we must therefore stipulate that a - bp > 0. Second, K^ = (a - bp)/2b is smaller than a/2b, the profit-maximizing plant size shown in Fig. 5.1. In the absence of adjustment costs, the highest attainable profit rate occurs at plant size K = a/2b, and we would not expect the firm to want to expand beyond that size; after the cost function C is explicitly taken into account, the firm may very well wish to select an ultimate size even smaller than the profit-maximizing one.

The Definite Solution

To get the definite solution, we first make use of the initial condition K(0) = K_0, which enables us, after setting t = 0 in (5.18), to write

(5.20)  A_1 + A_2 + K^ = K_0

Referring to (5.18), we see that while the second exponential term (where r_2 < 0) tends to zero as t -> inf, the first exponential term (where r_1 > 0) tends to plus infinity if A_1 > 0, and to minus infinity if A_1 < 0. Since neither outcome is acceptable as the terminal value for K on economic grounds, the only way out is to set A_1 = 0. Coupling this information with (5.20), we then have A_2 = K_0 - K^, so that the definite solution is

(5.21)  K*(t) = (K_0 - K^)e^(r_2 t) + K^

Since r_2 < 0, K*(t) converges to the particular integral K^ as t -> inf; thus K^ indeed emerges as the optimal terminal plant size. The discrepancy between the initial and terminal sizes is subject to exponential decay, and the K*(t) path therefore has the general shape illustrated in Fig. 5.2.

The Transversality Conditions

Although we have been able to definitize the arbitrary constants without using any transversality conditions, it would be interesting to see where the transversality condition (5.5) would lead us. Using the information in (5.18) and its derivative

(5.22)  K*'(t) = A_1 r_1 e^(r_1 t) + A_2 r_2 e^(r_2 t)

and plugging these into the expression F - K'F_K', we get a complicated result (5.22') whose terms are all exponential in form. The transversality condition is that (5.22') vanish as t -> inf. The terms involving only A_2 and K^ cause no difficulty, since r_2 is negative and each such term decays over time. The terms involving A_1, however, are troublesome: those in e^((2r_1 - p)t) are explosive, since 2r_1 > p, while the cross term in e^((r_1 + r_2 - p)t) reduces to e^0 = 1 (because r_1 + r_2 = p) and thus will not tend to zero at all; worse, it is uncertain in sign. In order to force these terms to go to zero, the only way is to set A_1 = 0. Thus, the transversality condition leads us to precisely what we have concluded via economic reasoning.

EXERCISE 5. the transversality condition leads us to precisely what we have concluded via economic reasoning. but what is saved always results in investment and capital accumulation.110 PART 2: THE CALCULUS OF VARIATIONS CHAPTER 6: INFINITE PLANNING HORIZON 111 A. Un(C) s 0 This speci. As to transversality condition (5. and is well worth a careful review.K Among the very first applications of the calculus of variations to economics is the classic paper by Frank Ramsey on the optimal social saving behavior.r2 a positive fraction.r2 must be less than one. The production function.21) to check your answer to the preceding problem. has thus contributed a much needed theoretical justification for the popularly used flexible-accelerator mechanism.24) is necessary to make . ( K O. the amount of labor services rendered can still vary. . 1.or What makes the flexible accelerator in (5. Then. and hence yield future utility? The output is assumed to be produced with two inputs. Ramsey.23') fundamentally different from (5. and K*(t). . Use the general Euler equation you derived in the preceding problem for (5.2 1 Let the profit function and the adjustment-cost fundion (5.21) implies that 6.23') I*(t) = r . However. capital K and labor L. it will also tell us to set A.3 THE OPTIMAL SOCIAL SAVING BEHAVIOR K*(t) . The discrepang between if. 4 Let the profit function be rr = 50K .23) can be simplified to (5. I The Optimal Investment Path and the Flexible Accelerator If the adjustment-cost function C = C(K1) fully takes into account the various internal and external difficulties of adjustment.23') is that it exactly depicts the flexible-accelerator mechanism in (5. but leave the discount rate p in the parametric form.K 2 as in Prob. Find the optimal path of K (definite solution) by directly applying the Euler equation (2. The Eisner-Strotz model. The output can either be consumed or saved." Economic Journal. 5 Demonstrate that the parameter restriction (5. 543-559. 
in the flexible accelerator. Other simplifying assumptions include the absence of depreciation for capital and a stationary population. the actual plant size on the extremal at any point of time. "A Mathematical Theory of Saving. with nonincreasing marginal utility. in its quadratic version. rank P. L). but an optimization rule that emerges from the solution procedure. = 0 in order to tame an explosive exponential expression.21): 3 3 * But since (5. This requirement is met if we impose the following additional restriction on the model: The Ramsey Model The central question addressed by Ramsey is that of intertemporal resource allocation: How much of the national output at any point of time should be for current consumption to yield current utility. [ ~ * ( t ).12) is that the former is no longer a postulated behavior pattern. = 0. the target plant size. The only unsettled matter is that. Q = Q(K. since no technological progress is assumed.14) and (5.13).7). using the standard symbols.15) take the specific forms Consumption contributes to social welfare via a social utility (index) function U(C).12). and how much should be saved (and invested) so as to enhance future production and consumption.12 This paper has exerted an enormous if delayed influence on the current literature on optimal economic growth. December 1928. then the optimal investment path is simply the derivative of the definite solution (5. we have Q=C+S=C+K'. 2 Use the solution formula (5. is time invariant.19). But if we try to apply it. pp.13) to get the optima1 K path. Thus.W ]= -$[E -~ * ( t ) ] The remarkable thing about (5. is systematically eliminated through the positive coefficient . but change the adjustment-cost function to C = K".r. 3 Find the Euler equation for the general formulation of the Eisner-Strotz model as given in (5.R)erz" the investment path (5.. it is not really needed in this problem.

The economic planner's problem is to maximize the social utility for the current generation as well as for all generations to come:

    (5.26)    Maximize  ∫₀^∞ [U(C) − D(L)] dt
              subject to  K(0) = K0    (K0 given)

In the integrand, U depends solely on C, whereas D depends only on L; and C, by the allocation identity C = Q(K, L) − K', in turn depends on K, L, and K'. Thus the problem has two state variables, K and L.

FIGURE 5.3

The Question of Convergence

Unlike the functional of the Eisner-Strotz model, the improper integral in (5.26) does not contain a discount factor. This omission is not the result of neglect; it stems from Ramsey's view that it is "ethically indefensible" for the (current-generation) planner to discount the utility of future generations. While this may be commendable on moral grounds, the absence of a discount factor unfortunately forfeits the opportunity to take advantage of Condition III of Sec. 5.1 to establish convergence. Indeed, since the net utility is expected to be positive as t becomes infinite, the integral is likely to diverge.

To handle this difficulty, Ramsey replaces (5.26) with the following substitute problem:

    (5.26')   Minimize  ∫₀^∞ [B − U(C) + D(L)] dt
              subject to  K(0) = K0

where B (for Bliss) is a postulated maximum attainable level of net utility; there is, in other words, an upper bound for utility. That level of utility is attainable at a finite consumption level in Figs. 5.3a and 5.3b, but can only be approached asymptotically in Fig. 5.3c. Since the new functional measures the amount by which the net utility U(C) − D(L) falls short of Bliss, it is to be minimized rather than maximized. Intuitively, an optimal allocation plan should either take society to Bliss, or lead it to approach Bliss asymptotically; in either case, the integrand in (5.26') will fall steadily to the zero level.

Does this resolve the convergence problem? Our earlier discussion of Condition I in Sec. 5.1 would confirm that if the integrand attains zero at some finite time and remains at zero thenceforth, then convergence indeed is ensured. But the mere fact that the integrand tends to zero as t becomes infinite does not in and of itself guarantee convergence: the integrand must also fall sufficiently fast over time. Even though the convergence problem is not clearly resolved by Ramsey, this device of measuring the shortfall from Bliss, referred to as "the Ramsey device," is widely accepted, and we shall now proceed on the assumption that the general-function integrand in (5.26') does converge. If so, the convergence problem is thereby resolved.

The Solution of the Model

The problem as stated in (5.26') is an autonomous problem in the state variables K and L. Because there is no L' term in the integrand, however, the problem is degenerate on the L side, and in general it would be inappropriate to stipulate an initial condition on L.
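To make the "sufficiently fast" requirement concrete, the sketch below contrasts two assumed decay profiles for the Bliss shortfall (both profiles are my own illustrations, not from the text): an exponentially vanishing shortfall, whose integral settles down, and a 1/(1 + t) shortfall, which also tends to zero yet has a divergent integral.

```python
import math

# Two assumed decay profiles for the Bliss shortfall B - U + D:
# an exponential one (integrable) and a 1/(1+t) one (not integrable).
fast = lambda t: math.exp(-t)
slow = lambda t: 1.0 / (1.0 + t)

def integral(f, T, h=0.01):
    # crude left-Riemann approximation of the integral of f over [0, T]
    return sum(f(i * h) for i in range(int(T / h))) * h

gain_fast = integral(fast, 1000.0) - integral(fast, 100.0)
gain_slow = integral(slow, 1000.0) - integral(slow, 100.0)
print(round(gain_fast, 6))   # ~ 0: the integral has already converged
print(round(gain_slow, 2))   # ~ 2.29 (= ln(1001/101)): still growing without bound
```

Both integrands go to zero, but only the first accumulates a finite total shortfall, which is exactly the distinction the Ramsey device needs.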

Substituting C = Q(K, L) − K' into the integrand F = B − U(C) + D(L), we can obtain the derivatives

    F_L = −μQ_L + D'(L)        F_L' = 0
    F_K = −μQ_K                F_K' = μ

where, for notational simplicity, we are using μ to denote U'(C), the marginal utility of consumption. (The abbreviation of marginal utility is "mu.") Like U(C), μ is a function of C and hence indirectly a function of K, L, and K'. These derivatives will enable us to apply the Euler equations.

Because there is no L' term in the integrand, F_L' = 0, and the Euler equation for the L variable reduces simply to F_L = 0, or

    (5.27)    D'(L) = μQ_L    for all t ≥ 0

that is, the marginal disutility of labor must, at each point of time, be equated to the product of the marginal utility of consumption and the marginal product of labor. Given μ, (5.27) can be used to determine an optimal L path. Note that, since the problem is degenerate in L, it is not appropriate to preset an initial level L(0).

On the K side, the Euler equation F_K − dF_K'/dt = 0 gives us the condition −μQ_K − dμ/dt = 0, or, with the time argument explicitly written out,

    (5.28)    μ'(t)/μ(t) = −Q_K

This result prescribes a rule about consumption: μ, the marginal utility of consumption, must at every point of time have a growth rate equal to the negative of the marginal product of capital. Once the optimal μ path is found, we can map out the optimal path for C and, via (5.27), for L.

The Optimal Investment and Capital Paths

Of greater interest to us is the optimal K path and the related investment (and savings) path K'. Instead of deducing these from the previous results, let us find this information by taking advantage of the fact that the present problem falls under Special Case II of Sec. 2.2. Without t as an explicit argument in the integrand function, we have F − K'F_K' = constant, that is,

    (5.29)    B − U(C) + D(L) − K'μ = constant    for all t ≥ 0

This equation can be solved for K' as soon as the arbitrary constant on the right-hand side can be assigned a specific value. To find this value, we note that the constant is to hold for all t, including t → ∞. As t → ∞, the economic objective of the model is to have the net utility U(C) − D(L) tend to Bliss, so that B − U(C) + D(L) tends to zero, while μ tends to zero as well. It follows that the arbitrary constant in (5.29) has to be zero. Setting the constant equal to zero and solving (5.29) for K', we obtain the optimal path of K':

    (5.30')   K' = [B − U(C) + D(L)]/μ

This result is known as the Ramsey rule. It stipulates that, at each point of time, the rate of capital accumulation must be equal to the ratio of the shortfall of net utility from Bliss to the marginal utility of consumption. On the surface, this rule is curiously independent of the production function, and this led Ramsey to conclude (page 548) that the production function would matter only insofar as it may affect the determination of Bliss. But this conclusion is incorrect, for the consumption variable C in (5.30'), being equal to Q(K, L) − K', does depend crucially on the production function.

A Look at the Transversality Conditions

We shall now show that the use of the infinite-horizon transversality condition would also dictate that the constant in (5.29) be set equal to zero. When condition (5.5) is applied to the two state variables in the present problem, it requires that

    lim (F − L'F_L') = 0    and    lim (F − K'F_K') = 0    (as t → ∞)

In view of the fact that F_L' = 0, the first of these conditions reduces to the condition that F → 0 as t → ∞. This would mean that the net utility U(C) − D(L) must tend to Bliss. The other condition will fix the constant at zero, because F − K'F_K' is nothing but the left-hand-side expression in (5.29). Thus the transversality conditions lead us to precisely the conclusions reached earlier.
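The Ramsey rule (5.30') can be given a quick numerical illustration. The functional forms below are my own assumptions, not the text's: a quadratic utility with Bliss at consumption CB (so B = U(CB)), the labor term dropped, and Q = 2√K with no capital saturation. At each instant the rule U'(C)·K' = B − U(C), with C = Q(K) − K', pins down the saving rate; integrating it forward sends K toward the capital level at which Q(K) = CB, with saving tapering off to zero.

```python
B, C_B = 10.0, 5.0                     # Bliss utility level and consumption (assumed)
U  = lambda c: B - (C_B - c) ** 2      # quadratic utility, peak (Bliss) at C_B
Up = lambda c: 2 * (C_B - c)           # marginal utility U'(C), zero at Bliss
Q  = lambda K: 2 * K ** 0.5            # production function, Q'(K) > 0 throughout

def saving(K):
    # Ramsey rule with the constant fixed at zero and D(L) dropped:
    # solve U'(Q - s) * s = B - U(Q - s) for s = K' by bisection on (0, Q(K))
    q = Q(K)
    lo, hi = 1e-12, q - 1e-12
    for _ in range(80):
        mid = (lo + hi) / 2
        g = Up(q - mid) * mid - (B - U(q - mid))
        if g > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Integrate K' = saving(K): capital climbs toward the level where Q(K) = C_B,
# i.e. K -> 6.25 with these numbers, while the saving rate tapers to zero.
K, h = 4.0, 0.02
for _ in range(10000):
    K += h * saving(K)
print(round(K, 2))             # ~ 6.25
print(round(saving(4.0), 6))   # ~ 1.0, so C* = Q - K' = 3 at K = 4
```

The bisection step is just a convenient way to solve the rule for K' at each instant; with these quadratic forms it could also be solved in closed form.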

EXERCISE 5.3

1  Let the production function and the utility function in the Ramsey model take specific forms, with Q = rK and initial capital K0, the D(L) term being absent.
(a) Apply the condition (5.30') to find K*'(t), retaining the arbitrary constant. Discuss the characteristics of this path.
(b) Apply the Euler equation (2.18) or (2.19) to get a second-order differential equation. Find the general solution K*(t), with arbitrary constants A1 and A2.
(c) Find the savings ratio K*'(t)/Q*(t) = K*'(t)/rK*(t) implied by the result in (b). Show that this ratio approaches one as t → ∞.
(d) Is a unitary limit for the savings ratio economically acceptable? Use your answer to this question to definitize one of the constants. Then definitize the other constant by the initial condition K(0) = K0.

2  Find the K*(t) path for the preceding problem by the alternative procedure outlined below:
(a) Express the integrand of the functional ∫ [B − U(C)] dt in terms of K and K', that is, as F(K, K'). [Hint: since the D(L) term is absent in the problem, the net-utility expression simplifies accordingly.]
(b) Use the Ramsey rule (5.28) to derive a C*(t) path.
(c) Bearing in mind that Q0 = C*(0) + K*'(0) (allocation of Q0) and Q0 = rK0 (production of Q0), find an expression for C*(0), and, using the relationship between K*'(0) and C*(0), write out the definite solution for K*'(t). How specifically are the C*(t) and K*'(t) paths related to each other?
(d) Integrate K*'(t) to obtain the K*(t) path. What is the rate of growth of K*? How does that rate vary over time? Compare the resulting definite solution with the one obtained in the preceding problem.

5.4  PHASE-DIAGRAM ANALYSIS

In models of dynamic optimization, two-variable phase diagrams are prevalently employed to obtain qualitative analytical results.13 This is particularly true when general functions are used in the model. For a calculus-of-variations problem with a single state variable, the Euler equation comes as a single second-order differential equation. But it is usually a simple matter to convert that equation into a system of two simultaneous first-order equations in two variables, and the two-variable phase diagram can then be applied to the problem in a straightforward manner. We shall illustrate the technique in this section with the Eisner-Strotz model and the Ramsey model. We shall then adapt the phase diagram to a finite-horizon context.

13The method of two-variable phase diagrams is explained in Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, Chap. 18.

The Eisner-Strotz Model Once Again

In their model of a firm's optimal investment in its plant, discussed in Sec. 5.2, Eisner and Strotz are able to offer a quantitative solution to the problem when specific quadratic profit and cost-of-adjustment functions are assumed. Writing these functions as

    π = αK − βK²    (α, β > 0)    [from (5.14)]
    C = aK'² + bK'    (a, b > 0)    [from (5.15)]

they derive the Euler equation

    (5.31)    K'' − ρK' − (β/a)K = (bρ − α)/(2a)    [from (5.17)]

where ρ is a positive rate of discount. The general solution is

    (5.32)    K*(t) = A1 e^(r1 t) + A2 e^(r2 t) + K̄
              with  r1, r2 = (1/2)[ρ ± √(ρ² + 4β/a)]  and  K̄ = (α − bρ)/2β    [from (5.21)]

The Ramsey model, in contrast, is couched in terms of the general functions U(C), D(L), and Q(K, L); we shall not attempt to illustrate it with specific functions, but shall instead show the application of phase-diagram analysis to it later in this section.

From the definite solution of the Eisner-Strotz model, we see that the capital stock K (the plant size) should optimally approach its specific target value by following a steady time path that exponentially closes the gap between the initial value and the target value of K. Now we shall show how the same Euler equation can be analyzed by means of a phase diagram. We begin by introducing a variable I (net investment)

    (5.33)    I(t) = K'(t)    [implying I'(t) = K''(t)]

which enables us to rewrite the Euler equation (5.31) as a system of two first-order differential equations:

    (5.34)    I' = ρI + (β/a)K + (bρ − α)/(2a)    [i.e., I' = f(I, K)]
              K' = I                               [i.e., K' = g(I, K)]

This is an especially simple system, since both the f and g functions are linear in the two variables I and K, and in fact K is even absent in the g function.

The Phase Diagram

To construct the phase diagram, our first order of business is to draw the two demarcation curves, I' = 0 and K' = 0. Individually, each of these curves serves to delineate the subset of points in the IK space where the variable in question can be stationary (dI/dt = 0 or dK/dt = 0). Jointly, the two curves determine at their intersection the intertemporal equilibrium (the steady state) of the entire system (dI/dt = 0 and dK/dt = 0). Setting I' = 0 in (5.34) and solving for K, we get

    (5.35)    K = (α − bρ)/2β − (aρ/β)I    [equation for I' = 0 curve]

Similarly, by setting K' = 0, we get

    (5.36)    I = 0    [equation for K' = 0 curve]

These two equations both plot as straight lines in the phase space, with the I' = 0 curve sloping downward, and the K' = 0 curve coinciding with the vertical axis, as shown in Fig. 5.4. The unique intertemporal equilibrium occurs at the vertical intercept of the I' = 0 curve, K = (α − bρ)/2β, a positive magnitude by the parameter restriction adopted earlier. Note that this equilibrium coincides exactly with the particular integral (target plant size) found before. This is of course only to be expected.

To determine the direction of movement at points off the demarcation curves, we find by differentiation of (5.34) that

    ∂I'/∂I = ρ > 0    and    ∂K'/∂I = 1 > 0

The positive sign of ∂I'/∂I implies that, moving from west to east (with I increasing), I' should be going from a negative region, through zero, to a positive region. To record this (−, 0, +) sign sequence, a minus sign has been added to the left, and a plus sign to the right, of the I' = 0 label, and we have accordingly drawn leftward I-arrowheads (I' < 0) to the left, and rightward I-arrowheads (I' > 0) to the right, of the I' = 0 curve. The positive sign of ∂K'/∂I similarly implies that, going from west to east, K' should be moving from a negative region, through zero, to a positive region; the sign sequence is again (−, 0, +), with the minus sign to the left, and the plus sign to the right, of the K' = 0 label. The K-arrowheads are thus downward (K' < 0) to the left, and upward (K' > 0) to the right, of the K' = 0 curve.

Since the stationarity of I forbids any east-west movement for points on the I' = 0 curve, we have drawn a couple of vertical "sketching bars" on that curve; similarly, horizontal sketching bars have been attached to the K' = 0 curve to indicate that the stationarity of K forbids any north-south movement on that curve. Points off the demarcation curves are, on the other hand, very much involved in dynamic motion. The direction of movement depends on the signs of the derivatives I' and K' at the particular point in the IK space, and the speed of movement depends on the magnitudes of those derivatives.

FIGURE 5.4
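The saddle-point character of this linear system can be verified directly: the Jacobian of (5.34) has a negative determinant, so its eigenvalues are real and of opposite sign. The parameter values below are illustrative assumptions of my own, not the text's.

```python
import math

# Jacobian of the assumed linear system I' = rho*I + (beta/a)*K + const, K' = I
rho, beta, a = 0.05, 1.0, 1.0
J = [[rho, beta / a],
     [1.0, 0.0]]

tr  = J[0][0] + J[1][1]                       # trace = rho
det = J[0][0] * J[1][0-0+1] * 0 + J[0][0] * J[1][1] - J[0][1] * J[1][0]  # ad - bc
disc = math.sqrt(tr**2 - 4 * det)
lam1 = (tr + disc) / 2                        # unstable (positive) eigenvalue
lam2 = (tr - disc) / 2                        # stable (negative) eigenvalue

print(det < 0)           # True: negative determinant forces opposite-sign roots
print(lam1 > 0 > lam2)   # True: one unstable and one stable direction, a saddle
```

The eigenvalues coincide with the characteristic roots r1 and r2 of the second-order Euler equation, which is exactly why one branch must be suppressed by the transversality condition.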

Following the directional restrictions imposed by the IK-arrowheads and sketching bars, we can draw a family of streamlines, or trajectories, to portray the dynamics of the system from any conceivable initial point. Every point should be on some streamline, and there should be an infinite number of streamlines; but usually we only draw a few. The reader should verify in Fig. 5.4 that the streamlines conform to the IK-arrowheads and, in particular, that they are consistent with the sketching bars, which require them to cross the I' = 0 curve with infinite slope and cross the K' = 0 curve with zero slope.

The Saddle-Point Equilibrium

The intertemporal equilibrium of the system occurs at point E, the intersection of the two demarcation curves. The way the streamlines are structured, we have a so-called saddle-point equilibrium: two stable branches leading toward E (one from the southeast direction and the other from the northwest), two unstable branches leading away from E (one toward the northeast and the other toward the southwest), and the other streamlines heading toward E at first but then turning away from it.

While all the streamlines satisfy the Euler equation, only the one that also satisfies the transversality condition or its equivalent, that is, one that lies along a stable branch, qualifies as an optimal path. It is clear that the only way to reach the target level of capital at E is to get onto the stable branches. Given the initial capital K0, with K0 < K̄ for instance, it is mandatory that we select I*(0), the initial rate of investment, so as to place us on the "yellow brick road" to the equilibrium; a different initial capital will, of course, call for a different I*(0). The restriction on the choice of I*(0) is thus the surrogate for a transversality condition. And this serves to underscore the fact that the transversality condition constitutes an integral part of the optimization rule.

Traveling toward the northwest on the stable branch, we find a steady climb in the level of K, implying a steady narrowing of the discrepancy between K0 and the target capital level. Accompanying the increase in K is a steady decline in the rate of investment I. This is consistent with the earlier quantitative solution, for by differentiating (5.32) twice with respect to t, with A1 = 0, we do find

    I*'(t) = K*''(t) = r2²(K0 − K̄)e^(r2 t) < 0

since, in the present example, K0 < K̄.

A Simplified Ramsey Model

To simplify the analysis of the Ramsey model, we shall assume that the labor input is constant, thereby reducing the production function to Q(K). Then we will have only one state variable and one Euler equation with which to contend. Another consequence of omitting L and D(L) is to make Bliss identical with the upper bound of the utility function.

The Euler equation for the original Ramsey model can be written as

    (5.38')   μ'(t) = −μQ_K(K, L)    [from (5.28)]

which, when adapted to the present simplified context, becomes

    (5.40)    μ'(t) = −μQ'(K)

The replacement of Q_K(K, L) by Q'(K) is due to the elimination of L, making explicit the dependence of the marginal product on K alone. This is a first-order differential equation in the variable μ. However, since the variable K also appears in the equation, we need a differential equation in the variable K to close the dynamic system. This need is satisfied by the allocation identity C = Q(K) − K', rewritten as

    (5.39)    K' = Q(K) − C(μ)

The substitution of C by the function C(μ) reflects the fact that, in the usual marginal-utility function, μ is a monotonically decreasing function of C; since μ(C) is monotonic, the inverse function C(μ) exists, with C'(μ) < 0, so that C can be taken as an inverse function of the marginal utility μ. Combining (5.39) and (5.40), we get a two-equation system in the variables K and μ.

Since this system contains two general functions, Q(K) and C(μ), certain qualitative restrictions should be placed on their shapes before we proceed. For the time being, let us assume that the U(C) function is shaped as in the top diagram of Fig. 5.5, with B as the Bliss level of utility and CB the Bliss level of consumption. There exists consumption saturation at CB, and further consumption beyond that level entails a reduction in U. From this function, we can derive the marginal-utility function μ(C) in the diagram directly beneath, with μ = 0 at C = CB. The diagram in the lower-left corner, on the other hand, sets forth the assumed shape of Q(K), here plotted (for a good reason) with Q on the horizontal rather than the vertical axis. The underlying assumption for this curve is that the marginal product of capital Q'(K) is positive throughout (there is no capital saturation), but the law of diminishing returns prevails throughout.

' Since the sketching bars coincide with the p' = 0 curve. into an equal horizontal distance in the phase space) to complete the formation of a rectangle. thus we denote it by K. To satisfy (5. The vertical height of point G may be interpreted as the Bliss level of K. Now the point ( K = KB. Other ordered pairs that qualify as points on the K' = 0 curve can be derived similarly. we include in this diagram set a 45" line diagram (with K on both axes) and a phase-space diagram which is to serve as the repository of the K' = 0 curve being derived. The second results from our aasumption of no capital saturation: Since Q'(K) + 0. the C axis and the Q axis are not inherently the same. -. . that is. 5.5. which is characterized by K = K. horizontal. and each pair of adjacent diagrams in the same "column" has a common horizontal axis-except that. but rather to be made the same via a deliberate equating process... p = 0) can be traced out via the dashed rectangle.GHK. Those for the . should coincide with the K axis. according to (5. and then go straight up into the phase space (thereby translating the vertical height of point G. the Q(K) curve.41).5.411. p = 0 curve are. p = 0) then clearly qualifies as a point on the Kt = 0 curve in the phase space. At the intersection of these two curves is the unique equilibrium point E. if we go straight down to meet the Q(K) curve [thereby satisfying the condition (5. we simply select point G on the Q(K) curve. we equate Q(K) and C(p) in accordance with (5. and add thereto the CL) = 0 curve.41)]. Consider.. and which therefore corresponds to Bliss. for instance.. we need the K' = 0 and p = 0 auves. in the left column. the following two equations readily emerge: (5. then the fourth corner of this rectangle must be a point on the K' = 0 curve.42) The Saddle-Point Equilibrium We now reproduce the K' = 0 curve in Fig. and they lie entirely along that curve. l U'(C) Phase a I I To find the proper shape of the K' = 0 curve. C. 
FIGURE 5.5

The Phase Diagram

To construct the phase diagram, we need the K' = 0 and μ' = 0 curves. From (5.39) and (5.40), the following two equations readily emerge:

    (5.41)    Q(K) = C(μ)    [equation for K' = 0 curve]
    (5.42)    μ = 0          [equation for μ' = 0 curve]

The first of these is self-explanatory. The second results from our assumption of no capital saturation: since Q'(K) is never zero, we can have μ' = 0 if and only if μ = 0, so the μ' = 0 curve coincides with the K axis.

To find the proper shape of the K' = 0 curve, we equate Q(K) and C(μ) in accordance with (5.41). The easiest way to trace out the entire curve is by means of the rectangularly aligned set of diagrams in the lower part of Fig. 5.5. Aside from the μ(C) and Q(K) diagrams, we include in this diagram set a 45° line diagram (with K on both axes) and a phase-space diagram, which is to serve as the repository of the K' = 0 curve being derived. These four diagrams are aligned in such a way that each pair of adjacent diagrams in the same "row" has a common vertical axis, and each pair of adjacent diagrams in the same "column" has a common horizontal axis, except that, in the left column, the C axis and the Q axis are not inherently the same, but are rather to be made the same via a deliberate equating process. It is in order to facilitate this equating process that we have chosen to plot the Q(K) function with the Q variable on the horizontal axis in Fig. 5.5.

Consider, for instance, the case of μ = 0, which occurs on the μ(C) curve at the consumption level CB. Starting from that point, if we go straight down to meet the Q(K) curve (thereby satisfying the condition (5.41)), we reach point G. The vertical height of point G may be interpreted as the Bliss level of K, and we denote it by KB. If we then turn right to meet the 45° line and go straight up into the phase space (thereby translating the vertical height of point G into an equal horizontal distance in the phase space), we complete the formation of a rectangle, and the fourth corner of this rectangle must be a point on the K' = 0 curve. The ordered pair (K = KB, μ = 0), traced out via the dashed rectangle, thus clearly qualifies as a point on the K' = 0 curve in the phase space. Other ordered pairs that qualify as points on the K' = 0 curve can be derived similarly, starting from other selected points on the μ(C) curve. Such a process of rectangle construction yields a K' = 0 curve that slopes downward and cuts the horizontal axis of the phase space at K = KB.

The sketching bars for the K' = 0 curve are vertical, since the stationarity of K forbids any east-west movement on that curve. Those for the μ' = 0 curve are horizontal; but since these sketching bars coincide with the μ' = 0 curve itself, they lie entirely along that curve, and the latter will serve not only as a demarcation curve, but also as the locus of some streamlines.

We now reproduce the K' = 0 curve in Fig. 5.6, and add thereto the μ' = 0 curve. At the intersection of these two curves is the unique equilibrium point E, which is characterized by K = KB and μ = 0, and which therefore corresponds to Bliss.
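The rectangle construction can be imitated numerically: on the K' = 0 curve, Q(K) = C(μ) is equivalent to μ = U'(Q(K)). The functional forms below are my own illustrative assumptions (quadratic utility with Bliss at CB = 5, and Q = 2√K), chosen so that the resulting curve slopes downward and hits μ = 0 at KB.

```python
# Assumed forms: marginal utility U'(C) = 2*(C_B - C) with Bliss at C_B,
# and production function Q(K) = 2*sqrt(K) (no capital saturation).
C_B = 5.0
Up = lambda c: max(2 * (C_B - c), 0.0)   # marginal utility, zero at/after Bliss
Q  = lambda K: 2 * K ** 0.5

def mu_on_curve(K):
    # the point (K, mu) on the K' = 0 demarcation curve: mu = U'(Q(K))
    return Up(Q(K))

K_B = (C_B / 2) ** 2     # capital level at which Q(K) = C_B, here 6.25
pts = [(K, mu_on_curve(K)) for K in (1.0, 2.25, 4.0, 6.25)]
print(pts)               # the mu values fall as K rises

slopes_down = all(pts[i][1] > pts[i + 1][1] for i in range(len(pts) - 1))
print(slopes_down, mu_on_curve(K_B) == 0.0)   # True True: downward, hits axis at K_B
```

Each evaluated pair is one "rectangle": pick a μ on the marginal-utility curve, read off the matching output level, and convert it to the capital stock that produces it.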




The need to place ourselves on a stable branch implies that there exists an optimization rule to be followed, much as a profit-maximizing firm must, in the simple static optimization framework, select its output level according to the rule MC = MR. If the equilibrium were a stable node or a stable focus (the "all roads lead to Rome" situation), there would be no specific rule imposed, which is hardly characteristic of an optimization problem. On the other hand, if the equilibrium were an unstable node or focus, there would be no way to arrive at any target level of the state variable at all. This, again, would hardly be a likely case in a meaningful optimization context. In contrast, the saddle-point equilibrium, with a target that is attainable, but attainable only under a specific rule, fits comfortably into the general framework of an optimization problem.

Capital Saturation Versus Consumption Saturation
The preceding discussion is based on the assumption of consumption saturation at CB, with U(CB) = B and μ(CB) = 0. Even though capital saturation does not exist and Q'(K) remains positive throughout, the economy would nevertheless refrain from accumulating capital beyond the level KB. As a variant of that model, we may also analyze the case in which capital, instead of consumption, is saturated. Specifically, let Q(K) be a strictly concave curve, with Q(0) ≥ 0, and slope Q'(K) ≷ 0 as K ≶ K̃. Then we have capital saturation at K̃. At the same time, let U(C) be an increasing function throughout. Under these new assumptions, it becomes necessary to redraw all the curves in Fig. 5.5 except the 45° line. Consequently, the K' = 0 curve in the phase space may be expected to acquire an altogether different shape. However, it is actually also necessary to replace the horizontal μ' = 0 curve, because with consumption saturation assumed away and with μ > 0 throughout, the horizontal axis in the diagram is now strictly off limits. In short, there will be an entirely new phase diagram to analyze. Instead of presenting the detailed analysis here, we shall leave to the reader the task of deriving the new K' = 0 and μ' = 0 curves and pursuing the subsequent steps of determining the Kμ-arrowheads and the streamlines. However, some of the major features of the new phase diagram may be mentioned here without spoiling the reader's fun: (1) In the new phase diagram, the K' = 0 curve will appear as a U-shaped curve, and the μ' = 0 curve will appear as a vertical line. (2) The new equilibrium point will occur at K = K̃, and the marginal utility corresponding to K̃ (call it μ̃) will now be positive rather than zero.
(3) The equilibrium value of the marginal utility, μ̃, implies a corresponding equilibrium consumption C̃, which, in turn, is associated with a corresponding utility level Ũ. Even though the latter is not the highest conceivable level of utility, it may nevertheless be regarded as "Bliss" in the present context, for, in view of capital saturation, we will never wish to venture beyond K̃.


The sign sequences for K' and μ' can be determined from the derivatives

    ∂K'/∂K = Q'(K) > 0    [from (5.39)]
    ∂μ'/∂μ = −Q'(K) < 0    [from (5.40)]

These yield the (−, 0, +) sequence rightward for K', and the (+, 0, −) sequence upward for μ'. Consequently, the K-arrowheads point eastward in the area to the right of the K' = 0 curve, and they point westward in the area to its left, whereas the μ-arrowheads point southward in the area above the μ' = 0 curve, and they point northward in the area below it. The resulting streamlines again produce a saddle point. Given any initial capital K0, it is thus necessary for us to select an initial marginal utility that will place us on one of the stable branches of the saddle point. A specific illustration, with K0 < KB, is shown in Fig. 5.6, where there is a unique μ0 value, μ0*, that can land us on the "yellow brick road" leading to E. All the other streamlines will result ultimately in either (1) excessive capital accumulation and failure to attain Bliss, or (2) continual decumulation ("eating up") of capital and, again, failure to reach Bliss. As in the Eisner-Strotz model, we may interpret the strict requirement on the proper selection of an initial μ value as tantamount to the imposition of a transversality condition. The fact that both the Eisner-Strotz model and the Ramsey model present us with the saddle-point type of equilibrium is not a mere coincidence. The requirement to place ourselves onto a stable branch merely implies, as discussed earlier, that there exists an optimization rule to be followed.
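The selection of μ0* can be dramatized with a shooting sketch. The dynamics and functional forms below are my own illustrative assumptions, not the text's: K' = Q(K) − C(μ) and μ' = −μQ'(K), with Q = 2√K and C(μ) = CB − μ/2 (the inverse of the marginal utility U' = 2(CB − C)). Initial values of μ that are too low lead to decumulation of capital; values that are too high lead to overaccumulation; bisection traps the unique value that heads for E.

```python
# Assumed simplified-Ramsey dynamics (illustrative forms, not from the text).
C_B = 5.0                      # Bliss consumption
K_B = 6.25                     # capital level at which Q(K_B) = C_B
Q  = lambda K: 2 * K ** 0.5    # production function
Qp = lambda K: 1 / K ** 0.5    # marginal product Q'(K)
C  = lambda mu: C_B - mu / 2   # inverse marginal-utility function

def shoot(mu0, K0=4.0, h=1e-2, T=100.0):
    """Euler-integrate the streamline started at (K0, mu0) and classify it:
    +1 if capital overshoots K_B (mu0 too high), -1 if capital decumulates
    (mu0 too low), 0 if still heading for E after T years."""
    K, mu = K0, mu0
    for _ in range(int(T / h)):
        K, mu = K + h * (Q(K) - C(mu)), mu + h * (-mu * Qp(K))
        if K > K_B + 1e-3:
            return 1
        if K < K0 - 1e-3:
            return -1
    return 0

lo, hi = 0.0, 10.0             # bracket: mu0 = 0 eats capital, mu0 = 10 overshoots
for _ in range(40):
    mid = (lo + hi) / 2
    if shoot(mid) >= 0:
        hi = mid
    else:
        lo = mid
mu0_star = (lo + hi) / 2       # approximate "yellow brick road" initial value

print(shoot(lo) == -1, shoot(hi) >= 0)   # True True: the bracket is preserved
print(0.0 < mu0_star < 10.0)             # True
```

The tiny tolerance bands around KB and K0 are only there to classify numerically divergent paths; the economics is the saddle-point selection itself.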





The magnitude of this adapted Bliss is not inherently fixed, but is dependent on the location of the capital-saturation point on the production function. (4) Finally, the equilibrium is again a saddle point.

Yet another variant of the model arises when the U(C) curve is replaced by a concave curve asymptotic to B, rather than one with a peak at B, as in Fig. 5.3c. In this new case, the μ(C) curve will become a convex curve asymptotic to the C axis. What will happen to the K' = 0 curve now depends on whether the Q(K) curve is characterized by capital saturation. If it is, the present variant will simply reduce to the preceding variant. If not, and if the Q(K) curve is positively sloped throughout, the K' = 0 curve will turn out to be a downward-sloping convex curve asymptotic to the K axis in the phase space. The other curve, μ' = 0, is now impossible to draw because, according to (5.40), μ' cannot attain a zero value unless either μ = 0 or Q'(K) = 0, but neither of these two alternatives is available to us. Consequently, we cannot define an intertemporal equilibrium in this case.








Finite Planning Horizon and Turnpike Behavior
The previous variants of the Ramsey model all envisage an infinite planning horizon. But what if the planning period is finite, say, 100 years or 250 years? Can the phase-diagram technique still be used? The answer is yes. In particular, phase-diagram analysis is very useful in explaining the so-called "turnpike" behavior of optimal time paths when long, but finite, planning horizons are considered.14

In the finite-horizon context, we need to have not only a given initial capital K0, as in the infinite-horizon case, but also a prespecified terminal capital, KT, to make the solution definite. The magnitude of KT represents the amount of capital we choose to bequeath to the generations that outlive our planning period. As such, the choice of KT is purely arbitrary. If we avoid making such a choice and take the problem to be one with a truncated vertical terminal line subject to KT ≥ 0, however, the optimal solution path for K will inevitably dictate that KT = 0, because the maximization of utility over our finite planning period, but not beyond, requires that we use up all the capital by time T. More reasonably, we should pick an arbitrary but positive KT. In Fig. 5.7, we assume that KT > K0.

The specification of K0 and KT is not sufficient to pinpoint a unique optimal solution. In Fig. 5.7a, the streamlines labeled S1, ..., S4 (as well as others not drawn) all start with K0, and are all capable of taking us


FIGURE 5.7


14 See Paul A. Samuelson, "A Catenary Turnpike Theorem Involving Consumption and the Golden Rule," American Economic Review, June 1965, pp. 486-496.

eventually to the capital level KT. The missing consideration that can pinpoint the solution is the length of the planning period, T. Let the T1 curve be the locus of points that represent positions attained on the various streamlines after exactly T1 years have elapsed from t = 0. Such a curve, referred to as an isochrone (equal-time curve), displays a positive slope here for the following reason. Streamline S1, being the farthest away from the K' = 0 curve and hence having the largest K' value among the four, should result in the largest capital accumulation at the end of T1 years (here, KT), whereas streamline S4, at the opposite extreme, should end up at time T1 with the least amount of capital (here,






not much above K0). If the planning period happens to be T1 years, streamline S1 will clearly constitute the best choice, because while all streamlines satisfy the Euler equation, only S1 additionally satisfies the boundary conditions K(0) = K0 and K(T1) = KT, with the indicated journey completed in exactly T1 years.

Streamlines are phase paths, not time paths. Nevertheless, each streamline does imply a specific time path for each of the variables K and p. The time paths for K corresponding to the four streamlines are illustrated in Fig. 5.7b, where, in order to align the K axis with that of Fig. 5.7a, we plot the time axis vertically downward rather than horizontally to the right. In deriving these time paths, we have also made use of other isochrones, such as T2 and T3 (with a larger subscript indicating a longer time). Moving down each streamline in Fig. 5.7a, we take note of the K levels attained at various points of time (i.e., at the intersections with various isochrones). When such information is plotted in Fig. 5.7b, the time paths P1, ..., P4 emerge, with Pi being the counterpart of streamline Si.

For the problem with finite planning horizon T1, the optimal time path for K consists only of the segment K0G on P1. Time path P1 is relevant to this problem because S1 is already known to be the relevant streamline; we stop at point G on P1 because that is where we reach capital level KT at time T1. However, if the planning horizon is pushed outward to T2, then we should forsake streamline S1 in favor of S2. The corresponding optimal time path for K will then be P2, or, more correctly, the K0J segment of it. Other values of T can be analyzed in an analogous manner. In each case, a different value of T will yield a different optimal time path for K. And, collectively, all the finite-horizon optimal time paths for K are different from the infinite-horizon optimal K path, the Ramsey path, which is associated with the stable branch of the saddle point.

With this background, we can discuss the turnpike behavior of finite-horizon optimal paths. First, imagine a family traveling by car between two cities. If the distance involved is not too long, it may be simplest to take some small, but direct, highway for the trip. But if the distance is sufficiently long, then it may be advantageous to take a more out-of-the-way route to get onto an expressway, or turnpike, and stay on it for as long as possible, until an appropriate exit is reached close to the destination city. It turns out that similar behavior characterizes the finite-horizon optimal time paths in Fig. 5.7b. Having specified K0 as the point of departure and KT as the destination level of capital, if we successively extend the planning horizon sufficiently far into the future, then the optimal time path can be made to arch, to any arbitrary extent, toward the Ramsey path or toward the K̄ line, the "turnpike" in that diagram. To see this more clearly, let us compare points M and N. If the planning horizon is T3, the optimal path is segment K0M on P3, which takes us to capital level KT (point M) at time T3. If the horizon is extended to T4, however, we ought to select segment K0N on P4 instead, which takes us to capital level KT (point N) at time T4. The fact that this latter path, with a longer planning period, arches more toward the Ramsey path and the K̄ line serves to demonstrate the turnpike behavior in the finite-horizon model.

The reader will note that the turnpike analogy is by no means a perfect one. In the case of auto travel, the person actually drives on the turnpike. In the finite-horizon problem of capital accumulation, on the other hand, the turnpike is only a standard for comparison, and is not meant to be physically reached. Nevertheless, the graphic nature of the analogy makes it intuitively very appealing.
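Samuelson's result cited in footnote 14 is literally a "catenary" turnpike theorem, and the arching behavior can be seen in the simplest possible variational problem. The sketch below is illustrative rather than the Ramsey model itself: for min ∫0^T (y'² + y²) dt with y(0) = y(T) = 1, the Euler equation is y'' = y, and the extremal is y(t) = cosh(t - T/2)/cosh(T/2). As the horizon T lengthens, the path's middle stretch hugs the "turnpike" y = 0 ever more closely, exactly the behavior described for Fig. 5.7b.

```python
import math

def extremal(t, T):
    # Extremal of min ∫0^T (y'^2 + y^2) dt with y(0) = y(T) = 1.
    # The Euler equation is y'' = y; this catenary-like curve solves it
    # and meets both boundary conditions.
    return math.cosh(t - T / 2) / math.cosh(T / 2)

def midpoint_value(T):
    # Depth of the path's "sag" toward the turnpike level y = 0.
    return extremal(T / 2, T)

# Longer horizons arch closer to the turnpike: the midpoint value shrinks.
for T in (2, 4, 10, 20):
    print(T, midpoint_value(T))
```

The midpoint value is 1/cosh(T/2), which tends to zero as T grows: the longer the horizon, the longer the optimal path travels near the turnpike before exiting to meet its terminal condition.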

1 In the Eisner-Strotz model, describe the economic consequences of not adhering to the stable branch of the saddle point: (a) With K0 as given in Fig. 5.4, what will happen if the firm chooses an initial rate of investment greater than I0*? (b) What if it chooses an initial investment less than I0*?
2 In the Ramsey model with an infinite planning horizon, let there be capital saturation at K̄, but no consumption saturation. (a) Would this change of assumption require any modification of the system (5.40)? Why? (b) Should (5.41) and (5.42) be modified? Why? How? (c) Redraw Fig. 5.5 and derive an appropriate new K' = 0 curve. (d) In the phase space, also add an appropriate new p' = 0 curve.
3 On the basis of the new K' = 0 and p' = 0 curves derived in the preceding problem, analyze the phase diagram of the capital-saturation case. In what way(s) is the new equilibrium similar to, and different from, the consumption-saturation equilibrium in Fig. 5.6? [Hint: The partial derivatives in (5.43) are not the only ones that can be taken.]
4 Show that when depreciation of capital is considered, the net product can have capital saturation even if the gross product Q(K) does not.
5 In the finite-horizon problem of Fig. 5.7a, retain the original K0, but let the terminal capital KT be located to the right of K̄. (a) Would the streamlines lying below the stable branch remain relevant to the problem? Those lying above? Why? (b) Draw a new phase diagram similar to Fig. 5.7a, with four streamlines as follows: S1 (the stable branch), S2, S3, and S4 (three successive relevant streamlines, with S2 being the closest to S1). Using isochrones, derive time paths P1, ..., P4 for the K variable, corresponding to the four streamlines. (c) Is turnpike behavior again discernible?
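Returning to the remark made at the start of this section: if the terminal capital is left free, utility maximization over the finite horizon drives KT to zero. A bare-bones discrete analogue illustrates this. Everything in the sketch is my own illustrative choice, not from the text (two periods, log utility, production Q(K) = sqrt(K), a crude grid search): the best feasible plan always leaves zero terminal capital, since capital left over at the horizon yields no utility within it.

```python
import math

def Q(K):
    # Illustrative production function (concave, positive marginal product)
    return math.sqrt(K)

K0 = 4.0                                   # given initial capital
grid = [i * 0.01 for i in range(0, 201)]   # candidate capital levels in [0, 2]

best = None
for K1 in grid:                # capital carried into period 2
    for K2 in grid:            # capital left over at the horizon (free, K2 >= 0)
        C1 = Q(K0) - K1        # consumption in period 1
        C2 = Q(K1) - K2        # consumption in period 2
        if C1 <= 0 or C2 <= 0:
            continue           # infeasible plan
        U = math.log(C1) + math.log(C2)
        if best is None or U > best[0]:
            best = (U, K1, K2)

print(best[2])  # optimal terminal capital: 0.0
```

The grid search confirms K2 = 0 at the optimum; only if a positive KT is prespecified, as the text recommends, does the plan bequeath capital beyond the horizon.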

5.5 THE CONCAVITY/CONVEXITY SUFFICIENT CONDITION AGAIN

It was shown earlier in Sec. 4.2 that if the integrand function F(t, y, y') in a fixed-endpoint problem is concave (convex) in the variables (y, y'), then the Euler equation is sufficient for an absolute maximum (minimum) of V[y]. Moreover, this sufficiency condition remains applicable when the terminal time is fixed but the terminal state is variable, provided that the supplementary condition (5.44) is satisfied. For the infinite-horizon case, with the state variable denoted by K, the supplementary condition takes the form15

(5.45)  lim (t→∞) [FK' · (K - K*)] ≤ 0

In this condition, (K - K*) represents the deviation of any admissible neighboring path K(t) from the optimal path K*(t), and the FK' term is to be evaluated along the optimal path.

Application to the Eisner-Strotz Model

In the Eisner-Strotz model, the integrand function is strictly concave in the variables (K, K'), as we shall verify; consequently, the Euler equation is sufficient for a unique absolute maximum of the total profit Π(K), provided that the supplementary condition (5.45) is also satisfied. The relevant derivative of the integrand is

(5.46)  FK' = -(2aK' + b)e^(-ρt)   [from (5.16)]

Substituting the K*'(t) expression in (5.23) for the K' term in (5.46), we find that FK' tends to zero as t becomes infinite. Moreover, the assumed form of the quadratic profit function depicted in Fig. 5.1 suggests that, as t becomes infinite, the difference between the K value on any admissible neighboring path and the K* value is bounded. Thus the vanishing of FK' assures us that the supplementary condition (5.45) can be satisfied as an equality.

As to concavity, the second derivatives are

FK'K' = -2ae^(-ρt)    FKK' = FK'K = 0    FKK = -2βe^(-ρt)

where all the parameters are positive. Applying the sign-definiteness test in (4.9), we have

|D1| = FK'K' < 0   and   |D2| = FKK FK'K' - (FKK')² > 0

It follows that the quadratic form q associated with these second derivatives is negative definite, and the integrand function is indeed strictly concave in the variables (K, K').

Application to the Ramsey Model

The integrand function of the (simplified) Ramsey model is F = B - U(C), where C = Q(K) - K'. We assume that U'(C) ≥ 0, U''(C) < 0, Q'(K) > 0, and Q''(K) < 0. From this integrand we find the following first derivatives of F:

(5.47)  FK = -U'(C)Q'(K)   and   FK' = -U'(C)(-1) = U'(C)

By further differentiation, the second derivatives are found to be

(5.48)  FK'K' = U''(C)(-1) = -U''(C)
        FK'K = FKK' = U''(C)Q'(K)
        FKK = -U''(C)[Q'(K)]² - U'(C)Q''(K)

Again applying the sign-definiteness test in (4.9), we find that

|D1| = FK'K' = -U''(C) > 0   and   |D2| = FKK FK'K' - (FK'K)² = U'(C)U''(C)Q''(K) ≥ 0

Since U'(C) is merely assumed to be nonnegative, and is allowed to vanish as we approach Bliss, |D2| is not strictly positive. It remains to examine what kind of convexity can be claimed for F, and to check the supplementary condition appropriate to this minimization problem.

15 A more formal statement of this condition can be found in G. Hadley and M. C. Kemp, Variational Methods in Economics, North-Holland, Amsterdam, 1971, p. 102.
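The sign pattern for the Ramsey integrand can be spot-checked numerically. In the sketch below, the functional forms are my own illustrative choices satisfying the stated assumptions (U(C) = ln C, so U' > 0 > U''; Q(K) = sqrt(K), so Q' > 0 > Q''), with an arbitrary Bliss constant B. Central differences approximate the second derivatives of F = B - U(Q(K) - K') and confirm FK'K' > 0 together with a positive Hessian determinant, i.e., strict convexity of F in (K, K') at the chosen point.

```python
import math

B = 10.0  # illustrative Bliss constant (cancels out of all second derivatives)

def F(K, Kp):
    # Ramsey integrand F = B - U(C), with C = Q(K) - K'.
    # Illustrative choices: U(C) = ln C, Q(K) = sqrt(K).
    C = math.sqrt(K) - Kp
    return B - math.log(C)

def second_derivs(K, Kp, h=1e-4):
    # Central-difference approximations of the Hessian of F in (K, K')
    FKK = (F(K + h, Kp) - 2 * F(K, Kp) + F(K - h, Kp)) / h**2
    FPP = (F(K, Kp + h) - 2 * F(K, Kp) + F(K, Kp - h)) / h**2
    FKP = (F(K + h, Kp + h) - F(K + h, Kp - h)
           - F(K - h, Kp + h) + F(K - h, Kp - h)) / (4 * h**2)
    return FKK, FPP, FKP

FKK, FPP, FKP = second_derivs(K=4.0, Kp=0.5)
print(FPP > 0, FKK * FPP - FKP**2 > 0)  # True True
```

At this point C = 1.5 > 0, so U' = 1/C is strictly positive and the determinant |D2| = U'U''Q'' is strictly positive as well; the text's semidefiniteness caveat bites only where the marginal utility vanishes at Bliss.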

Turning first to convexity: even though we cannot claim that the associated quadratic form q is positive definite and that the F function is strictly convex, it is possible to establish the (nonstrict) convexity of the F function by the sign-semidefiniteness test in (4.12). With reference to the principal minors defined in (4.11), the information in (5.47) and (5.48) shows that all of them are nonnegative, so that q is positive semidefinite. Hence we can conclude that the integrand function F is convex. Given the convexity of the F function, the Euler equation is in this case sufficient for an absolute minimum of the integral ∫[B - U(C)] dt, provided the appropriate supplementary condition is satisfied.

Since the Ramsey problem is one of minimization, the supplementary condition requires us to confirm that

(5.49)  lim (t→∞) [FK' · (K - K*)] ≥ 0

Since FK' = U'(C) ≥ 0, this condition can be satisfied, indeed as an equality, if FK' tends to zero while (K - K*) remains bounded, or vice versa. Whether this is so depends on the shape of the production function. If the Q(K) curve contains a saturation point, then, with the capital-saturation point K̄ now serving to define "Bliss," all admissible paths must end at K̄, so the deviation (K - K*) will be bounded as t → ∞; moreover, as we move toward Bliss and as the marginal utility steadily declines, FK' = U'(C) tends to zero. Condition (5.49) is then satisfied as an equality. Thus, when capital saturation is assumed, (5.49) can be satisfied even if there is no consumption saturation. If, on the other hand, the production function is assumed to have a positive marginal product throughout, the Q(K) curve extends upward indefinitely, and there is no bound to the economically meaningful values of K; it is then unfortunately not possible to state generally that the deviation of K(t) from K*(t) tends to zero, or is bounded. In this light, we see that the assumption of capital saturation helps in the application of the sufficiency condition.

CHAPTER 6

CONSTRAINED PROBLEMS

In the previous discussion, constraints have already been encountered several times, even though not mentioned by name. Whenever a problem has a specified terminal curve, or a truncated vertical or horizontal terminal line, it imposes a constraint that the solution must satisfy. Such constraints, however, pertain to the endpoint. In the present chapter, we concern ourselves with constraints that more generally prescribe the behavior of the state variables.

For instance, the input variables K and L may appear in a model as the state variables, but obviously their time paths cannot be chosen independently of each other, because K and L are linked to each other by technological considerations via a production function. Thus the model should make room for a production-function constraint. While such a constraint can, through substitution, be expunged from the model, it can also be retained and explicitly treated as a constraint. The present chapter explains how this is done.

It is also possible to have a constraint that ties one state variable to the time derivative of another. As a simple example, a model may involve the expected rate of inflation π and the actual rate of inflation p as state variables. The expected rate, π, is subject to revision over time when, in light of experience, it proves to be off the mark either on the up side or on the down side. The revision of the expected rate of inflation may be captured by a constraint such as

π' = j(p - π)

as previously encountered in (2.41).

will be identical with that of F. To demonstrate this. To emphasize the fact that A.y. with (say) m = n. Equality Constraints Let the problem be that of maximizing subject to a set of m independent but consistent constraints ( m < n) g1(t. The Lagrangian integrand 9. n. for every i. or the Euler-Lagrange equation.. then the value of . so = 0 that the m equations in (6. It is a relatively simple matter to ensure the requisite satisfaction of all the constraints. whereas in the static problem the Lagrange-multiplier term is added to the original objective function.has as its arguments not only the usual t. . since F is independent of any A''. . let us first note that the Euler-Lagrange equations relating to the yJ variables are simply .) (6. Second.g ' ) expression.g' = 0 for all t E [0. . . C.2) would already uniquely determine the yJ(t) paths.T] ( J = 1. to be attached to the (c.. m). yJ...y1. .. but as functions of t. ( j = 1 . and there would remain no degree of freedom for any optimizing choice.. each to be subjected to an Euler equation. so that c. this type of constrained dynamic optimization problem ought to contain at least two state variables. =0 or c. we now form a Lagrangian integrand function. as it is sometimes referred to in the present constrained framework. (6... the number of constraints... we have FA. can vary with t .2) is supposed to be satisfied at every point of time in the interval [0.7) - d dt 0 forallt~[O. they are. such as But. . 9. not necessarily the first m of them. in this problem. similarly. n ) [cf. As long as all of the constraints in (6. and y. This is because each constraint g ' in (6.(2.g' = 0 for all i. In line with our knowledge of Lagrange multipliers in static optimization. .7') FA.2) are satisfied. any m of the yJ variables can be used in IJI. but also the multipliers A.. the Lagrange multipliers A. it is meant that there should exist a nonvanishing Jacobian determinant of order m. . 
6.1 FOUR BASIC TYPES OF CONSTRAINTS

Four basic types of constraints will be introduced in this section. As in static optimization problems with constraints, the Lagrange-multiplier method plays an important role in the treatment of constrained dynamic optimization problems.

Equality Constraints

Let the problem be that of maximizing

(6.1)  V = ∫0^T F(t, y1, ..., yn, y1', ..., yn') dt

subject to a set of m independent but consistent constraints (m < n)

(6.2)  g^i(t, y1, ..., yn) = c_i  for all t ∈ [0, T]  (i = 1, ..., m; the c_i are constants)

and appropriate boundary conditions. Note that, in this problem, the number of constraints, m, is required to be strictly less than the number of state variables, n. Otherwise, with (say) m = n, the m equations in (6.2) would already uniquely determine the yj(t) paths, and there would remain no degree of freedom for any optimizing choice. Thus this type of constrained dynamic optimization problem ought to contain at least two state variables before a single constraint can meaningfully be accommodated. By the "independence" of the m constraints, it is meant that there should exist a nonvanishing Jacobian determinant of order m; of course, any m of the yj variables can be used in the Jacobian, not necessarily the first m of them. Note also that each constraint in (6.2) is supposed to be satisfied at every point of time in the interval [0, T], and to each value of t there may correspond a different value of the Lagrange multiplier λi. Thus the Lagrange multipliers can vary with t, and we write them as functions λi(t).

In line with our knowledge of Lagrange multipliers in static optimization, we now form a Lagrangian integrand function 𝓕, augmenting the original integrand F in (6.1) by the Lagrange-multiplier terms:

(6.4)  𝓕 = F + Σ(i=1..m) λi(t)(c_i - g^i)

which we can maximize as if it were an unconstrained problem. Although the structure of this Lagrangian function appears to be identical with that used in static optimization, there are two fundamental differences. First, in the present framework the terms with Lagrange multipliers are added to the integrand function F, not to the objective functional ∫0^T F dt; the replacement of F by 𝓕 in the objective functional gives us the new functional

(6.5)  𝒱 = ∫0^T 𝓕 dt

Second, the Lagrange multipliers λi appear not as constants, but as functions of t. As long as all of the constraints in (6.2) are satisfied, so that c_i - g^i = 0 for all i, the value of 𝒱 will be identical with that of V, and the free extremum of the functional 𝒱 in (6.5) will accordingly be identical with the constrained extremum of the original functional V.

It is a relatively simple matter to ensure the requisite satisfaction of all the constraints: just treat the Lagrange multipliers as additional state variables, by subjecting all the λi variables to the Euler equation, or the Euler-Lagrange equation, as it is sometimes referred to in the present constrained framework. Since F is independent of any λi', the Euler-Lagrange equations for the multipliers,

(6.7')  𝓕_λi - d/dt 𝓕_λi' = 0  for all t ∈ [0, T]  (i = 1, ..., m)

reduce to 𝓕_λi = c_i - g^i = 0 for all t, which merely coincide with the given constraints. In other words, treating the λi as state variables automatically guarantees the satisfaction of (6.2). The Euler-Lagrange equations relating to the yj variables are, similarly,

(6.7)  𝓕_yj - d/dt 𝓕_yj' = 0  for all t ∈ [0, T]  (j = 1, ..., n)

Applied to the n state variables,

the resulting n differential equations will contain 2n arbitrary constants in their general solutions, which may be definitized by the boundary conditions on the state variables.

As a matter of practical procedure for solving the present problem, we can therefore (1) apply the Euler-Lagrange equation to the n state variables yj only, (2) take the m constraints exactly as originally given in (6.2), and (3) use these n + m equations to determine the n + m paths yj(t) and λi(t). Since the constraints are taken as given, we can guarantee the satisfaction of all the constraints, and the solution will be the same as that obtained by the longer route of treating the λi as additional state variables.

EXAMPLE 1 Find the curve with the shortest distance between two given points A = (0, y0, z0) and B = (T, yT, zT), where the curve is required to consist of points (t, y, z) lying on a surface φ(t, y, z) = 0. The distance between the given pair of points is measured by the integral

∫0^T (1 + y'² + z'²)^(1/2) dt

which resembles the integral ∫0^T (1 + y'²)^(1/2) dt for the distance between two points lying in a plane. Thus the problem is to

Minimize  ∫0^T (1 + y'² + z'²)^(1/2) dt
subject to  φ(t, y, z) = 0
and  y(0) = y0,  z(0) = z0,  y(T) = yT,  z(T) = zT

This is a problem with two state variables (n = 2) and one constraint (m = 1). The Lagrangian integrand function is

𝓕 = (1 + y'² + z'²)^(1/2) + λ(t)[0 - φ(t, y, z)]

Partially differentiating 𝓕, we see that

𝓕_y = -λφ_y    𝓕_y' = y'(1 + y'² + z'²)^(-1/2)
𝓕_z = -λφ_z    𝓕_z' = z'(1 + y'² + z'²)^(-1/2)

These derivatives lead to the two Euler-Lagrange equations

-λφ_y - d/dt [y'(1 + y'² + z'²)^(-1/2)] = 0
-λφ_z - d/dt [z'(1 + y'² + z'²)^(-1/2)] = 0

which, together with the constraint φ(t, y, z) = 0, give us three equations to determine the three optimal paths for y, z, and λ.
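When the surface constraint is absent (or trivially satisfied), Example 1 reduces to the classic result that the distance-minimizing curve between two points is a straight line. The following sketch is purely illustrative (the endpoint values and the perturbed comparison path are my own choices, not from the text): it approximates the distance integral for the straight line joining A and B and for a perturbed admissible path, and confirms that the straight line is shorter.

```python
import math

# Illustrative endpoints A = (0, y0, z0), B = (T, yT, zT)
T, y0, z0, yT, zT = 1.0, 0.0, 0.0, 1.0, 2.0

def arc_length(y, z, n=2000):
    # Approximate the distance integral of Example 1 by summing chord lengths
    h = T / n
    total = 0.0
    for i in range(n):
        t0, t1 = i * h, (i + 1) * h
        dy = (y(t1) - y(t0)) / h
        dz = (z(t1) - z(t0)) / h
        total += math.sqrt(1.0 + dy * dy + dz * dz) * h
    return total

def line_y(t): return y0 + (yT - y0) * t / T   # straight-line candidate
def line_z(t): return z0 + (zT - z0) * t / T

def bent_y(t):                                  # perturbed admissible path
    return line_y(t) + 0.3 * math.sin(math.pi * t / T)

straight = arc_length(line_y, line_z)
bent = arc_length(bent_y, line_z)
print(straight < bent)  # True: the straight line is shorter
```

For the straight line the computed length equals the Euclidean distance between A and B, here sqrt(6); any perturbation that respects the endpoints lengthens the path.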

Differential-Equation Constraints

Now suppose that the problem is to maximize (6.1) subject to a consistent set of m independent constraints (m < n) that are differential equations:

g^i(t, y1, ..., yn, y1', ..., yn') = c_i  for all t ∈ [0, T]  (i = 1, ..., m)

Even though the nature of the constraint equations has been changed, we can still follow essentially the same procedure as before. The Lagrangian integrand function is still of the form (6.4), and the Euler-Lagrange equations with respect to the state variables yj are still in the form of (6.7); the similar equations with respect to the Lagrange multipliers λi are again nothing but a restatement of the given constraints. The independence among the m differential-equation constraints means that there should exist a nonvanishing Jacobian determinant of order m; here, any m of the yj' can be used in the Jacobian, not necessarily the first m of them.

Inequality Constraints

When the constraints are characterized by inequalities, such as

g^i(t, y1, ..., yn) ≤ c_i  (i = 1, ..., m)

the Euler-Lagrange equations for the yj variables, and the appropriate boundary conditions, are no different from before. However, the corresponding equations with respect to the Lagrange multipliers must be duly modified to reflect the inequality nature of the constraints. To ensure that all the λi(c_i - g^i) terms vanish in the solution (so that the optimized values of 𝓕 and F are equal), we need a complementary-slackness relationship between the ith multiplier and the ith constraint, for every i (a set of m equations):

(6.10)  λi(t)(c_i - g^i) = 0  for all t ∈ [0, T]  (i = 1, ..., m)

This complementary-slackness relationship guarantees that (1) whenever the ith Lagrange multiplier is nonzero, the ith constraint will be satisfied as a strict equality, and (2) whenever the ith constraint is a strict inequality, the ith Lagrange multiplier will be zero. It is this relationship that serves to maintain the identity between the optimal value of the original integrand F and that of the modified integrand 𝓕 in (6.4). Since inequality constraints are much less stringent than equality constraints, the inequality constraints as a group will not uniquely determine the yj paths, and hence will not eliminate all degrees of freedom from our choice problem. For this reason, there is no need to stipulate that m < n; even if the number of constraints exceeds the number of state variables, freedom of optimizing choice is not ruled out. However, the inequality constraints do have to be consistent with one another, as well as with the other aspects of the problem.

Isoperimetric Problem

The final type of constraint to be considered here is the integral constraint, as exemplified by the equation

(6.11)  ∫0^T G(t, y, y') dt = k

One of the earliest problems involving such a constraint is that of finding the geometric figure with the largest area that can be enclosed by a curve of some specified length. Since all the figures admissible in the problem must have the same perimeter, the problem is referred to as the isoperimetric problem. In later usage, this name has been extended to all problems that involve integral constraints, that is, to any problem of the general form

Maximize  ∫0^T F(t, y1, ..., yn, y1', ..., yn') dt
subject to  ∫0^T G^i(t, y1, ..., yn, y1', ..., yn') dt = k_i  (i = 1, ..., m)
and appropriate boundary conditions

Two characteristics distinguish isoperimetric problems from other constrained problems. First, the constraint in (6.11) is intended, not to restrict the y value at every point of time, but to force the integral of some function G to attain a specific value. In some sense, therefore, the constraint is more indirect, and freedom of optimizing choice is not ruled out even when the number of constraints is large; there is again no need to require m < n. The other characteristic is that the solution values of the Lagrange multipliers λi(t) are all constants, so that they can simply be written as λi.

To solve this kind of problem, let us first consider the case of a single state variable y and a single integral constraint (m = n = 1):

Maximize  ∫0^T F(t, y, y') dt
subject to  ∫0^T G(t, y, y') dt = k
and appropriate boundary conditions

Let us define a function

(6.13)  Γ(t) = ∫0^t G(τ, y, y') dτ

Note that the upper limit of integration here is not a specific value T, but the variable t itself; this integral is an indefinite integral and hence a function of t. It is clear that Γ(t) measures the accumulation of G from time 0 to time t. At t = 0 and t = T, respectively, we have

Γ(0) = 0   and   Γ(T) = ∫0^T G dt = k   [by (6.11)]

Moreover, we note from (6.13) that the derivative of the Γ(t) function is simply the G function. So we can write

G(t, y, y') - Γ'(t) = 0

which is seen to conform to the general structure of the differential constraint g(t, y, y') = c, with g = G - Γ' and c = 0. Thus we have in effect transformed the given integral constraint into a differential-equation constraint, and we can then use the previously discussed approach of solution. Accordingly, we may write the Lagrangian integrand as

𝓕 = F + λ(t)[Γ'(t) - G]

We note that 𝓕 involves an extra variable Γ, although the latter enters into 𝓕 only in its derivative form Γ'(t). This extra variable, whose status is the same as that of a state variable, must also be subjected to an Euler-Lagrange equation. Thus there arise two parallel conditions in this problem, one for y and one for Γ. For the latter, since

𝓕_Γ = 0   and   𝓕_Γ' = λ(t)

the Euler-Lagrange equation 𝓕_Γ - d/dt 𝓕_Γ' = 0 reduces to the condition

d/dt λ(t) = 0,   that is,   λ(t) = constant

This then validates our earlier claim about the constancy of λ in the isoperimetric problem. In view of this, we may write the Lagrange multiplier of the isoperimetric problem simply as λ; the constant λ can be assigned the economic interpretation of a shadow price, the value of which can be determined from the isoperimetric constraint.

But a moment's reflection will tell us that the Euler-Lagrange equation for y could equally have been obtained from a modified (abridged) version of the Lagrangian without the Γ(t) term in it, inasmuch as 𝓕 is independent of Γ. Knowing that λ is a constant, we can as a practical procedure use the modified Lagrangian integrand

(6.20)  F̄ = F - λG

instead of 𝓕, and then apply the Euler-Lagrange equation to y alone, knowing that the Euler-Lagrange equation for λ would merely tell us that λ is a constant. The curious reader may be wondering why the Lagrange-multiplier term λG enters in (6.20) with a minus sign. The answer is that this follows directly from the standard way we write the Lagrangian integrand, where the term λ(c - g) = λc - λg has a minus sign attached to λg; here c = 0 and g contains G. The foregoing procedure can easily be generalized to the n-state-variable, m-integral-constraint case:

(6.21)  F̄ = F - (λ1G¹ + ... + λmG^m)   (the λi are all constants)

EXAMPLE 2 Find a curve AB, passing through two given points A = (0, y0) and B = (T, yT) in the ty plane and having a given length k, that maximizes the area under the curve, as illustrated in Fig. 6.1. The problem is to

Maximize  V = ∫0^T y dt
subject to  ∫0^T (1 + y'²)^(1/2) dt = k
and  y(0) = y0,  y(T) = yT

For the problem to be meaningful, the constant k shall be assumed to be larger than L, the length of the line segment AB. (If k < L, the curve AB in question is impossible to draw, and the problem has no solution; if k = L, then the curve AB can only be drawn as a straight line, and no optimizing choice exists.)

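The isoperimetric principle behind Example 2 can be spot-checked numerically even before solving the problem. The sketch below is entirely illustrative (endpoints A = (0, 0), B = (2, 0) and common length k = 2·sqrt(2) are my own choices): it compares the area under a circular arc with the area under a triangular path of exactly the same length between the same endpoints, and the arc wins.

```python
import math

# Endpoints A = (0, 0), B = (2, 0): chord length 2. Common curve length k.
k = 2 * math.sqrt(2)

# Triangular path through an apex (1, h): length 2*sqrt(1 + h^2) = k.
h = math.sqrt((k / 2) ** 2 - 1)   # here h = 1
area_triangle = h                  # area of triangle with base 2, height h

# Circular arc: half-angle theta satisfies R*sin(theta) = 1 (half-chord)
# and arc length 2*R*theta = k, hence sin(theta)/theta = 2/k.
# sin(theta)/theta decreases on (0, pi), so solve by bisection.
lo, hi = 1e-9, math.pi - 1e-9
for _ in range(200):
    mid = (lo + hi) / 2
    if math.sin(mid) / mid > 2 / k:
        lo = mid   # theta too small: ratio still above target
    else:
        hi = mid
theta = (lo + hi) / 2
R = 1 / math.sin(theta)

# Area between chord and arc (circular segment): R^2 (theta - sin t cos t)
area_arc = R**2 * (theta - math.sin(theta) * math.cos(theta))

print(area_arc > area_triangle)  # True: the arc encloses more area
```

With both endpoints on the t axis, the area between chord and curve is exactly the area under the curve, so the comparison matches Example 2's objective; the arc's advantage (about 1.26 versus 1.00 here) previews the circular-arc solution derived next.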
To find the necessary condition for the extremal, we first write the Lagrangian integrand à la (6.20):

(6.22)  F̄ = y - λ(1 + y'²)^(1/2)

Since this does not explicitly contain t as an argument, we may take advantage of formula (2.21) and write the Euler-Lagrange equation in a form that has already been partly solved:

(6.23)  F̄ - y'F̄_y' = c   (c = arbitrary constant)

Since F̄_y' = -λy'(1 + y'²)^(-1/2), (6.23) can be expressed more specifically as

y - λ(1 + y'²)^(1/2) + λy'²(1 + y'²)^(-1/2) = c

Via a series of elementary operations to be described, this equation can be solved for y'. Transposing terms and simplifying, we first obtain

(6.23')  y - c = λ(1 + y'²)^(-1/2)

Squaring both sides and rearranging gives y'² = [λ² - (y - c)²]/(y - c)². The square root of this yields the following expression of y' in terms of λ, y, and c:

(6.23")  y' = dy/dt = [λ² - (y - c)²]^(1/2)/(y - c)

which the reader will recognize as a nonlinear differential equation with separable variables, solvable by integrating each side in turn. Separating the variables, the last result may alternatively be written as

(y - c) dy/[λ² - (y - c)²]^(1/2) = dt

The integral of dt, the right-hand-side expression, is simply t plus a constant. For the left-hand-side expression, let x = y - c, so that dx = dy; according to standard tables of integrals (e.g., Formula 204 in CRC Standard Mathematical Tables, 28th ed., CRC Press, Boca Raton, FL, 1987), this integral is equal to -[λ² - (y - c)²]^(1/2) plus a constant. Thus, by equating the two integrals, consolidating the two constants of integration into a single symbol c2, and renaming the earlier constant c as c1, we have

-[λ² - (y - c1)²]^(1/2) = t - c2

Squaring both sides and transposing, we finally obtain the general solution

(t - c2)² + (y - c1)² = λ²

Since this is the equation for a family of circles, with center (c2, c1) and radius λ, the desired curve AB must be an arc of a circle. The specific values of the constants c1, c2, and λ can be determined from the boundary conditions and the constraint equation.

EXERCISE 6.1
1 Prepare a summary table for this section, with a column for each of the following: (a) type of constraint; (b) mathematical form of the constraint; (c) is m < n required, or is m ≥ n admissible? (d) are the λi varying or constant over time?
2 Work out all the steps leading from (6.23) to (6.23").
3 Find the extremal (general solution) of V = ∫0^T y'² dt, subject to ∫0^T y dt = k.
4 Find the y(t) and z(t) paths (general solutions) that give an extremum of the functional ∫(y'² + z'²) dt, subject to y - z' = 0.
5 With reference to Fig. 6.1, find a curve AB, with a given length k and passing through the given points A and B, that produces the largest area between the curve AB and the line segment AB.
6 Find a curve AB with the shortest possible length that passes through the two given points A and B and has the area under the curve equal to a given constant K. [Note: This problem serves to illustrate the principle of reciprocity in isoperimetric problems, which states that the problem of maximum area enclosed by a curve of a given perimeter and the problem of minimum perimeter for a curve enclosing a given area are reciprocal to each other and share the same extremal.]

6.2 SOME ECONOMIC APPLICATIONS REFORMULATED

While the Lagrange-multiplier method can play a central role in constrained problems, it is also possible, and sometimes even simpler, to handle constrained problems by substitution and elimination of variables. Actually, most of the economic models discussed previously contain constraints, and in the solution process we substituted out certain variables. In the present section, to illustrate the method of Lagrange multipliers, we shall reconsider the Ramsey model and the inflation-unemployment tradeoff model. The results will, of course, be identical regardless of the method used.

The Ramsey Model

In the Ramsey model (Sec. 5.3), the integrand function in the objective functional is F = B - U(C) + D(L). Previously, the F function was considered to contain only the variables K, L, and K', because we had substituted out the variable C by means of the relation C = Q(K, L) - K'. When we took the derivatives FK and FK', the chain rule was used to yield the results

FK = -U'(C)QK = -pQK   and   FK' = -U'(C)(-1) = p   [p ≡ U'(C)]

The use of this substitution reduces C to an intermediate variable that disappears from the final scene. To illustrate the method of Lagrange multipliers, let us instead treat C as another variable on the same level as K and L. We then recognize the constraint

(6.25)  Q(K, L) - K' - C = 0

and reformulate the problem as one with an explicit constraint:

Minimize  ∫0^∞ [B - U(C) + D(L)] dt
subject to  Q(K, L) - K' - C = 0
and  appropriate boundary conditions

Following (6.4), we write the Lagrangian integrand function as

(6.26)  𝓕 = B - U(C) + D(L) + λ[-Q(K, L) + K' + C]

Since we now have three variables (C, K, L) and one Lagrange multiplier λ (not a constant), there should be altogether four Euler-Lagrange equations:

(6.27)  𝓕_C = -U'(C) + λ = 0
(6.28)  𝓕_L = D'(L) - λQL = 0
(6.29)  𝓕_K - d/dt 𝓕_K' = -λQK - λ' = 0
(6.30)  𝓕_λ = -Q(K, L) + K' + C = 0

Condition (6.27) tells us that the Lagrange multiplier λ is equal to p, the symbol we have adopted for the marginal utility of consumption. Condition (6.28) is then identical with the corresponding labor condition obtained earlier by substitution, and condition (6.29), which can be written as p' = -pQK, conveys the same information as the system (5.40). Lastly, (6.30) merely restates the constraint. Thus the use of the Lagrange-multiplier method produces precisely the same conclusions as before.

The Inflation-Unemployment Tradeoff Model

The inflation-unemployment tradeoff model of Sec. 2.5 will now be reformulated as a problem with two constraints. For notational simplicity, we shall use the symbol y to denote Yf - Y (the deviation of the current national income Y from its full-employment level Yf). The expectations-augmented Phillips relation (2.40) and the adaptive-expectations equation (2.41), which were previously used to substitute out y and p in favor of the variable π, will now be accepted as two constraints:

g¹(t, y, p, π) = βy + p - π = 0   [from (2.40)]
g²(t, y, p, π, π') = -jp + jπ + π' = 0   [from (2.41)]

simple equality, and the other constraint involves a differential equation:

(6.31)    Minimize    ∫₀ᵀ (y² + αp²)e^(-ρt) dt
          subject to  -βy - p + π = 0
                      -jp + jπ + π' = 0
          and         boundary conditions

The Lagrangian integrand function is

(6.32)    𝓕 = (y² + αp²)e^(-ρt) + λ₁(-βy - p + π) + λ₂(-jp + jπ + π')

with three variables y, p, and π and two (nonconstant) Lagrange multipliers λ₁ and λ₂. The Euler-Lagrange equations, five in all, are (6.33) through (6.37). To solve these equations simultaneously requires quite a few steps. One way of doing it is the following. First, solve (6.36) for y, and solve (6.37) for the variable p. Next, substitute these results to obtain the λ₁ and λ₂ expressions, so that the λ₁, λ₂, and dλ₂/dt expressions of the last three equations can be substituted into (6.35) to get a single summary statement of the five simultaneous Euler-Lagrange equations. After combining and canceling terms, the complicated expressions reduce to the surprisingly simple result (6.43). This result qualifies as the condensed statement of conditions (6.33) through (6.37) because all five equations have been used in the process and incorporated into (6.43). Since (6.43) is identical with the earlier result (2.46), we have again verified that the Lagrange-multiplier method leads to the same conclusion. Note, however, that this time the Lagrange-multiplier method entails more involved calculations. This shows that sometimes the more elementary elimination-of-variable approach may work better.

EXERCISE 6.2

1  The constraint in the Ramsey model could have been equivalently written as C - Q(K, L) + K' = 0. Write the new Lagrangian integrand function and the new Euler-Lagrange equations. How do the analytical results differ from those in (6.27) through (6.30)? [Hint: When working out this problem, it may prove convenient to denote the Lagrange multiplier by μ.]
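The equality between the multiplier and the marginal utility of consumption asserted in (6.27) can be illustrated on a toy discrete analogue. The sketch below is purely illustrative: the two-period allocation, the log utility, and the endowment W are assumptions introduced here, not part of the models above. It checks that the substitution method and the multiplier method locate the same optimum, with the multiplier equal to U'(C):

```python
import math

# Toy discrete analogue of the interpretation lambda = U'(C) in (6.27):
# maximize U(C1) + U(C2) subject to C1 + C2 = W, with U(C) = ln C.
# (The two-period setup, log utility, and W are illustrative assumptions.)
W = 10.0

# Substitution method: eliminate C2 = W - C1 and maximize over a grid of C1.
def objective(c1):
    return math.log(c1) + math.log(W - c1)

c1_star = max((i / 1000 * W for i in range(1, 1000)), key=objective)

# Multiplier method: the first-order conditions U'(C1) = U'(C2) = lambda,
# combined with the constraint, give C1 = C2 = W/2 and lambda = U'(W/2).
lam = 1.0 / (W / 2)          # U'(C) = 1/C for log utility

print(round(c1_star, 2))     # both methods put C1 at W/2 = 5.0
print(lam)                   # the multiplier equals marginal utility, 0.2
```

Both routes produce the same consumption plan, and the multiplier read off from the first-order conditions is exactly the marginal utility of consumption at the optimum.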
Since (6.35) to get a single summary statement of the five simultaneous Euler-Lagrange equations.34) and solving for A.40) into (6. and solve for A .38) results in the the A . Substitution of (6. and A. with general functions) that the Lagrange-multiplier method shows its powerfulness in the best light. we can substitute the A...

2  The Eisner-Strotz model of Sec. 5.2 can be reformulated as a constrained problem:

       Maximize    ∫₀^∞ (π - C)e^(-ρt) dt
       subject to  the appropriate constraint and boundary conditions

   (a) How many variables are there in this problem? How many constraints?
   (b) Write the Lagrangian integrand function.
   (c) Write the Euler-Lagrange equations and solve them. Compare the result with (5.17).

6.3  THE ECONOMICS OF EXHAUSTIBLE RESOURCES

In the discussion of production functions, there is usually a presumption that all the inputs are inexhaustible. In reality, however, certain resources, such as oil and minerals, are subject to ultimate exhaustion. In the absence of guaranteed success in the discovery of new deposits or substitute resources, the prospect of eventual exhaustion must be taken into account when such a resource is put into use. The optimal production (extraction) of an exhaustible resource over time offers a good illustration of the isoperimetric problem.

The Hotelling Model of Socially Optimal Extraction

In a classic article by Harold Hotelling,³ the notion of "the social value" of an exhaustible resource is used for judging the desirability of any extraction pattern of the resource. The gross value to society of a marginal unit of output or extraction of the resource is measured by the price society is willing to pay to call forth that particular unit of output, and the net value to society is the gross value less the cost of extracting that unit. If the price of the resource, P, is negatively related to the quantity demanded, then the gross social value of an output Q₀ is the area under the demand curve up to Q₀, measured by the shaded area in Fig. 6.2, or the integral ∫₀^(Q₀) P(Q) dQ. Generalizing from the specific output level Q₀ to any output level Q, and subtracting from the gross social value the total cost of extraction C(Q), we obtain the net social value

(6.44)    N(Q) = ∫₀^Q P(Q) dQ - C(Q)

To avoid any confusion caused by using the same symbol Q as the variable of integration as well as the upper limit of integration, we can alternatively write (6.44) as

(6.44')   N(Q) = ∫₀^Q P(q) dq - C(Q)

FIGURE 6.2

Assuming that the total stock of the exhaustible resource is given at S₀, we have the problem of finding an extraction path Q(t) so as to

(6.45)    Maximize    ∫₀^∞ N(Q)e^(-ρt) dt
          subject to  ∫₀^∞ Q dt = S₀

This, of course, is an isoperimetric problem. Since it is reasonable to expect the net-social-value function to have an upper bound, the improper integral in the objective functional should be convergent. By (6.20), the Lagrangian integrand is

(6.46)    𝓕 = N(Q)e^(-ρt) - λQ        (λ = constant)

³Harold Hotelling, "The Economics of Exhaustible Resources," Journal of Political Economy, April 1931, pp. 137-175. In Hotelling's paper, the extraction cost is subsumed under the symbol P, which is interpreted as the "net price" of the exhaustible resource; the C(Q) term does not appear in his treatment.
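The construction of N(Q) in (6.44) can be made concrete with a hypothetical parameterization. The linear inverse demand P(q) = a - bq and the linear cost C(Q) = cQ below are assumptions for illustration only, not taken from Hotelling; the sketch evaluates the gross-social-value integral both in closed form and by numerical quadrature, then subtracts the extraction cost:

```python
# Net social value N(Q) = integral_0^Q P(q) dq - C(Q), as in (6.44).
# Hypothetical specification: P(q) = a - b*q (inverse demand), C(Q) = c*Q.
a, b, c = 20.0, 2.0, 4.0

def P(q):
    return a - b * q

def N_closed(Q):
    # integral of (a - b*q) from 0 to Q is a*Q - b*Q^2/2
    return a * Q - b * Q * Q / 2 - c * Q

def N_numeric(Q, steps=100000):
    # trapezoidal approximation of the gross-social-value integral
    h = Q / steps
    gross = sum((P(i * h) + P((i + 1) * h)) / 2 * h for i in range(steps))
    return gross - c * Q

Q0 = 5.0
print(N_closed(Q0))                                 # 20*5 - 2*25/2 - 4*5 = 55.0
print(abs(N_closed(Q0) - N_numeric(Q0)) < 1e-6)     # True
```

The closed form and the quadrature agree, confirming that N(Q) is simply the area under the demand curve net of total extraction cost.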

Note that the integrand 𝓕 does not contain the derivative of Q. Hence the Euler-Lagrange equation reduces in the present case to the condition 𝓕_Q = 0, and its application yields the condition

(6.47)    N'(Q)e^(-ρt) - λ = 0

Upon applying the differentiation formula (2.8) to N(Q) in (6.44'), we find N'(Q) = P(Q) - C'(Q), so that, by slightly rearranging the terms, we can rewrite (6.47) as

(6.47")   P(Q) - C'(Q) = λe^(ρt)    [social optimum]

Along the optimal extraction path, the difference between the price of the exhaustible resource and its marginal cost of extraction must grow exponentially at the rate ρ. Since λ is a constant for an isoperimetric problem, we can also reinterpret the condition as requiring that the value of P(Q) - C'(Q) associated with any point of time have a uniform present value; from the last equation, it is also clear that λ has the connotation of "the initial value of P(Q) - C'(Q)." If the P(Q) and C(Q) functions are specific, we can solve (6.47") for Q in terms of λ and t, which, when substituted into the constraint in (6.45), then enables us to solve for λ. Note, however, that because 𝓕 does not contain Q', the extremal to be obtained from the Euler-Lagrange equation may not fit given fixed endpoints; no difficulty arises if no rigid endpoints are imposed.

Pure Competition Versus Monopoly

One major conclusion of the Hotelling paper is that pure competition can yield an extraction path identical with the socially optimal one, whereas a monopolistic firm will adopt an extraction path that is more conservationistic, but socially suboptimal. Assume for simplicity that there are n firms under pure competition. The ith firm extracts at a rate Q_i out of a known total stock S_i under its control. The firm's problem is to maximize its total discounted profits over time, taking the product price to be exogenously set at P₀:

(6.48)    Maximize    ∫₀^∞ [P₀Q_i - C_i(Q_i)]e^(-ρt) dt
          subject to  ∫₀^∞ Q_i dt = S_i

With the Lagrangian integrand 𝓕 = [P₀Q_i - C_i(Q_i)]e^(-ρt) - λ_iQ_i, the application of the Euler-Lagrange equation yields the condition

(6.49)    P₀ - C_i'(Q_i) = λ_ie^(ρt)    [pure competition]

This condition is perfectly consistent with that for the social optimum, with λ_i now representing the initial value of the difference between the price and the marginal cost of extraction.

In contrast, the problem of monopolistic profit maximization is

(6.50)    Maximize    ∫₀^∞ [R(Q) - C(Q)]e^(-ρt) dt
          subject to  ∫₀^∞ Q dt = S₀

The Lagrangian integrand now becomes 𝓕 = [R(Q) - C(Q)]e^(-ρt) - λQ, and the Euler-Lagrange equation leads to the condition

(6.51)    R'(Q) - C'(Q) = λe^(ρt)    [monopoly]

This time, it is the difference between the marginal revenue (rather than the price) and the marginal cost that is to grow at the rate ρ, which differs from the rule for socially optimal extraction.

The Monopolist and Conservation

The conclusion of the Hotelling paper is, however, not only that monopolistic production of an exhaustible resource is suboptimal, but also that it is specifically biased toward excessive conservationism. How valid is the latter claim? The answer is: It is valid under certain circumstances, but not always. We shall now look into this aspect of the problem.

The issue has been examined by various people using different specific assumptions and, as one might expect, differences in assumptions result in different conclusions. In a paper by Stiglitz,⁴ it is shown that if the elasticity of demand increases with time (with the discovery of substitutes), or if the cost of extraction is constant per unit of extraction but decreases with time (with better technology), then the monopolist tends to be more conservationistic than under the social optimum. But the opposite conclusion is shown to prevail in a paper by Lewis, Matthews, and Burness,⁵ under the assumption that the extraction cost does not vary with the rate of extraction (capital costs, leasing fees, etc., are essentially fixed costs), and that the elasticity of demand increases with consumption (sufficiently low prices can attract bulk users at the margin to switch from substitutes).

In one of Stiglitz's submodels (6.54), the monopolist faces an (inverse) demand function with elasticity of demand 1/(1 - a), and a per-unit cost of extraction that is constant at any given time but can decline over time; the monopolist's dynamic optimization problem is to maximize total discounted profit subject to the stock constraint ∫₀^∞ Q dt = S₀. The Lewis-Matthews-Burness model (6.55), on the other hand, uses a general (inverse) demand function P(Q) that is stationary, although its elasticity of demand is assumed to increase with consumption; since the extraction cost is a fixed cost, the profit term reduces to the sales revenue P(Q)Q. The Euler-Lagrange equation is applicable to each of these two problems in a manner similar to the Hotelling model. Instead of analyzing problems (6.54) and (6.55) separately, however, we shall work with a single, more general formulation that consolidates the considerations present in (6.54) and (6.55). Let the demand function be expressed as

(6.56)    Q = D(P)e^(gt)

These functions include those used by Stiglitz and by Lewis, Matthews, and Burness as special cases. In the following, we shall use the standard abbreviations MR = R'(Q) and MC = C'(Q) for marginal revenue and marginal cost, as well as the notation r_x = (dx/dt)/x for the rate of growth of any variable x. The condition for social optimum that arises from the Euler-Lagrange equation requires that r_(P-MC) = ρ, and the monopoly condition (6.51) requires that r_(MR-MC) = ρ. From each condition we can deduce an optimal rate of growth of Q. First take the natural log of both sides of (6.56) to get ln Q = gt + ln D(P); then derive r_Q by differentiating ln Q with respect to t, which links r_Q to r_P via the elasticity of demand: r_Q = g - Er_P (where E = |ε|). Comparing the monopolist's optimal rate of growth of Q with the rate dictated by the social optimum will then reveal whether the monopolist is excessively or insufficiently conservationistic.

⁴Joseph E. Stiglitz, "Monopoly and the Rate of Extraction of Exhaustible Resources," American Economic Review, September 1976, pp. 655-661.
⁵Tracy R. Lewis, Steven A. Matthews, and H. Stuart Burness, "Monopoly and the Rate of Extraction of Exhaustible Resources: Note," American Economic Review, March 1979, pp. 227-230.
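A minimal numerical sketch of the socially optimal path implied by (6.47") can be built under an assumed linear demand, a constant marginal extraction cost, and a finite horizon T. All parameter values, and the finite horizon itself, are illustrative assumptions, not specifications from the text. Inverting the net-price condition gives Q(t), and the stock constraint pins down the constant λ:

```python
import math

# Socially optimal extraction under (6.47''): P(Q) - C'(Q) = lam * e^(rho*t).
# Hypothetical specification: P(Q) = a - b*Q, C(Q) = c*Q, stock S0, horizon T.
a, b, c = 20.0, 1.0, 4.0
rho, T, S0 = 0.05, 10.0, 100.0

# Inverting the net-price condition: Q(t) = (a - c - lam*e^(rho*t)) / b.
# The stock constraint integral_0^T Q dt = S0 then fixes lam in closed form:
lam = ((a - c) * T - b * S0) * rho / (math.exp(rho * T) - 1)

def Q(t):
    return (a - c - lam * math.exp(rho * t)) / b

# Check 1: the stock constraint holds (trapezoidal integration).
n = 100000
h = T / n
used = sum((Q(i * h) + Q((i + 1) * h)) / 2 * h for i in range(n))
print(abs(used - S0) < 1e-6)       # True

# Check 2: the net price P(Q) - c grows at exactly the rate rho.
net = lambda t: (a - b * Q(t)) - c
print(abs(net(3.0) / net(2.0) - math.exp(rho)) < 1e-9)   # True
```

The second check is the Hotelling rule itself: between any two dates one unit of time apart, the net price grows by the factor e^ρ, so its present value is uniform over time.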

. let E'(Q) > 0-a case considered by Lewis. relative to the social optimizer.62). Condition (6. the discovery of substitutes.64) with (6. we now find (6. and then substituting into (.63) rMR 1 dMR -- 1 MR dt dP/dt -P + P ( 1 . This set of conditions-also considered by Stiglitz-is quite stringent. We have reached the same conclusion here as Stiglitz. Retaining the assumption that rE = 0. 6.63). although the assumptions are different. so that rgs < rQmat any level of Q.66) rQs=g-E[p(l-%)] and rpm=g-E we may derive another expression for rMRfrom (6. which is in no Since P > MR. If the demand for the exhaustible resource is stationary ( g = O).0. E must vary accordingly.@ 6&. must be characterized by a more gentle negative slope. to be (6. Recall. namely. we now have a nonzero rate of growth of E: . To satisfy this constraint. which = may be rewritten as O (3 t FIGURE 6 . Assume again that MC = rMc = 0. The social optimizer and the monopolist now follow different Q ( t ) paths. but if the conditions are met. Matthews. Then.58) into (6. = the marginal cost of extraction has to be time invariant [CtQ= 0 in (6.+ prMc] y) [social optimum] The counterpart expression for rQ. say. even though E is not in itself a function of t. With these assumptions.61) and (6. we then find the socially optimal rate of growth of Q. Nevertheless. Note that the -Ep term is negative and implies a declining Q over time.. that both paths must have the same total area under the curve.5711. since MR and P are related to each other by the equation way overconservationistic. And the condition r~ = 0 is met when the demand elasticity is constant over time. So. the monopolist indeed emerges in the present case as a conservationist. then rQ must be negative.0. solving for rp.154 PART 2: THE CALCULUS OF VARIATIONS CHAPTER 6: CONSTRAINED PROBLEMS Substituting (6.60) reveals that we can have The situation of MC = 0 occurs when extraction is costless.g.E p 1 - [( MC .lrE f): $51 Equating (6.59). 
Using a familiar formula for the rate of growth of a difference,⁶ the social-optimum condition r_(P-MC) = ρ can be solved for r_P and translated, via r_Q = g - Er_P, into the socially optimal rate of growth of Q, denoted by r_Qs (s for social optimum):

(6.60)    r_Qs = g - E[ρ(1 - MC/P) + (MC/P)r_MC]    [social optimum]

The counterpart expression for r_Qm (m for monopoly) can be derived by a similar procedure. Since MR and P are related to each other by the equation

(6.62)    MR = P(1 - 1/E)

we may derive an expression for r_MR:

(6.63)    r_MR = (1/MR)(dMR/dt) = r_P + r_E/(E - 1)

Retaining for the present the assumption that r_E = 0 (the demand elasticity is constant over time), we have r_MR = r_P, and the monopoly counterpart of (6.60) is

(6.64)    r_Qm = g - E[ρ(1 - MC/MR) + (MC/MR)r_MC]    [monopoly]

A comparison of (6.64) with (6.60) reveals that we can have identical or divergent extraction paths, depending on MC. The situation of MC = 0 occurs when extraction is costless. With MC = r_MC = 0, both (6.60) and (6.64) reduce to r_Q = g - Eρ. If the demand for the exhaustible resource is stationary (g = 0), or shifts downward over time (g < 0) as a result of, say, the discovery of substitutes, then r_Q must be negative: the -Eρ term implies a declining Q over time, unless offset by a sufficiently large positive value of g. In this case, the monopolist will follow exactly the same extraction path as called for under the social optimum. This set of conditions, also considered by Stiglitz, is quite stringent; but if the conditions are met, the monopolist is in no way overconservationistic.

A different outcome emerges, however, when there is a positive MC that is output-invariant (MC = k > 0) as well as time-invariant (r_MC = 0). Then (6.60) and (6.64) specialize to

(6.66)    r_Qs = g - E[ρ(1 - k/P)]    and    r_Qm = g - E[ρ(1 - k/MR)]

Since P > MR, it follows that k/P is less than k/MR, so that r_Qs < r_Qm at any level of Q. The social optimizer and the monopolist now follow different Q(t) paths. Recall, however, that both paths must have the same total area under the curve, namely S₀. Relative to the social optimizer, the monopolist's Q(t) path must therefore be characterized by a more gentle negative slope, with the larger r_Q value (e.g., -0.03 as against -0.05), as illustrated by the solid curve in Fig. 6.3. Since such a curve entails lower initial extraction and higher later extraction than the broken curve, the monopolist indeed emerges in the present case as a conservationist. We have reached the same conclusion here as Stiglitz, although the assumptions are different.

FIGURE 6.3

But let the elasticity of demand E vary with the rate of extraction Q; in particular, let E'(Q) > 0, a case considered by Lewis, Matthews, and Burness. As Q varies over time, E must vary accordingly, so that, even though E is not in itself a function of t, we now have a nonzero rate of growth of E:

(6.67)    r_E = [E'(Q)Q/E]r_Q

Assume again that MC = r_MC = 0.

⁶See Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, Sec. 10.7.
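The comparison of the two growth rates in the positive-MC case can be checked with illustrative numbers (every value below is an assumption chosen for the example). Under the stated conditions the expressions specialize to r_Qs = g - Eρ(1 - k/P) and r_Qm = g - Eρ(1 - k/MR), and k/P < k/MR makes the monopolist's rate the larger, less negative one:

```python
# Growth rates of extraction under a positive, output- and time-invariant
# marginal cost k, with rE = 0.  All numbers are illustrative assumptions.
g, rho, E = 0.0, 0.05, 1.0      # stationary demand, discount rate, elasticity
P, MR, k = 10.0, 6.0, 2.0       # price, marginal revenue (MR < P), marginal cost

r_Qs = g - E * rho * (1 - k / P)     # social optimum
r_Qm = g - E * rho * (1 - k / MR)    # monopoly

print(round(r_Qs, 4))   # -0.04
print(round(r_Qm, 4))   # -0.0333
print(r_Qm > r_Qs)      # True: the monopolist's Q(t) declines more slowly,
                        # i.e., the monopolist is the more conservationistic
```

With both paths constrained to exhaust the same stock S₀, the flatter monopoly path necessarily starts lower and ends higher, which is exactly the pattern in Fig. 6.3.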

Taking the r_Q in (6.67) to be r_Qm, substituting (6.67) into (6.63), setting r_MR = ρ (the MC = 0 version of the monopoly condition), and collecting terms and solving for r_Qm, we finally obtain

(6.68)    r_Qm = (g - Eρ) / [1 - E'(Q)Q/(E - 1)]

which should be contrasted against r_Qs = g - Eρ [from (6.60) with MC = r_MC = 0]. The two rates differ by the denominator in (6.68), so we must study its algebraic sign. It can be shown⁹ that the inequality dMR/dQ < 0 (i.e., R"(Q) < 0) can be transformed to the form E'(Q)Q/(E - 1) < 1. In the latter inequality, the left-hand side is positive by virtue of our assumption that E'(Q) > 0; moreover, (6.51) requires MR to be positive, and the positivity of MR in turn dictates that E > 1. Hence E'(Q)Q/(E - 1) is a positive fraction, and so is the denominator in (6.68). Suppose that g - Eρ < 0. Dividing this negative quantity by a positive fraction makes it more negative, so we will end up with r_Qm < r_Qs (e.g., -0.08 as against -0.04). Therefore, relative to the social optimizer, the monopolist will in such a case turn out to be an anticonservationist. But a declining marginal revenue is a commonly acknowledged characteristic of a monopoly situation. Thus, in view of Fig. 6.3, we can see from the present analysis that monopoly is by no means synonymous with conservationism.

⁹Differentiate (6.62) with respect to Q, let the derivative be negative, multiply both sides of the resulting inequality by E²Q/P, and make use of the fact that P'(Q)Q/P = -1/E.

EXERCISE 6.3

1  In (6.47"), we interpret λ to mean the initial value of P(Q) - C'(Q). Find an alternative economic interpretation of λ from (6.45).
2  If the maximization problem (6.45) is subject instead to the weak inequality ∫₀^∞ Q dt ≤ S₀, what modification(s) must be made to the result of the analysis?
3  If the transversality conditions (5.5) and (5.7) are applied to the Lagrangian integrands 𝓕 for the social optimum and for monopoly, respectively, what terminal conditions will emerge? Are these conditions economically reasonable?
4  Compare r_Qs and r_Qm, assuming that (1) MC = r_MC = 0, (2) the elasticity of demand is output-invariant, and (3) the demand becomes more elastic over time because of the emergence of substitutes. Write out the appropriate expressions for r_Qs and r_Qm, find the difference r_Qm - r_Qs, and study its algebraic sign.
5  Assume that (1) the demand elasticity is output-invariant and time-invariant, and (2) the MC is positive and output-invariant at any given time, but increases over time. Indicate under what conditions the monopolist will be more conservationistic and less conservationistic than the social optimizer.
6  Let q = the cumulative extraction of an exhaustible resource: q(t) = ∫₀ᵗ Q(τ) dτ. Verify that the current extraction rate Q is related to q by the equation Q(t) = q'(t).
   (a) Assume that the cost of extraction C is a function of the cumulative extraction q as well as the current rate of extraction q'. How must the N(Q) expression in (6.44) be modified?
   (b) Reformulate the social optimizer's problem (6.45) in terms of the variables q and q'(t).
   (c) Write the Lagrangian integrand, expressing it in terms of the variables q and q'(t), and find the Euler-Lagrange equation.
7  Using the definition of q in the preceding problem, and retaining the assumption that C = C(q, q'), reformulate the competitive firm's problem (6.48) in terms of the variables q and q'(t). Show that the Euler-Lagrange equation for the competitive firm is consistent with that of the social optimizer.



The calculus of variations, the classical method for tackling problems of dynamic optimization, requires for its applicability, like the ordinary calculus, the differentiability of the functions that enter into the problem. More importantly, only interior solutions can be handled. A more modern development that can deal with nonclassical features, such as corner solutions, is found in optimal control theory. As its name implies, the optimal-control formulation of a dynamic optimization problem focuses upon one or more control variables that serve as the instruments of optimization. Unlike the calculus of variations, therefore, where our goal is to find the optimal time path for a state variable y, optimal control theory has as its foremost aim the determination of the optimal time path for a control variable, u. Of course, once the optimal control path, u*(t), has been found, we can also find the optimal state path, y*(t), that corresponds to it. In fact, the optimal u*(t) and y*(t) paths are usually found in the same process. But the presence of a control variable at center stage does alter the basic orientation of the dynamic optimization problem. A couple of questions immediately suggest themselves. What makes a variable a "control" variable? And how does it fit into a dynamic optimization problem? To answer these questions, let us consider a simple economic illustration. Suppose there is in an economy a finite stock of an exhaustible






resource S (such as coal or oil), as discussed in the Hotelling model, with S(0) = S₀. As this resource is being extracted (and used up), the resource stock will be reduced according to the relation

    dS/dt = -E(t)







where E(t) denotes the rate of extraction of the resource at time t. The E(t) variable qualifies as a control variable because it possesses the following two properties. First, it is something that is subject to our discretionary choice. Second, our choice of E(t) impinges upon the variable S(t), which indicates the state of the resource at every point of time. Consequently, the E(t) variable is like a steering mechanism which we can maneuver so as to "drive" the state variable S(t) to various positions at any time t via the differential equation dS/dt = -E(t). By the judicious steering of such a control variable, we can therefore aim to optimize some performance criterion as expressed by the objective functional. For the present example, we may postulate that society wants to maximize the total utility derived from using the exhaustible resource over a given time period [0, T]. If the terminal stock is not restricted, the dynamic optimization problem may take the following form:






J t





    Maximize    ∫₀ᵀ U(E)e^(-ρt) dt
    subject to  dS/dt = -E(t)
                S(T) free        (S₀, T given)

In this formulation, it happens that only the control variable E enters into the objective functional. More generally, the objective functional can be expected to depend on the state variable(s) as well as the control variable(s). Similarly, it is fortuitous that in this example the movement of the state variable S depends only on the control variable E. In general, the course of movement of a state variable over time may be affected by the state variable(s) as well as the control variable(s), and indeed even by the t variable itself. With this background, we now proceed to the discussion of the method of optimal control.

To keep the introductory framework simple, we first consider a problem with a single state variable y and a single control variable u. As suggested earlier, the control variable is a policy instrument that enables us to influence the state variable. Thus any chosen control path u(t) will imply an associated state path y(t). Our task is to choose the optimal admissible control path u*(t) which, along with the associated optimal admissible state path y*(t), will optimize the objective functional over a given time interval [0, T].

Special Features of Optimal Control Problems

It is a noteworthy feature of optimal control theory that a control path does not have to be continuous in order to become admissible; it only needs to be piecewise continuous. This means that it is allowed to contain jump discontinuities, as illustrated in Fig. 7.1a, although we cannot permit any discontinuities that involve an infinite value of u. A good illustration of piecewise-continuous control in daily life is the on-off switch on a computer or lighting fixture. Whenever we turn the switch on (u = 1) and off (u = 0), the control path experiences a jump. The state path y(t), on the other hand, does have to be continuous throughout the time period [0, T]. But, as illustrated in Fig. 7.1b, it is permissible to have a finite number of sharp points, or corners. That is to say, to be admissible, a state path only needs to be piecewise differentiable.¹ Note that each sharp point on the state path occurs at the time the control path makes a jump. The reason for this timing coincidence lies in the procedure by which the solution to the problem is obtained. Once we have

¹Sharp points on a state path can also be accommodated in the calculus of variations via the Weierstrass-Erdmann conditions. We have not discussed the subject in this book, because of the relative infrequency of its application in economics. The interested reader can consult any book on the calculus of variations.






found that the segment of the optimal control path for the time interval [0, t₁) is, say, the curve ab in Fig. 7.1a, we then try to determine the corresponding segment of the optimal state path. This may turn out to be, say, the curve AB in Fig. 7.1b, whose initial point satisfies the given initial condition. For the next time interval, [t₁, t₂), we again determine the optimal state-path segment on the basis of the pre-found optimal control, curve cd, but this time we must take point B as the "initial point" of the optimal state-path segment. Therefore, point B serves at once as the terminal point of the first segment, and the initial point of the second segment, of the optimal state path. For this reason, there can be no discontinuity at point B, although it may very well emerge as a sharp point. Like admissible control paths, admissible state paths must have a finite y value for every t in the time interval [0, T].

Another feature of importance is that optimal control theory is capable of directly handling a constraint on the control variable u, such as the restriction u(t) ∈ Ω for all t ∈ [0, T], where Ω denotes some bounded control set. The control set can in fact be a closed, convex set, such as u(t) ∈ [0, 1]. The fact that Ω can be a closed set means that corner solutions (boundary solutions) can be admitted, which injects an important nonclassical feature into the framework. When this feature is combined with the possibility of jump discontinuities on the control path, an interesting phenomenon called a bang-bang solution may result. Assuming the control set to be Ω = [0, 1], if for instance the optimal control path turns out to jump as follows:

    u*(t) = 1    for t ∈ [0, t₁)
    u*(t) = 0    for t ∈ [t₁, t₂)        (t₁ < t₂) (t₂ < T)
    u*(t) = 1    for t ∈ [t₂, T]

then we are "banging" against one boundary of the set Ω, and then against the other, in succession; hence, the name "bang-bang." Finally, we point out that the simplest problem in optimal control theory, unlike in the calculus of variations, has a free terminal state (vertical terminal line) rather than a fixed terminal point. The primary reason for this is as follows: In the development of the fundamental first-order condition known as the maximum principle, we shall invoke the notion of an arbitrary Δu. Any arbitrary Δu must, however, imply an associated Δy. If the problem has a fixed terminal state, we need to pay heed to whether the associated Δy will lead ultimately to the designated terminal state. Hence, the choice of Δu may not be fully and truly arbitrary. If the problem has a free terminal state (vertical terminal line), on the other hand, then we can let the arbitrary Δu lead to wherever it may without having to worry about the final destination of y. And that simplifies the problem.

The Simplest Problem

Based on the preceding discussion, we may state the simplest problem of optimal control as

(7.1)    Maximize    V = ∫₀ᵀ F(t, y, u) dt
         subject to  ẏ = f(t, y, u)
                     y(0) = A        y(T) free        (A, T given)
         and         u(t) ∈ Ω    for all t ∈ [0, T]
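A bang-bang solution can be sketched numerically. When the Hamiltonian is linear in u, the pointwise maximization over Ω = [0, 1] always selects a boundary value, and the control jumps when the coefficient on u changes sign. The switching coefficient σ(t) = 1 - t below is an invented example, not derived from any model in the text:

```python
# Bang-bang control from pointwise maximization of a Hamiltonian that is
# linear in u: H = (terms without u) + sigma(t)*u, with u restricted to [0, 1].
# The switching coefficient sigma(t) = 1 - t is a purely illustrative choice.
def sigma(t):
    return 1.0 - t

def u_star(t):
    # maximizing sigma(t)*u over u in [0, 1] gives a boundary solution
    return 1.0 if sigma(t) > 0 else 0.0

samples = [u_star(t / 10) for t in range(0, 21)]   # t from 0.0 to 2.0
print(samples[:10])    # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(samples[-10:])   # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

The control never takes an interior value: it sits on one boundary of Ω, then jumps to the other at the instant the coefficient changes sign, which is exactly the "banging" behavior described above.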

Here, as in the subsequent discussion, we shall deal exclusively with the maximization problem. This way, the necessary conditions for optimization can be stated with more specificity and less confusion. When a minimization problem is encountered, we can always reformulate it as a maximization problem by simply attaching a minus sign to the objective functional. For example, minimizing ∫₀ᵀ F(t, y, u) dt is equivalent to maximizing ∫₀ᵀ -F(t, y, u) dt.

In (7.1), the objective functional still takes the form of a definite integral, but the integrand function F no longer contains a y' argument as in the calculus of variations. Instead, there is a new argument u. The presence of the control variable u necessitates a linkage between u and y, to tell us how u will specifically affect the course taken by the state variable y. This information is provided by the equation ẏ = f(t, y, u), where the dotted symbol ẏ, denoting the time derivative dy/dt, is an alternative notation to the y' symbol used heretofore.² At the initial time, the first two arguments in the f function must take the given values t = 0 and y(0) = A, so only the third argument is up to us to choose. For some chosen policy at t = 0, say, u₁(0), this equation will yield a specific value for ẏ, say ẏ₁(0), which entails a specific direction the y variable is to move. A different policy, u₂(0), will in general give us a different value, ẏ₂(0), via the f function. And a similar argument applies to other points of time. What this equation does, therefore, is to provide the mechanism whereby our choice of the control u can be translated into a specific pattern of movement of the state variable y. For this reason, this equation is referred to as the equation of motion for the state variable (or the state equation for short).

Normally, the linkage between u and y can be adequately described by a first-order differential equation ẏ = f(t, y, u). However, if it happens that the pattern of change of the state variable y cannot be captured by the first

²Even though the ẏ and y' symbols are interchangeable, we shall exclusively use ẏ in the context of optimal control theory, to make a visual distinction from the calculus-of-variations context.

PART 3: OPTIMAL CONTROL THEORY
CHAPTER 7: OPTIMAL CONTROL: THE MAXIMUM PRINCIPLE

We shall consistently use the lowercase letter f as the function symbol in the equation of motion, and reserve the capital letter F for the integrand function in the objective functional. Both the F and f functions are assumed to be continuous in all their arguments, and to possess continuous first-order partial derivatives with respect to t and y, but not necessarily with respect to u. The rest of problem (7.1) consists of the specifications regarding the boundaries and the control set. As to the control set, the simplest case is for U to be the open set U = (−∞, ∞); if so, the choice of u will in effect be unconstrained, and we can omit the statement u(t) ∈ U from the problem altogether.

A Special Case

As a special case, consider the problem where the choice of u is unconstrained, and where the equation of motion takes the particularly simple form ẏ = u:

(7.2)    Maximize   V = ∫₀ᵀ F(t, y, u) dt
         subject to ẏ = u
                    y(0) = A,   y(T) free   (A, T given)

By substituting the equation of motion into the integrand function, we can eliminate u and rewrite the problem as

(7.2′)   Maximize   V = ∫₀ᵀ F(t, y, ẏ) dt
         subject to y(0) = A,   y(T) free   (A, T given)

This is precisely the problem of the calculus of variations with a vertical terminal line. The fundamental link between the calculus of variations and optimal control theory is thus apparent. While the vertical-terminal-line case is the simplest, other terminal-point specifications can also be accommodated.

The equations of motion encountered in optimal control problems are, however, generally much more complicated than in (7.2). Note also that a variational problem whose integrand contains not only the first derivative ẏ but requires the use of the second derivative ÿ = d²y/dt² will, upon conversion, have a state equation in the form of a second-order differential equation, which we must transform into a pair of first-order differential equations; in the process, an additional state variable must be introduced into the problem. An example of such a situation can be found in Sec. 8.

7.2 THE MAXIMUM PRINCIPLE

The most important result in optimal control theory, a first-order necessary condition, is known as the maximum principle. This term was coined by the Russian mathematician L. S. Pontryagin and his associates.² The statement of the maximum principle involves the concepts of the Hamiltonian function and the costate variable. We must therefore first explain these concepts.

² L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, The Mathematical Theory of Optimal Processes, translated from the Russian by K. N. Trirogoff, Interscience, New York, 1962. This book won the 1962 Lenin Prize for Science and Technology. As mentioned earlier, the same technique was independently discovered by Magnus Hestenes, a mathematician at the University of California, Los Angeles, who later also extended Pontryagin's results.

The Costate Variable and the Hamiltonian Function

Three types of variables were already presented in the problem statement (7.1): t (time), y (state), and u (control). It turns out that in the solution process, yet another type of variable will emerge. It is called the costate variable (or auxiliary variable), to be denoted by λ. Like y and u, the variable λ can take different values at different points of time; thus the symbol λ is really a short version of λ(t). A costate variable is akin to a Lagrange multiplier and, as such, it is in the nature of a valuation variable, measuring the shadow price of an associated state variable.

The vehicle through which the costate variable gains entry into the optimal control problem is the Hamiltonian function, or simply the Hamiltonian, which figures very prominently in the solution process. Denoted by H, the Hamiltonian is defined as

(7.3)    H(t, y, u, λ) ≡ F(t, y, u) + λ(t)f(t, y, u)

Since H consists of the integrand function F plus the product of the costate variable and the function f, it itself should naturally be a function with four arguments: t, y, and u, as well as λ. Note that, in (7.3), we have assigned a unitary coefficient to F, which is in contrast to the yet undetermined λ(t).
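In programming terms, the definition in (7.3) is simply a rule for combining two given functions F and f into one four-argument function. The following minimal Python sketch (an aside, not from the text; the particular F and f shown are hypothetical stand-ins) illustrates the construction:

```python
# Build the Hamiltonian H(t, y, u, lam) = F(t, y, u) + lam * f(t, y, u)
# from a given integrand F and equation-of-motion function f.
def hamiltonian(F, f):
    return lambda t, y, u, lam: F(t, y, u) + lam * f(t, y, u)

# Special case f(t, y, u) = u (the calculus-of-variations link), with a
# hypothetical linear integrand F = 2y - 3u for illustration:
H = hamiltonian(lambda t, y, u: 2 * y - 3 * u, lambda t, y, u: u)
print(H(0.0, 4.0, 2.0, 5.0))  # 2*4 - 3*2 + 5*2 = 12.0
```

The closure returns a function of exactly the four arguments t, y, u, and λ, mirroring the remark above that H "should naturally be a function with four arguments."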

Strictly speaking, the Hamiltonian should have been written as

(7.4)    H = λ₀F(t, y, u) + λ(t)f(t, y, u)

in which the coefficient λ₀ attached to F is, like λ(t), also yet undetermined. The maximum principle stipulates that λ₀ is a nonnegative constant, and that λ₀ and λ(t) cannot vanish simultaneously at any point of time. The eventuality of a zero λ₀, however, occurs only in certain unusual (some say "pathological") situations where the solution of the problem is actually independent of the integrand function F, that is, where the F function does not matter in the solution process;⁵ in such a situation λ₀ should indeed be set equal to zero, so as to drive the F function out of the Hamiltonian. The purist would therefore insist on checking, in every problem, that the assumption λ₀ = 0 leads to a contradiction, so that λ₀ is indeed a positive constant; being positive, it can then be normalized to a unit value, thereby reducing (7.4) to (7.3).⁴ Since most of the problems encountered in economics are those where the F function does matter, the prevalent practice among economists is simply to assume λ₀ > 0 at the outset and use the Hamiltonian (7.3). This is the practice we shall follow.

For the simplest problem, the fact that λ₀ ≠ 0 is due to two stipulations of the maximum principle. First, the multipliers λ₀ and λ(t) cannot vanish simultaneously. Second, the solution to the vertical-terminal-line problem must satisfy the transversality condition λ(T) = 0, to be explained in the ensuing discussion. At t = T, then, λ₀ = 0 would mean that λ₀ and λ(T) vanish simultaneously, violating the first stipulation; λ₀ = 0 would thus lead to a contradiction. Hence λ₀ is a strictly positive constant, which can be normalized to unity.

The Maximum Principle

In the following discussion, we shall first state and discuss the conditions involved, before providing the rationale for the maximum principle. In contrast to the Euler equation, which is a single second-order differential equation in the state variable y, the maximum principle involves two first-order differential equations, one in the state variable y and one in the costate variable λ. In addition, there is the requirement that the Hamiltonian be maximized with respect to the control variable u at every point of time. For the problem in (7.1), and with the Hamiltonian defined in (7.3), the maximum principle conditions are

(7.5)    Maximize H(t, y, u, λ) with respect to u,   for all t ∈ [0, T]
         ẏ = ∂H/∂λ     [equation of motion for y]
         λ̇ = −∂H/∂y    [equation of motion for λ]
         λ(T) = 0      [transversality condition]

The first line means that the Hamiltonian is to be maximized with respect to u alone as the choice variable. In the following discussion, we shall, for simplicity, sometimes use the shorter notation "Max H" to indicate this requirement without explicitly mentioning u. An equivalent way of expressing this condition is

(7.6)    H(t, y, u*, λ) ≥ H(t, y, u, λ)   for all t ∈ [0, T]

where u* is the optimal control, and u is any other control value. The reader will note that it is this requirement of maximizing H with respect to u that gives rise to the name "the maximum principle."

It might appear on first thought that the requirement in (7.6) could have been more succinctly embodied in the first-order condition ∂H/∂u = 0 (properly supported by an appropriate second-order condition). The truth, however, is that Max H is a much broader statement of the requirement. In Fig. 7.1, we have drawn three curves, each indicating a possible plot of the Hamiltonian H against the control variable u at a specific point of time, for specific values of y and λ. The control region is assumed to be the closed interval [a, c]. For curve 1, the maximum of H occurs at u = a, a boundary point of the control region; the condition ∂H/∂u = 0 does not apply, even though the curve is differentiable. If curve 2 is the relevant curve, the maximum of H occurs at u = b, an interior point of the control region, and the equation ∂H/∂u = 0 could indeed serve to identify the optimal control at that point of time. And in the case of curve 3, the control in the region that maximizes H is u = c, another boundary point, and the condition ∂H/∂u = 0 is again inapplicable because nowhere is that derivative equal to zero. In short, while the condition ∂H/∂u = 0 may serve the purpose when the Hamiltonian is differentiable with respect to u and yields an interior solution, the fact that the control region may be a closed set, with possible boundary solutions, necessitates the broader statement Max H. Besides, under the maximum principle the Hamiltonian is not even required to be differentiable with respect to u.

⁴ For specific examples of the checking process, see Akira Takayama, Mathematical Economics, 2d ed., Cambridge University Press, Cambridge, 1985, pp. 617-618, 674-675, and 679-680.
⁵ An example of such a problem can be found in Morton I. Kamien and Nancy L. Schwartz, Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management, 2d ed., Elsevier, New York, 1991, p. 149.
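The point of the Max H requirement, that interior stationary points and boundary points of a closed control region must both be considered, can be illustrated numerically. The sketch below is an aside, not from the text; the two H shapes are hypothetical stand-ins for curves 2 and 3 of Fig. 7.1:

```python
# Maximizing H over a closed control region [a, c]: compare the boundary
# points with any interior stationary point of H.

def argmax_H(H, dH, a, c):
    """Candidates: the two boundary points plus an interior root of dH, if any."""
    candidates = [a, c]
    lo, hi = a, c
    if dH(lo) * dH(hi) < 0:          # sign change -> interior stationary point
        for _ in range(60):          # crude bisection on dH(u) = 0
            mid = (lo + hi) / 2
            if dH(lo) * dH(mid) <= 0:
                hi = mid
            else:
                lo = mid
        candidates.append((lo + hi) / 2)
    return max(candidates, key=H)

a, c = 0.0, 2.0
# like curve 2: strictly concave in u -> interior maximum, dH/du = 0 applies
u2 = argmax_H(lambda u: -(u - 1.0) ** 2, lambda u: -2 * (u - 1.0), a, c)
# like curve 3: H linear in u with positive slope -> boundary maximum at u = c
u3 = argmax_H(lambda u: 3.0 * u, lambda u: 3.0, a, c)
print(u2, u3)  # interior point 1.0, boundary point 2.0
```

For the linear case the derivative never vanishes, so the routine correctly falls back on the boundary candidates, which is exactly why the broader Max H statement is needed.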

Although we have more than one differential equation to deal with in optimal control theory (one for every state variable and every costate variable), each differential equation is of the first order only. Together, the two equations of motion are referred to collectively as the Hamiltonian system, or the canonical system (meaning the "standard" system of differential equations), for the given problem.

Moving on to the other parts of (7.5), we note that the condition ẏ = ∂H/∂λ is nothing but a restatement of the equation of motion for the state variable originally specified in (7.1). To reexpress ẏ as a partial derivative of H with respect to the costate variable λ is solely for the sake of showing the symmetry between this equation of motion and that for the costate variable. Note, however, that in the latter equation of motion, λ̇ is the negative of the partial derivative of H with respect to the state variable y. The last condition in (7.5) is the transversality condition for the free-terminal-state problem, one with a vertical terminal line. As we would expect, such a condition only concerns what should happen at the terminal time T.

Since the control variable never appears in the derivative form, there is no differential equation for u in the Hamiltonian system. From (7.5), however, one can, if desired, derive a differential equation for the control variable; in some models, it may turn out to be more convenient to deal with a dynamic system in the variables (y, u) in place of the canonical system in the variables (y, λ).

The case where the Hamiltonian is linear in u is of special interest. In the calculus of variations, whenever the integrand function is linear in y′, we have F_y′y′ = 0, and the Euler equation may not yield a solution satisfying the given boundary conditions. In optimal control theory, in contrast, this case poses no problem at all. It is an especially simple situation to handle when H plots against u as either a positively sloped or a negatively sloped straight line, since the optimal control is then always to be found at a boundary of the control region; the only task is to determine which boundary. (If H plots against u as a horizontal straight line, then there is no unique optimal control.) More importantly, this case serves to highlight how a thorny situation in the calculus of variations has now become easily manageable in optimal control theory.

EXAMPLE 1 Find the curve with the shortest distance from a given point P to a given straight line L. We have encountered this problem before in the calculus of variations. For simplicity, let point P be (0, A), and assume, without loss of generality, that the line L is a vertical line. (If the given position of line L is not vertical, it can always be made so by an appropriate rotation of the axes.) To reformulate the problem as one of optimal control, let y′ = u, so that the previously used integrand (1 + y′²)^(1/2) can be rewritten as (1 + u²)^(1/2); and, to convert the distance-minimization problem to one of maximization, we must attach a minus sign to the old integrand. Then our problem is to

(7.7)    Maximize   V = ∫₀ᵀ −(1 + u²)^(1/2) dt
         subject to ẏ = u
                    y(0) = A,   y(T) free   (A, T given)

Note that the control variable is not constrained, so the optimal control will be an interior solution.

FIGURE 7.2

Step i We begin by writing the Hamiltonian function

(7.8)    H = −(1 + u²)^(1/2) + λu

Observing that H is differentiable and nonlinear in u, we can apply the condition ∂H/∂u = 0 to get the first-order condition

∂H/∂u = −u(1 + u²)^(−1/2) + λ = 0

This yields the solution

(7.9)    u = λ(1 − λ²)^(−1/2)

To see this, note that the ∂H/∂u = 0 equation can be written as u(1 + u²)^(−1/2) = λ. Squaring both sides, multiplying through by (1 + u²), and collecting terms, we get u²(1 − λ²) = λ². Dividing both sides by the quantity (1 − λ²), which must be nonzero (for otherwise the equation would become 0 = 1, which is impossible), and taking the square root, we finally arrive at (7.9). Further differentiation of ∂H/∂u using the product rule yields ∂²H/∂u² = −(1 + u²)^(−3/2) < 0, so the result in (7.9) does maximize H.

Step ii Since (7.9) expresses u in terms of λ, we must now look for a solution for λ, as needed in (7.9). To do that, we resort to the equation of motion for the costate variable, λ̇ = −∂H/∂y, in (7.5). Since (7.8) shows that H is independent of y, we have

(7.10)   λ̇ = −∂H/∂y = 0   with solution   λ(t) = constant

Conveniently, the transversality condition λ(T) = 0 in (7.5) is sufficient for definitizing the constant: for if λ is a constant, then its value at t = T is also its value for all t. Thus

(7.11)   λ*(t) = 0   for all t ∈ [0, T]

Looking back to (7.9), we can now also conclude that u*(t) = 0 for all t.

Step iii From the equation of motion ẏ = u = 0, we have

(7.12)   y(t) = constant

Moreover, the initial condition y(0) = A enables us to definitize this constant and write

y*(t) = A   for all t ∈ [0, T]

This y* path is a horizontal straight line. Since it meets the vertical terminal line at a right angle, it may be viewed as a path orthogonal to the terminal line.
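As a numerical aside (not part of the text), the conclusion u* = 0 of Example 1 can be checked for the subfamily of constant controls: with u held constant, V = −T(1 + u²)^(1/2), which is largest at u = 0. A minimal sketch:

```python
# For a constant control u, the Example 1 objective collapses to
# V(u) = -T * sqrt(1 + u^2); the maximizing constant control is u = 0,
# matching the horizontal-line solution y*(t) = A found above.
import math

T = 1.0
V = lambda u: -T * math.sqrt(1.0 + u * u)
candidates = [i / 10.0 for i in range(-20, 21)]   # constant controls in [-2, 2]
best = max(candidates, key=V)
print(best)  # 0.0
```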
EXAMPLE 2 Find the optimal control that will

(7.13)   Maximize   V = ∫₀² (2y − 3u) dt
         subject to ẏ = y + u
                    y(0) = 4,   y(2) free
         and        u(t) ∈ [0, 2]

Step i The Hamiltonian of (7.13),

(7.14)   H = 2y − 3u + λ(y + u) = (2 + λ)y + (λ − 3)u

is linear in u, with slope ∂H/∂u = λ − 3. Since this problem is characterized by linearity in u and a closed control set, we can expect boundary solutions to occur, and the usual first-order condition ∂H/∂u = 0 is inapplicable in our search for u*. If at a given point of time we find λ > 3, then H plots against u as an upward-sloping straight line like curve 1 in Fig. 7.4, and to maximize H we have to choose u* = 2. If λ < 3, on the other hand, curve 2 will prevail, and we must choose u* = 0 instead. Both u* = 2 and u* = 0 are, of course, boundary solutions.

Step ii It is our next task to determine λ(t). From the equation of motion for the costate variable,

λ̇ = −∂H/∂y = −2 − λ

Its general solution is⁷

λ(t) = ke⁻ᵗ − 2   (k arbitrary)

Since the arbitrary constant k can be definitized to k = 2e² by using the transversality condition λ(T) = λ(2) = 0, we can write the definite solution

(7.15)   λ*(t) = 2e^(2−t) − 2

Note that λ*(t) is a decreasing function, falling steadily from an initial value λ*(0) = 2e² − 2 ≈ 12.778 to a terminal value λ*(2) = 2 − 2 = 0. Thus λ* exceeds 3 at first, but eventually falls below 3, and it attains the value 3 at only one point of time. That critical point of time, when λ* = 3 and when the optimal control should be switched from u* = 2 to u* = 0, can be found by setting λ*(t) = 3 in (7.15) and solving for t. Denoting that particular t by the Greek letter τ, we have

τ = 2 − ln 2.5 ≈ 1.084

As graphically depicted in Fig. 7.5a, λ* exceeds 3 before τ and falls short of 3 after τ. Thus, the optimal control can be restated more specifically in two phases:

(7.17)   Phase I:    u*[0, τ) = 2
         Phase II:   u*[τ, 2] = 0

This optimal control exemplifies a simple variety of bang-bang.

Step iii Even though the problem only asks for the optimal control path, we can also find the optimal state path. In phase I, the equation of motion for y is ẏ = y + u = y + 2, with initial value y(0) = 4. Its solution is

(7.18)   y*[0, τ] = 2(3eᵗ − 1)

In phase II, the equation of motion is ẏ = y, with general solution

(7.19)   y[τ, 2] = ceᵗ   (c arbitrary)

Note that the constant c cannot be definitized by the initial condition y(0) = 4 given in (7.13), because we are already in phase II beyond t = τ. Nor can it be definitized by any terminal condition, because the terminal state is free.

FIGURE 7.5

⁷ First-order linear differential equations of this type are explained in Sec. 14.1 of Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984.
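The arithmetic of the two-phase solution can be verified numerically. The sketch below is an aside, not part of the text; it checks the transversality condition, the switching time τ = 2 − ln 2.5, and the continuity of the state path at τ (the constant c is chosen precisely to enforce that continuity):

```python
# Numerical check of Example 2: costate lambda*(t) = 2e^(2-t) - 2,
# switching time tau where lambda* = 3, and continuity of the state path.
import math

lam = lambda t: 2 * math.exp(2 - t) - 2   # solves lam' = -2 - lam, lam(2) = 0
tau = 2 - math.log(2.5)                   # lam(tau) = 3  <=>  e^(2-tau) = 2.5

y_I = lambda t: 2 * (3 * math.exp(t) - 1)   # phase I: y' = y + 2, y(0) = 4
c = 2 * (3 - math.exp(-tau))                # chosen so y is continuous at tau
y_II = lambda t: c * math.exp(t)            # phase II: y' = y

print(round(tau, 4))                        # 1.0837
print(abs(lam(2)) < 1e-12)                  # True (transversality)
print(abs(y_I(tau) - y_II(tau)) < 1e-9)     # True (continuity at tau)
```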

However, the reader will recall that the optimal y path is required to be continuous. Thus, the initial value of y* in phase II must be set equal to the value of y*[0, τ] evaluated at τ. Inasmuch as

y*(τ) = 2(3e^τ − 1)   [by (7.18)]
y(τ) = ce^τ           [by (7.19)]

by equating these two expressions and solving for c, we find that c = 2(3 − e^(−τ)) ≈ 5.324, so that the optimal y path in phase II is

(7.19′)  y*[τ, 2] = 2(3 − e^(−τ))eᵗ

The value of y* at the switching time τ is approximately 2(3e^(1.084) − 1) = 15.739. By joining the two paths (7.18) and (7.19′), we obtain the complete y* path for the time interval [0, 2], as shown in Fig. 7.5b. In this particular example, the joined path happens to look like one single exponential curve, but the two segments are in fact parts of two separate exponential functions.

EXERCISE 7.2

1  In Example 2, λ*(t) is a decreasing function, and it attains the value 3 at only one point of time. What would happen if it turned out that λ(t) = 3 for all t?

2  Find the optimal paths of the control, state, and costate variables to
   Maximize   V = ∫₀² (y − u²) dt
   subject to ẏ = u
              y(0) = 0,   y(2) free
   and        u(t) unconstrained
   Make sure that the Hamiltonian is maximized rather than minimized.

3  Find the optimal paths of the control, state, and costate variables to
   Maximize   V = −∫₀¹ (y² + u²) dt
   subject to ẏ = u
              y(1) free
   and        u(t) unconstrained
   [Hint: The two equations of motion should be solved simultaneously. Review the material on simultaneous differential equations in Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, Sec. 18.]

4  Find the optimal paths of the control, state, and costate variables to
   Maximize   V = ∫₀³ … dt
   subject to ẏ = y + u
              y(0) = 5
   and        u(t) ∈ [0, 2]
   Be sure to check that the Hamiltonian is maximized rather than minimized.

7.3 THE RATIONALE OF THE MAXIMUM PRINCIPLE

We shall now explain the rationale underlying the maximum principle. What we plan to do is not to give a detailed proof (the complete proof given by Pontryagin and his associates, in Chap. 2 of their book, runs for as many as 40 pages), but to present a variational view of the problem, so as to make the maximum principle plausible. This approach will also enable us to derive certain transversality conditions in the process of the discussion. It will later be reinforced by a comparison of the conditions of the maximum principle with the Euler equation and the other conditions of the calculus of variations.

A Variational View of the Control Problem

To make things simple, we take the initial point to be fixed, but the terminal point is allowed to vary. Moreover, it is assumed here that the control variable u is unconstrained, so that u* is an interior solution, and the ∂H/∂u = 0 condition can be invoked in place of the condition "Max H." As usual, the Hamiltonian function is assumed to be differentiable with respect to u. The problem is then to

(7.20)   Maximize   V = ∫₀ᵀ F(t, y, u) dt
         subject to ẏ = f(t, y, u)
                    y(0) = y₀   (given)

Step i As the first step in the development of the maximum principle, let us incorporate the equation of motion into the objective functional, and then reexpress the functional in terms of the Hamiltonian. Using the notion of Lagrange multipliers, we can form, for each value of t, the expression λ(t)[f(t, y, u) − ẏ]. If the variable y always obeys the equation of motion, then the quantity [f(t, y, u) − ẏ] will assuredly take a zero value for all t in the interval [0, T]. Although there is an infinite number of values of t in the interval [0, T], summing λ(t)[f(t, y, u) − ẏ] over t in the period [0, T] would still yield a total value of zero:

∫₀ᵀ λ(t)[f(t, y, u) − ẏ] dt = 0

Thus, so long as the equation of motion ẏ = f(t, y, u) is strictly adhered to, we can augment the old objective functional by this integral without affecting the solution; that is, we can work with the new objective functional, denoted 𝒱,

(7.21)   𝒱 = V + ∫₀ᵀ λ(t)[f(t, y, u) − ẏ] dt

and still be confident that 𝒱 will have the same value as V. Written out, the new functional is

(7.22)   𝒱 = ∫₀ᵀ [F(t, y, u) + λ(t)f(t, y, u) − λ(t)ẏ] dt

Since we have defined the Hamiltonian function as H = F + λf, the substitution of the H function into (7.22) can simplify the functional to the form

(7.22′)  𝒱 = ∫₀ᵀ [H(t, y, u, λ) − λ(t)ẏ] dt

It is important to distinguish clearly between the second term in the Hamiltonian, λ(t)f(t, y, u), on the one hand, and the Lagrange-multiplier expression λ(t)[f(t, y, u) − ẏ], on the other. The latter explicitly contains ẏ, whereas the former does not.

When the last integral in (7.22′) is integrated by parts, we find⁸

∫₀ᵀ λẏ dt = λ(T)y_T − λ(0)y₀ − ∫₀ᵀ λ̇y dt

Then, through substitution of this result, the new objective functional can be further rewritten as

(7.23)   𝒱 = ∫₀ᵀ [H(t, y, u, λ) + λ̇(t)y] dt − λ(T)y_T + λ(0)y₀

The 𝒱 expression comprises three additive component terms. Note that while the first term, an integral, spans the entire planning period [0, T], the second term is exclusively concerned with the terminal time T, and the third only with the initial time.

Step ii The value of 𝒱 depends on the time paths chosen for the three variables y, u, and λ, as well as the values chosen for T and y_T. In the present step, we shall focus on λ. The λ variable, being a Lagrange multiplier, differs fundamentally from u and y: as long as the equation of motion ẏ = f(t, y, u) is adhered to at all times, the choice of the λ(t) path will produce no effect on the value of 𝒱. This relieves us from further worries about the effect of λ(t) on 𝒱, and we are free to select the λ path as we see fit.

⁸ The formula for integrating a definite integral by parts was given in (2.15). Because u is now used to denote the control variable, we replace the symbol u in that formula by w: ∫ w dv = wv − ∫ v dw. Let w = λ(t) (implying that dw = λ̇ dt) and v = y (implying that dv = ẏ dt); then ∫₀ᵀ λẏ dt = [λy]₀ᵀ − ∫₀ᵀ yλ̇ dt, which leads to the result in the text.
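The integration-by-parts step behind (7.23) can be sanity-checked numerically. In the sketch below (an aside, not from the text), the paths λ(t) = e⁻ᵗ and y(t) = t² are arbitrary illustrative choices:

```python
# Check numerically that
#   int_0^T lam * y' dt = lam(T)y(T) - lam(0)y(0) - int_0^T lam' * y dt
# for sample paths, using simple midpoint-rule quadrature.
import math

lam  = lambda t: math.exp(-t)      # illustrative costate path
dlam = lambda t: -math.exp(-t)     # its derivative
y    = lambda t: t * t             # illustrative state path
dy   = lambda t: 2 * t             # its derivative

def integral(g, a, b, n=100000):
    """Midpoint-rule quadrature of g on [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

T = 2.0
lhs = integral(lambda t: lam(t) * dy(t), 0.0, T)
rhs = lam(T) * y(T) - lam(0.0) * y(0.0) - integral(lambda t: dlam(t) * y(t), 0.0, T)
print(abs(lhs - rhs) < 1e-6)  # True
```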

Step iii We now turn to the u(t) path and its effect on the y(t) path. If we have a known u*(t) path, and we perturb the u*(t) path with a perturbing curve p(t), we can generate "neighboring" control paths

(7.24)   u(t) = u*(t) + εp(t)

In accordance with the equation of motion ẏ = f(t, y, u), there will then occur, for each ε, a corresponding perturbation in the y*(t) path. The neighboring y paths can be written as

(7.25)   y(t) = y*(t) + εq(t)

Furthermore, since T and y_T are variable, we also have

(7.26)   T = T* + εΔT   and   y_T = y_T* + εΔy_T

implying dT/dε = ΔT and dy_T/dε = Δy_T.

In view of the u and y expressions in (7.24) and (7.25), we can express 𝒱 in terms of ε, so that we can again apply the first-order condition d𝒱/dε = 0. The new version of 𝒱 is

(7.27)   𝒱 = ∫₀^T [H(t, y* + εq, u* + εp, λ) + λ̇(y* + εq)] dt − λ(T)y_T + λ(0)y₀

In the differentiation process, the λ(0)y₀ term in (7.27) drops out, but the λ(T)y_T term does not, since both T and y_T vary with ε.

Step iv We now apply the condition d𝒱/dε = 0. By formula (2.11), the integral term in (7.27) yields the derivative

(7.28)   ∫₀ᵀ [(∂H/∂y + λ̇)q(t) + (∂H/∂u)p(t)] dt + [H + λ̇y]ₜ₌ₜ ΔT

And the derivative of the second term in (7.27) with respect to ε is, from the product rule,

(7.29)   −λ̇(T)y_T ΔT − λ(T)Δy_T

Thus d𝒱/dε is the sum of (7.28) and (7.29). In adding these two expressions, we note that the [λ̇y]ₜ₌ₜ ΔT component of (7.28) cancels against the −λ̇(T)y_T ΔT component of (7.29). When the sum is set equal to zero, the first-order condition therefore emerges (after rearrangement) as

(7.30)   d𝒱/dε = ∫₀ᵀ [(∂H/∂y + λ̇)q(t) + (∂H/∂u)p(t)] dt + [H]ₜ₌ₜ ΔT − λ(T)Δy_T = 0

The three components of this derivative relate to different arbitrary things: the integral contains the arbitrary perturbing curves p(t) and q(t), whereas the other two terms involve the arbitrary ΔT and Δy_T, respectively. Consequently, each of the three must individually be set equal to zero in order to satisfy (7.30).

By setting the integral in (7.30) equal to zero for arbitrary p(t) and q(t), we can deduce two conditions:

λ̇ = −∂H/∂y   and   ∂H/∂u = 0

The first gives us the equation of motion for the costate variable λ (or the costate equation for short). Note that although the λ(t) path was earlier, in Step ii, brushed aside as having no effect on the value of the objective functional, it has now made an impressive comeback: in order for the maximum principle to work, the λ(t) path is not to be arbitrarily chosen after all, but is required to follow the prescribed equation of motion. The second condition represents a weaker version of the "Max H" condition, weaker in the sense that it is predicated on the assumption that H is differentiable with respect to u and there is an interior solution.

There remain the two terminal terms in (7.30). Since the simplest problem has a fixed T, ΔT = 0, and the ΔT term in (7.30) is automatically equal to zero. But y_T is free, so Δy_T is arbitrary; in order to make the −λ(T)Δy_T expression vanish, we must impose the restriction λ(T) = 0. This explains the transversality condition in (7.5): the costate path must end with a terminal value of zero if the problem has a free terminal state.

7.4 ALTERNATIVE TERMINAL CONDITIONS

What will happen to the maximum principle when the terminal condition specifies something other than a vertical terminal line? The general answer is that the first three conditions in (7.5) will still hold, but the transversality condition will assume some alternative form, to be explained in the ensuing discussion.

Fixed Terminal Point

The reason why the problem with a fixed terminal point (with both the terminal state and the terminal time fixed) does not qualify as the "simplest" problem in optimal control theory is that the specification of a fixed terminal point entails a complication in the notion of an "arbitrary" perturbing curve p(t) for the control variable u. If the perturbation of the u*(t) path is supposed to generate, through the equation of motion ẏ = f(t, y, u), a corresponding perturbation in the y*(t) path that has to end at a preset terminal state, then the choice of the perturbing curve p(t) is not truly arbitrary. Fortunately, the validity of the maximum principle is not affected by this compromise in the arbitrariness of p(t); we shall not go into details to demonstrate this point. For our purposes, it suffices to note that with T and y_T both fixed, ΔT = 0 and Δy_T = 0, so the last two terms in (7.30) vanish automatically, and no transversality condition is needed; in its place, the given terminal condition governs.

Horizontal Terminal Line (Fixed-Endpoint Problem)

If the problem has a horizontal terminal line (with a free terminal time but a fixed "endpoint," meaning a fixed terminal state), then Δy_T = 0, but ΔT is arbitrary. From the last two component terms in (7.30), it is easy to see that the transversality condition for this case is

(7.31)   [H]ₜ₌ₜ = 0

The Hamiltonian function must attain a zero value at the optimal terminal time.

Terminal Curve

In case a terminal curve y_T = φ(T) governs the selection of the terminal point, ΔT and Δy_T are not both arbitrary, but are linked to each other by the relation Δy_T = φ′(T)ΔT. Using this to eliminate Δy_T, we can combine the last two terms in (7.30) into a single expression involving ΔT only:

[H]ₜ₌ₜ ΔT − λ(T)φ′(T)ΔT = [H − λφ′]ₜ₌ₜ ΔT

It follows that, for an arbitrary ΔT, the transversality condition should be

(7.32)   [H − λφ′]ₜ₌ₜ = 0

Truncated Vertical Terminal Line

Now consider the problem in which the terminal time T is fixed, but the terminal state is free to vary, subject only to y_T ≥ y_min, where y_min denotes a given minimum permissible level of y. Only two types of outcome are possible in the optimal solution: y_T* > y_min, or y_T* = y_min. In the former outcome, the terminal restriction is automatically satisfied (nonbinding), and the transversality condition for the problem with a regular vertical terminal line would apply:

(7.33)   λ(T) = 0   for y_T* > y_min

In the other outcome, since the terminal restriction is binding, the admissible neighboring y paths consist only of those that have terminal states y_T ≥ y_min, which, in the present context, means y_T* + εΔy_T ≥ y_min. (If we evaluate (7.25) at t = T, we see that Δy_T corresponds to q(T).) Assuming that q(T) > 0 on the perturbing curve q(t), the requirement y_T ≥ y_min would dictate that ε ≥ 0; although this assumption restricts the admissible perturbations, it does not bias the final result of the deductive process here. By the Kuhn-Tucker conditions,¹⁰ the nonnegativity of ε would alter the first-order condition d𝒱/dε = 0 to d𝒱/dε ≤ 0 for our maximization problem, and (7.30) would now yield an inequality transversality condition

(7.34)   λ(T) ≥ 0   for y_T* = y_min

Combining (7.33) and (7.34), and omitting the * symbol, we can finally write a single summary statement of the transversality condition:

(7.35)   λ(T) ≥ 0,   y_T ≥ y_min,   (y_T − y_min)λ(T) = 0

Note that the last part of this statement represents the familiar complementary-slackness condition from the Kuhn-Tucker conditions.

Fortunately, the practical application of (7.35) is not as complicated as the condition may appear. We can always try the λ(T) = 0 condition first, and check whether the resulting y_T* value satisfies the terminal restriction y_T ≥ y_min. If it does, the problem is solved. If not, we then set y_T* = y_min, in order to satisfy the complementary-slackness condition, and treat the problem as one with a given terminal point.

¹⁰ The Kuhn-Tucker conditions are explained in Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, Sec. 21.

Truncated Horizontal Terminal Line

Let the terminal state y_T be fixed, but the terminal time T be allowed to vary, subject to the restriction T* ≤ T_max, where T_max is the maximum permissible value of T (a preset deadline). Then, in the optimal solution, we either have T* < T_max, or T* = T_max. In the former outcome, the terminal restriction turns out to be nonbinding, and the transversality condition for the problem with a regular horizontal terminal line would still hold:

(7.36)   [H]ₜ₌ₜ = 0   for T* < T_max

But if T* = T_max, then by implication all the admissible neighboring y paths must have terminal time T ≤ T_max. As in the similar problem with a truncated vertical terminal line in the calculus of variations, by reasoning analogous to that leading to the result (7.34), it is possible to establish the transversality condition

(7.37)   [H]ₜ₌ₜ ≥ 0   for T* = T_max

By combining (7.36) and (7.37), and omitting the * symbol, we then obtain the following summary statement of the transversality condition:

(7.38)   [H]ₜ₌ₜ ≥ 0,   T ≤ T_max,   (T − T_max)[H]ₜ₌ₜ = 0

EXAMPLE 1

Maximize   V = ∫₀¹ −u² dt
subject to ẏ = y + u
           y(0) = 1,   y(1) = 0

With fixed endpoints, we need no transversality condition in this problem; for the fixed-terminal-point problem, the boundary conditions are linked to the variable y instead of λ.

Step i Since the Hamiltonian function

H = −u² + λ(y + u)

is nonlinear in u, and since u is unconstrained, we can apply the first-order condition

∂H/∂u = −2u + λ = 0

This yields the solution u = λ/2 or, more accurately,

(7.39)   u(t) = ½λ(t)

Since ∂²H/∂u² = −2 is negative, this u(t) solution does maximize rather than minimize H. But because this solution is expressed in terms of λ(t), we must find the latter path before u(t) becomes determinate.

Step ii From the costate equation of motion

λ̇ = −∂H/∂y = −λ

we get the general solution

(7.40)   λ(t) = ke⁻ᵗ   (k arbitrary)

To definitize the arbitrary constant k, we try to resort to the boundary conditions. But, unlike the simplest problem of optimal control, the fixed-terminal-point problem does not allow the application of the boundary conditions on y(0) and y(T) until the final stage of the solution process; since these conditions are linked to the variable y, it now becomes necessary first to look into the solution path for y.

Step iii The equation of motion for y is ẏ = y + u. Using (7.39) and (7.40), we can rewrite this equation as

ẏ = y + ½ke⁻ᵗ

This is a first-order linear differential equation of the type dy/dt + v(t)y = w(t), here with v(t) = −1 and w(t) = ½ke⁻ᵗ. Via a standard formula,¹¹ its solution can be found as follows:

(7.41)   y(t) = ceᵗ − ¼ke⁻ᵗ   (c arbitrary)

¹¹ This solution formula is derived in Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, Sec. 14.
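As a numerical aside (not part of the text), the constants c and k in (7.41) can be pinned down from the boundary conditions y(0) = 1 and y(1) = 0 by solving a 2-by-2 linear system:

```python
# From y(t) = c*e^t - (k/4)*e^(-t):
#   y(0) = c - k/4           = 1
#   y(1) = c*e - (k/4)*(1/e) = 0
# Write a = k/4; the second equation gives c = a/e^2, and substituting into
# the first gives a*(1/e^2 - 1) = 1.
import math

e = math.e
a = 1.0 / (1.0 / e**2 - 1.0)   # a = k/4 = e^2 / (1 - e^2)
c = a / e**2                   # c = 1 / (1 - e^2)
k = 4.0 * a

y = lambda t: c * math.exp(t) - (k / 4.0) * math.exp(-t)
print(abs(y(0.0) - 1.0) < 1e-9, abs(y(1.0)) < 1e-9)  # True True
```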

Step iv Now the boundary conditions y(0) = 1 and y(1) = 0 are directly applicable, and they give us the following definite values for c and k:

c = 1/(1 − e²)   and   k = 4e²/(1 − e²)

Substituting these into (7.39), (7.40), and (7.41), we have the following definite solutions for the three optimal paths:

u*(t) = [2e²/(1 − e²)]e⁻ᵗ
λ*(t) = [4e²/(1 − e²)]e⁻ᵗ
y*(t) = (eᵗ − e^(2−t))/(1 − e²)

EXAMPLE 2 Let us reconsider the preceding example, with the terminal condition y(1) = 0 replaced by the restriction

y(1) ≥ 3

The problem is then one with a truncated vertical terminal line, and the appropriate transversality condition is (7.35). We shall first try to solve the problem as if its vertical terminal line were not truncated, where the transversality condition λ(T) = 0 may enable us to get a definite solution of the costate path λ*(t) at an early stage of the game. If the resulting y*(1) turns out to be ≥ 3, then the problem is solved; otherwise, we shall redo the problem by setting y(1) = 3.

Step i The Hamiltonian remains the same as in Example 1, and so does the solution for the control variable:

(7.42)   u(t) = ½λ(t)   [from (7.39)]

Step ii Although the general solution for λ is also the same as before,

(7.43)   λ(t) = ke⁻ᵗ   [from (7.40)]

we now can use the transversality condition λ(T) = 0, or λ(1) = 0, to definitize the arbitrary constant. The result is k = 0, so that λ*(t) = 0 for all t; and it then follows from (7.42) that u*(t) = 0 as well.

Step iii From the equation of motion for y, we find

(7.44)   ẏ = y + u = y   [by (7.42) and k = 0]

The general solution of this differential equation is y(t) = ceᵗ, where the constant c can be definitized to c = 1 by the initial condition y(0) = 1. Thus the optimal state path is

(7.45)   y*(t) = eᵗ

Step iv It remains to check (7.45) against the terminal restriction. At the fixed terminal time T = 1, (7.45) gives us y*(1) = e ≈ 2.718. This, unfortunately, violates the terminal restriction y(1) ≥ 3. So, in order to satisfy the transversality condition (7.35), we now have to set y(1) = 3 and resolve the problem as one with a fixed terminal point. (Note that had the terminal restriction been y(1) ≥ 2 instead, (7.45) would have been an acceptable solution.)

Because this H function is linear in u, the ∂H/∂u = 0 condition is inapplicable. With the control variable confined to the closed interval [-1, 1], the optimal value of u at any point of time is expected to be a boundary value. Specifically, if λ > 0 (H is an increasing function of u), then u* = 1 (upper bound); but if λ < 0, then u* = -1. As a third possibility, if λ = 0 at some value of t, then the Hamiltonian will plot as a horizontal line against u, and u* will become indeterminate at that point of time. This relationship between u* and λ can be succinctly captured in the so-called signum function, denoted by the symbol sgn and defined as follows:

sgn x = 1 (if x > 0);  sgn x = -1 (if x < 0);  sgn x is indeterminate (if x = 0)

Thus we may write u*(t) = sgn λ(t).¹²

Step ii  The equation of motion for the costate variable is λ̇ = -∂H/∂y = -λ, with general solution

(7.49)  λ(t) = ke^(-t)  (k arbitrary)

The λ path, being exponential, can take only a single algebraic sign (the sign of the constant k), barring the eventuality of k = 0, which would make λ(t) = 0 for all t (an eventuality that in fact does not occur here). Consequently, u* must be determinate and adhere to a single sign, nay, a single constant value, in accordance with the signum function: u* is either 1 or -1 throughout. It turns out that the clue to the sign of k lies in the transversality condition [H]ₜ₌ᴛ = 0. Applied to the present problem, that condition reads

-1 + λ(T)[y(T) + u*] = 0

Recall that the terminal state value is stipulated to be y(T) = 11. Since u* is either 1 or -1, the quantity (11 + u*) must be positive. Therefore, in order to satisfy the transversality condition, λ(T) must be positive, and k must be positive as well, since e^(-T) is positive. It then follows that λ(t) > 0 for all t and, by the signum function, u* = 1 for all t.

Step iii  With u* = 1 for all t, we can express the equation of motion of the state variable as ẏ = y + 1. This fits the format of the first-order linear differential equation with a constant coefficient and a constant term, dy/dt + ay = b, here with a = -1 and b = 1.¹¹ Its definite solution, with the initial condition y(0) = 5, gives us the optimal y path:

(7.51)  y*(t) = 6e^t - 1

This, in conjunction with the terminal condition y(T) = 11, tells us that 11 = 6e^T - 1, or e^T = 2. Hence the optimal terminal time is T* = ln 2.

Step iv  Having obtained the optimal control and state paths u*(t) and y*(t), we next look for λ*(t). For this purpose, we first return to the transversality condition [H]ₜ₌ᴛ = 0 to fix the value of the constant k. In view of (7.51), with u* = 1, the condition reduces to -1 + 12λ(T) = 0, so that λ(T) = 1/12. Since λ(T) = ke^(-T) and e^(-T) = ½, we have k = 1/6. Substituting this result in (7.49) then yields the optimal costate path

λ*(t) = (1/6)e^(-t)

In this result, k = 1/6 is indeed positive, as required.

Step v  The three optimal paths u*(t) = 1, y*(t) = 6e^t - 1, and λ*(t) = (1/6)e^(-t), together with T* = ln 2, portray the complete solution to the present problem. The optimal paths for the various variables are easily graphed. Note that, even though the linearity of the Hamiltonian in the control variable u results in a boundary solution in the present example, it produces no bang-bang phenomenon, since u* never switches between the two boundary values.

¹¹The solution formula is derived in Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, Sec. 14.1.
¹²Note that if y is a signum function of x, then y (if determinate) can only take one of two values, either -1 or 1, and the value of y depends on the sign (not magnitude) of x.
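As a numerical check of Example 3 (an illustrative sketch in Python, not part of the original solution procedure), we can integrate the state equation under u* = 1 and confirm both the closed-form path y*(t) = 6e^t - 1 and the optimal terminal time T* = ln 2; a smaller admissible control is also shown to fall short of the target by that time:

```python
import math

def y_star(t):
    # Closed-form optimal state path y*(t) = 6e^t - 1 (u* = 1, y(0) = 5)
    return 6.0 * math.exp(t) - 1.0

def final_state(u, y0, T, n=20000):
    # Fixed-step RK4 integration of dy/dt = y + u on [0, T]
    h = T / n
    y = y0
    for _ in range(n):
        k1 = y + u
        k2 = (y + 0.5 * h * k1) + u
        k3 = (y + 0.5 * h * k2) + u
        k4 = (y + h * k3) + u
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

T_star = math.log(2)                                # e^T = 2  =>  T* = ln 2
assert abs(y_star(T_star) - 11.0) < 1e-9            # 6*2 - 1 = 11, the target
assert abs(final_state(1.0, 5.0, T_star) - 11.0) < 1e-6
# Any admissible control below the bound u = 1 falls short of y = 11 at T*:
assert final_state(0.5, 5.0, T_star) < 11.0
```

The last assertion illustrates why the boundary value u* = 1 is time-optimal: a smaller control slows the growth of y and so requires a longer time to reach the target.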

The Constancy of the Hamiltonian in Autonomous Problems

All the examples discussed previously share the common feature that the problems are "autonomous"; that is, the function F in the integrand and the function f in the equation of motion do not contain t as an explicit argument. An important consequence of this feature is that the optimal Hamiltonian, the Hamiltonian evaluated along the optimal paths of y, u, and λ, will have a constant value over time.

To see this, let us first examine the time derivative of the Hamiltonian H(t, y, u, λ) in the general case. Assuming this function to be differentiable with respect to u, we have

dH/dt = ∂H/∂t + (∂H/∂y)(dy/dt) + (∂H/∂u)(du/dt) + (∂H/∂λ)(dλ/dt)

When H is maximized, we have ∂H/∂u = 0 (for an interior solution) or du/dt = 0 (for a boundary solution, where u stays at a constant boundary value). Thus the third term on the right drops out. Moreover, the maximum principle also stipulates that dy/dt = ∂H/∂λ and dλ/dt = -∂H/∂y. So the second and fourth terms on the right exactly cancel out. The net result is that, along the optimal paths,

dH*/dt = ∂H*/∂t

This result holds generally, for both autonomous and nonautonomous problems. In the special case of an autonomous problem, since t is absent from the F and f functions as an explicit argument, the Hamiltonian must not contain the t argument either. Consequently, in such a problem,

dH*/dt = 0  or  H* = constant  [for autonomous problems]

This result is of practical use in an autonomous problem with a horizontal terminal line. The transversality condition [H]ₜ₌ᴛ = 0 is normally expected to hold at the terminal time only. But if the Hamiltonian is a constant in the optimal solution, then it must be zero for all t, and the transversality condition can be applied at any point of time. In Example 3, for instance, the Hamiltonian function is H = -1 + λ(y + u), and this zero value of H* prevails regardless of the value of t, which shows that the transversality condition is indeed satisfied at all t.
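The constancy of H* can be verified numerically with the optimal paths of Example 3, where u* = 1, y*(t) = 6e^t - 1, and λ*(t) = (1/6)e^(-t). The following sketch in Python (a check added for illustration, not part of the original text) confirms that H = -1 + λ(y + u) vanishes at every t, not merely at t = T:

```python
import math

lam = lambda t: math.exp(-t) / 6.0        # optimal costate path
y = lambda t: 6.0 * math.exp(t) - 1.0     # optimal state path
u_star = 1.0                              # optimal control, constant at the bound

# Optimal Hamiltonian H* = -1 + λ*(y* + u*)
H = lambda t: -1.0 + lam(t) * (y(t) + u_star)

for t in (0.0, 0.25, 0.5, math.log(2)):
    assert abs(H(t)) < 1e-10              # H* = 0 at every t, not just at t = T
```

The reason is visible by hand: λ*(t)[y*(t) + 1] = (1/6)e^(-t) · 6e^t = 1 identically, so H* = -1 + 1 = 0 at all t.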
EXERCISE 7.4

1  Find the optimal paths of the control, state, and costate variables to
   Maximize ∫₀ᵀ -(t² + u²) dt
   subject to ẏ = u, y(0) = 4, y(T) = 5, T free (T > 0)

2  Find the optimal paths of the control, state, and costate variables to
   Maximize ∫₀⁴ 3y dt
   subject to ẏ = y + u, y(0) = 5, y(4) = 300, and 0 ≤ u(t) ≤ 2

3  We wish to move from the initial point (0, s) in the ty plane to achieve the terminal state value y(T) = 0 as soon as possible, assuming that dy/dt = 2u and that the control set is the closed interval [-1, 1]. Formulate and solve the problem.

4  Find the optimal control path and the corresponding optimal state path that minimize the distance between the point of origin (0, 0) and a terminal curve y(T) = 10 - T², T > 0. Graph the terminal curve and the y*(t) path.

5  Demonstrate the validity of the transversality condition (7.37) for the problem with a truncated horizontal terminal line.

7.5  THE CALCULUS OF VARIATIONS AND OPTIMAL CONTROL THEORY COMPARED

We have shown earlier in (7.2') that a simple optimal control problem can be translated into an equivalent problem of the calculus of variations. One may wonder whether, in such a problem, the optimality conditions required by the maximum principle are also equivalent to those of the calculus of variations.

The answer is that they indeed are. For problem (7.2), with equation of motion ẏ = u, the Hamiltonian is H = F(t, y, u) + λu, and the maximum principle may list the following conditions:

(7.56)  ∂H/∂u = F_u + λ = 0,  ẏ = ∂H/∂λ = u,  λ̇ = -∂H/∂y = -F_y

When the Hamiltonian is maximized with respect to u, the ∂H/∂u = 0 condition should, of course, be accompanied by the second-order necessary condition ∂²H/∂u² ≤ 0.

The first equation in (7.56) can be rewritten as λ = -F_u. But in view of the second equation therein, u is identical with y', so it can be further rewritten as

(7.57)  λ = -F_{y'}

Differentiation of (7.57) with respect to t yields one expression for λ̇, namely -dF_{y'}/dt. The third equation in (7.56) gives another expression for λ̇, namely -F_y. Equating the two expressions, we end up with the single condition

F_y = dF_{y'}/dt

which is identical with the Euler equation (2.18). Further differentiation of the ∂H/∂u expression in (7.56) yields

∂²H/∂u² = F_{uu} = F_{y'y'} ≤ 0

This, however, is the Legendre necessary condition.

Now let us take a look at the transversality conditions. For a control problem with a vertical terminal line, the transversality condition is λ(T) = 0. By (7.57), we can transform this condition into [F_{y'}]ₜ₌ᴛ = 0. Except for the slight difference in symbology, this is precisely the transversality condition in the calculus of variations presented in (3.10). For the problem with a horizontal terminal line, the optimal-control transversality condition is [H]ₜ₌ᴛ = 0, that is, [F + λu]ₜ₌ᴛ = 0. Using (7.57) again and substituting y' for u, this may be written, equivalently, as [F - y'F_{y'}]ₜ₌ᴛ = 0; this is precisely the transversality condition under the calculus of variations given in (3.11). It can also be shown that the transversality condition for the problem with a terminal curve y_T = φ(T) under optimal control theory can be translated into the corresponding condition under the calculus of variations, and vice versa. The details of the demonstration are, however, left to the reader. Thus the maximum principle is perfectly consistent with the conditions of the calculus of variations.

7.6  THE POLITICAL BUSINESS CYCLE

Applications of the maximum principle to problems of economics mushroomed between 1965 and 1975, and the technique has become fairly common. Its applications range from the more standard areas in macro- and microeconomics all the way to such topics as fishery, city planning, and pollution control. In the present section, we introduce an interesting model of William Nordhaus,¹³ which shows that, in a democracy, attempts by an incumbent political party to prevent its rival party (or parties) from ousting it from office may encourage the pursuit of economic policies that result in a particular time profile for the rate of unemployment and the rate of inflation within each electoral period. Over successive electoral periods, the repetition of that pattern will then manifest itself as a series of business cycles rooted solely in the play of politics.

¹³William D. Nordhaus, "The Political Business Cycle," Review of Economic Studies, April 1975, pp. 169-190.

The Vote Function and the Phillips Tradeoff

The incumbent party, in control of the national government, is obliged in a democracy to pursue policies that appeal to a majority of the voters in order to keep on winning at the polls. In the present model, attention is focused on economic policies only, and in fact on only two economic variables: U (the rate of unemployment) and p (the rate of inflation). Since the ill effects of unemployment and inflation seem to have been the primary economic concerns of the electorate, this choice of focus is certainly reasonable. The reaction of the voters to any realized values of U and p is assumed to be
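The correspondence established above may be collected in a single display (a summary restatement of the text's results, with ẏ = u):

```latex
\begin{aligned}
&\frac{\partial H}{\partial u}=F_u+\lambda=0
 \;\Longrightarrow\; \lambda=-F_{y'},
 \qquad
 \dot{\lambda}=-F_y
 \;\Longrightarrow\; F_y=\frac{d}{dt}F_{y'} \quad\text{(Euler equation)}\\[4pt]
&\frac{\partial^2 H}{\partial u^2}=F_{y'y'}\le 0 \quad\text{(Legendre condition)},
 \qquad
 \lambda(T)=0 \;\Longleftrightarrow\; \bigl[F_{y'}\bigr]_{t=T}=0,
 \qquad
 [H]_{t=T}=0 \;\Longleftrightarrow\; \bigl[F-y'F_{y'}\bigr]_{t=T}=0
\end{aligned}
```

Each optimal-control condition on the left thus reproduces the corresponding calculus-of-variations condition on the right once λ is eliminated via λ = -F_{y'}.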

embodied in an (aggregate) vote function

v = v(U, p)

where v is a measure of the vote-getting power of the incumbent party. The partial derivatives of v with respect to each argument are negative, because high values of U and p are both conducive to vote loss. At any time in the period, the pair of realized values of U and p will determine a specific value of v. This fact is reflected in Fig. 7.6, where, out of the three isovote curves illustrated, the highest one is associated with the lowest v. The notion of the isovote curve underscores the fact that, on the political side, there is a tradeoff between the two variables U and p: if the incumbent party displeases the voters by producing higher inflation, it can hope to recoup the vote loss via a sufficient reduction in the rate of unemployment.

The Optimal Control Problem

Suppose that a party has just won the election at t = 0, and the next election is to be held T years later, at t = T. The incumbent party then has a total of T years in which to impress the voters with its accomplishments (or outward appearances thereof) in order to win their votes. Such values of v for different points of time must all enter into the objective functional of the incumbent party. But the various values may have to be weighted differently, depending on the time of occurrence. If the voters have a short collective memory and are influenced more by the events occurring near election time, then the v values of the later part of the period [0, T] should be assigned heavier weights than those that come earlier. In the present model, the weighting system for the v values pertaining to different points of time has been given the specific form of the exponential function e^(rt), where r > 0 denotes the rate of decay of memory. This function shows that the v values at later points of time are weighted more heavily. Note that, in contrast to the expression e^(-ρt), what we have here is not a discount factor, but its reverse.

Aside from the political tradeoff, the two variables under consideration are also linked to each other in an economic tradeoff via the expectations-augmented Phillips relation

p = φ(U) + aπ

where π denotes the expected rate of inflation. Expectations are assumed to be formed adaptively, according to the differential equation

π̇ = b(p - π)  (b > 0)

We may then formulate the optimal control problem of the incumbent party as follows:

(7.61)  Maximize ∫₀ᵀ v(U, p)e^(rt) dt
  subject to p = φ(U) + aπ
    π̇ = b(p - π)
    π(0) = π₀,  π(T) free  (π₀, T given)

A few comments on (7.61) may be in order. First, all in all, we now have three variables: U, p, and π. But which of these should be considered as state variables and which as control variables? For a variable to qualify as a state variable, it must come with a given equation of motion in the problem. Since the adaptive-expectations equation constitutes an equation of motion for π, we can take π as a state variable. The variable U, on the other hand, does not come with an equation of motion; but since U can affect p via the Phillips relation, and thereby dynamically drive π, we can use it as a control variable. As to the remaining variable, p, its value at any time t will become determinate once the values of the state and control variables are known. We may thus view it as neither a state variable nor a control variable, but just a function of the other two variables. In fact, the variable p can easily be eliminated by substituting the Phillips relation into the vote function and the equation of motion; then the p equation will disappear as a separate constraint. For the time being, however, we have retained the expectations-augmented Phillips relation in the problem statement.

Second, since T (election time) is predetermined but π(T) is free, the incumbent party faces a vertical terminal line.

Third, note that, although the rate of unemployment is perforce nonnegative, no nonnegativity restriction has actually been placed on the control variable U. At the moment, we are not yet equipped to deal with such a constraint. The plan, and this is an oft-used strategy, is to impose no restriction and just let the solution fall out however it may.

Fourth, the use of U as a control variable requires the implicit assumption that the government in power does have the ability to implement any target rate of unemployment it chooses at any point of time.

If U*(t) turns out to have economically acceptable values for all t, then there is no need to worry; if not, we shall have to amend the problem formulation.

As it stands, the problem contains general functions and thus cannot yield a quantitative solution. To solve the problem quantitatively, Nordhaus assumes the following specific function forms:

(7.62)  v(U, p) = -U² - hp  (h > 0)
(7.63)  φ(U) = j - kU  (j, k > 0)

From (7.62), it can be seen that the partial derivatives of v are indeed negative. In (7.63), we find that the Phillips relation φ(U) has been linearized. Substituting (7.63) into the Phillips relation p = φ(U) + aπ, and then substituting the result into (7.62) and the equation of motion, the variable p drops out, and we now have the specific problem:

(7.64)  Maximize ∫₀ᵀ (-U² - hj + hkU - haπ)e^(rt) dt
  subject to π̇ = b[j - kU - (1 - a)π]
    π(0) = π₀,  π(T) free  (π₀, T given)

Maximizing the Hamiltonian

The Hamiltonian is

(7.65)  H = (-U² - hj + hkU - haπ)e^(rt) + λb[j - kU - (1 - a)π]

Maximizing H with respect to the control variable U, we have the equation

∂H/∂U = (-2U + hk)e^(rt) - λbk = 0

This yields, upon simplification,

(7.66)  U = hk/2 - (bk/2)λe^(-rt)

Since ∂²H/∂U² = -2e^(rt) < 0, the control path in (7.66) indeed maximizes the Hamiltonian at every point of time, as the maximum principle requires. The presence of λ in the U(t) solution, however, now necessitates a search for the λ(t) path.

The Optimal Costate Path

The search for the costate path begins with the equation of motion

λ̇ = -∂H/∂π = hae^(rt) + λb(1 - a)

When rewritten into the form

λ̇ - b(1 - a)λ = hae^(rt)

the equation is readily recognized as a first-order linear differential equation with a constant coefficient but a variable term. Employing the standard methods of solution,¹⁴ we can find the complementary function λc and the particular integral λp to be, respectively,

λc = Ae^(b(1-a)t)  (A arbitrary)  and  λp = (ha/B)e^(rt),  where B ≡ r - b + ab

It follows that the general solution for λ is

λ(t) = Ae^(b(1-a)t) + (ha/B)e^(rt)

Note that the two constants A and B are fundamentally different in nature: B is merely a shorthand symbol we have chosen in order to simplify the notation, but A is an arbitrary constant to be definitized. To definitize A, we can make use of the transversality condition for the vertical-terminal-line problem, λ(T) = 0. Letting t = T in the general solution, applying the transversality condition, and solving for A, we find that

A = -(ha/B)e^(BT)

It then follows that the definite solution, the optimal costate path, is

(7.67)  λ*(t) = (ha/B)[e^(rt) - e^(BT)e^(b(1-a)t)]

The Optimal Control Path

Now that we have found λ*(t), all it takes to derive the optimal control path is to substitute (7.67) into (7.66). The result is, upon simplification,

(7.69)  U*(t) = hk/2 + (abhk/2B)[e^(B(T-t)) - 1]

It is this control path that the incumbent party should follow in the interest of its reelection in year T.

¹⁴See Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, on the complementary function and the particular integral of such equations.
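The costate and control paths just derived can be checked numerically. The sketch below (in Python; the parameter values chosen for h, k, a, b, r, and T are hypothetical, used only for illustration) verifies that the closed-form λ*(t) satisfies its equation of motion and the transversality condition λ(T) = 0, and that the implied U*(t) declines toward the terminal value kh/2:

```python
import math

# Hypothetical parameter values, chosen only for illustration.
h, k, a, b, r, T = 2.0, 1.5, 0.7, 0.3, 0.03, 4.0
B = r - b + a * b                      # shorthand B = r - b(1 - a)

def lam(t):
    # Optimal costate: λ*(t) = (ha/B)[e^{rt} - e^{BT} e^{b(1-a)t}]
    return (h * a / B) * (math.exp(r * t)
                          - math.exp(B * T) * math.exp(b * (1 - a) * t))

def U(t):
    # Control from the H-maximizing condition: U = hk/2 - (bk/2) λ e^{-rt}
    return h * k / 2 - (b * k / 2) * lam(t) * math.exp(-r * t)

# Transversality condition on the vertical terminal line: λ*(T) = 0
assert abs(lam(T)) < 1e-12
# λ* satisfies λ' = ha e^{rt} + b(1-a)λ  (central finite-difference check)
eps = 1e-6
for t in (0.5, 1.0, 2.5):
    lhs = (lam(t + eps) - lam(t - eps)) / (2 * eps)
    rhs = h * a * math.exp(r * t) + b * (1 - a) * lam(t)
    assert abs(lhs - rhs) < 1e-4
# Terminal unemployment equals kh/2, the lowest point of the U* path
assert abs(U(T) - k * h / 2) < 1e-10
assert U(0.0) > U(2.0) > U(T)          # U* declines toward the election date
```

With these (arbitrary) values, B = 0.03 - 0.30 + 0.21 = -0.06 < 0, so the computed U* path is also concave, matching the curvature discussion that follows in the text.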

What are the economic implications of this path? First, by differentiating (7.69) with respect to t, we have

dU*/dt = -(abhk/2)e^(B(T-t)) < 0

because k, h, b, and a are all positive, as is the exponential expression. Thus we note that U* is a decreasing function of t. The vote-maximizing economic policy is, accordingly, to set a high unemployment level immediately upon winning the election at t = 0, and then let the rate of unemployment fall steadily throughout the electoral period [0, T].

Second, the optimal levels of unemployment at time 0 and time T can be exactly determined. They are

U*(0) = hk/2 + (abhk/2B)[e^(BT) - 1]  and  U*(T) = kh/2

Note that the terminal unemployment level, kh/2, is a positive quantity. And since U*(T) represents the lowest point on the entire U*(t) path, the U* values at all values of t in [0, T] must uniformly be positive. This means that the strategy of deliberately not imposing any restriction on the variable U does not cause any trouble regarding the sign of U in the present case. However, to be economically meaningful, U*(0) must be less than unity or, more realistically, less than some maximum tolerable unemployment rate Uₘ. Unless the parameter values are such that U*(0) ≤ Uₘ, the model needs to be reformulated by including the constraint U(t) ∈ [0, Uₘ].

Third, the curvature of the U*(t) path does not always have to be concave as in Fig. 7.7. If, for illustration, r = 0.03, b = 0.30, and a = 0.70, then B = r - b + ab = -0.06 < 0, and the U*(t) path will be concave. But a positive value of B will turn the curvature the other way. The typical optimal unemployment path, U*(t), is illustrated in Fig. 7.7, where we also show the repetition of similar U*(t) patterns over successive electoral periods. The repetition of similar U*(t) patterns generates political business cycles. In fact, with parameter changes in different electoral periods, the position as well as the curvature of the U*(t) paths in succeeding electoral periods may very well change; nevertheless, the political business cycles will tend to persist.

The Optimal State Path

The politically inspired cyclical tendency in the control variable U must also spill over to the state variable π, and hence also to the actual rate of inflation p. Since U* falls steadily within each electoral period, the time profile of p* tends to be the opposite of that of U*: the general pattern would be for the optimal rate of inflation to be relatively low at the beginning of each electoral period, but undergo a steady climb thereafter. But we shall not go into the actual derivation of the optimal inflation rate path here.

The reader is reminded that the conclusions of the present model, like those of any other model, are intimately tied to the assumptions adopted. In particular, the specific forms chosen for the vote function in (7.62) and the expectations-augmented Phillips relation (7.63) undoubtedly exert an important influence upon the final solutions. Alternative assumptions, such as changing the linear term -hp in (7.62) to -hp², are likely to modify significantly the U*(t) solution as well as the p*(t) solution. But the problem formulation is also likely to become much more complicated.

EXERCISE 7.6

1  (a) What would happen in the Nordhaus model if the optimal control path were characterized by dU*/dt = 0 for all t? (b) What values of the various parameters would cause dU*/dt to become zero? (c) Interpret economically the parameter values you have indicated in part (b).
2  What parameter values would, aside from causing dU*/dt = 0, also cause U*(t) = 0 for all t? Explain the economic implications and rationale for such an outcome.
3  How does a change in the value of parameter r (the rate of decay of voter memory) affect the slope of the U*(t) path? Discuss the economic implications of your result. [Note: B = r - b + ab.]
4  Eliminate the e^(rt) term in the objective function in (7.64) and write out the new problem. (a) Solve the new problem by carrying out the same steps as those illustrated in the text for the original problem. (b) Check your results by setting r = 0 in the results of the original model.

7.7  ENERGY USE AND ENVIRONMENTAL QUALITY

When an economy is faced with an essential resource that is exhaustible, say, fossil fuel, it certainly behooves its citizenry to be concerned about the question of how the limited supply of the resource is best to be allocated for use over time. But the citizens of the present-day world are also intensely concerned about the quality of the environment in which they live. If the use of the exhaustible fuel generates pollution as a by-product, then what is the optimal time path for energy use? We have discussed some of the issues involved, with the method of the calculus of variations, in an earlier section. Here we shall illustrate how such a question can be tackled by optimal control theory with a model of Bruce A. Forster,¹⁵ which assumes a single energy source producing a nonaccumulating pollutant. While this paper presents three different models, here we shall confine our attention exclusively to the first one. Another model, treating pollution as a stock variable and involving two state variables, will be discussed in a later section.

¹⁵Bruce A. Forster, "Optimal Energy Use in a Polluted Environment," Journal of Environmental Economics and Management, 1980, pp. 321-333.

The Social Utility Function

Let S(t) denote the stock of the fuel and E(t) the rate of extraction of fuel (and of energy use) at any time t. Energy use, E, makes possible the production of goods and services for consumption, C, which creates utility; but it also generates a flow of pollution, P, which creates disutility. In this particular model, pollution is assumed for simplicity to be nonaccumulating; that is, it is a flow that dissipates and does not build up into a stock. This is exemplified by the auto-emission type of pollution.

The social utility function depends on consumption and pollution. Instead of writing a simple utility function U(E) as we did in the introductory section of this chapter, our new utility function should contain two arguments, C and P:

(7.74)  U = U(C, P)  (U_C > 0, U_CC < 0; U_P < 0, U_PP < 0)

The specification of U_C > 0 and U_CC < 0 shows that the marginal utility of consumption is positive but diminishing. In contrast, the specification of U_P < 0 and U_PP < 0 reveals that the marginal utility of pollution is negative and diminishing; in terms of the marginal disutility of P (= -U_P), U_PP < 0 signifies increasing marginal disutility (given a particular increase in P, U_P may decrease from, say, -2 to an even lower value).

Forster specifies the consumption function and the pollution function as follows, with derivatives as indicated:

(7.72)  C = C(E)  (C' > 0, C'' < 0)
(7.73)  P = P(E)  (P' > 0, P'' > 0)

While energy use raises consumption at a decreasing rate, it generates pollution at an increasing rate. Since both C and P in turn depend on E, the social utility hinges, in the final analysis, on energy use alone: positively via consumption and negatively via pollution. This means that C and P can both be substituted out, leaving E as the prime candidate for the control variable.

The Optimal Control Problem

The only other variable in the model, the fuel stock S, is dynamically driven by the control variable E through the relation

(7.71)  Ṡ = -E

so it is clear that S plays the role of a state variable here. If an Energy Board is appointed to plan and chart the optimal time path of the energy-use variable E over a specified time period [0, T], the dynamic optimization problem it must solve may take the form

(7.75)  Maximize ∫₀ᵀ U[C(E), P(E)] dt
  subject to Ṡ = -E
    S(0) = S₀,  S(T) ≥ 0  (S₀, T given)

This particular formulation allows no discount factor in the integrand, a practice in the Ramsey tradition. The control variable E is subject only to the nature-imposed restriction that it be nonnegative. Since the terminal time is fixed, and the problem grants the Energy Board the discretion of selecting the terminal stock S(T) subject to S(T) ≥ 0, a truncated vertical terminal line characterizes the problem. With a single control variable E and a single state variable S, the problem can be solved with relative ease.
our new utility function should contain two arguments. While this paper presents three different models. P(E)] . leaving E as the prime candidate for the control variable." Journal of Envrronmental Economrcs and Management. 321-333.U. will be discussed in Sec. C" < 0) P =P(E) (P' > 0. This is exemplified by the auto-emission type of pollution. say.2 to . TI. This means that C and P can both be substituted out.

we can examine the S ( T ) 2 0 restriction qualitatively.C" + U. we need to look into the time path of A.81) can satisfy the S ( T ) r 0 restriction.79) A(T) r0 S(T) r0 A(T)S(T) = 0 [by (7. like U . With the higher rate of energy use E*. is in effect. C(E). C.Thus the optimal state path can be written as The value of S*(t) at any time clearly hinges on the magnitude of E*. it is easily seen that k represents the initial fuel stock So. the Energy Board would still be within the bounds of its given authority. still a matter to be settled. dependent on E. would patently violate the S ( T ) 2 0 stipulation. What (7. the equation of motion s = .781. on the other hand. For this purpose. to set A(T) = 0 is in effect to set ~ ( t = 0 for all t. its solution is constant over time: (7. 7.MAXIMUM PRINCIPLE THE 203 involves nonlinear differentiable functions U.80) does is. we must find the state path S(t).72). measures the effect of a change in E on U via C. Nonetheless. P). if our solution E* in (7.80). For the problem at hand. (7. E * cannot be assigned a specific numerical value or parametric expression. That is. The maximum principle tells us that the equation of motion for A is (7. When the low rate of energy use E*.P + <0 [by (7.. Thus we may maximize H with respect to the control variable simply by setting its first derivative equal to zero: When solved. To make sure that (7. in principle. the optimal stock S*(t) appears as a gently downward-sloping straight line.77) maximizes rather than minimizes the Hamiltonian. The Optimal Costate and Control Paths To elicit more information about E from (7. S i n e this equation is independent of the variable t . entailing the premature exhaustion of the fuel endowment. which. much as the familiar MC = MR rule requires a firm to balance the cost and revenue effects of production. and P. the second derivative is a2H -= Meanwhile. With constant energy use.81) dE2 uCccf2U.77) does maximize H. 
Before proceeding, it is useful to examine the economic meaning of (7.77). The first term, U_C C'(E), represents the marginal utility of energy use through its contribution to consumption. Similarly, the U_P P'(E) term expresses the marginal disutility of energy use through its pollution effect. What (7.77) does is, in principle, to direct the Energy Board to select an E* value that balances the marginal utility and disutility of energy use against the shadow price λ of the fuel, much as the familiar MC = MR rule requires a firm to balance the cost and revenue effects of production.

The Optimal Costate and Control Paths

To elicit more information about E from (7.77), we need to look into the time path of λ. The maximum principle tells us that the equation of motion for λ is

(7.78)  λ̇ = -∂H/∂S = 0,  implying λ(t) = c  (constant)

Since this equation is independent of the variable t, the solution is constant over time. To definitize the constant c, we can resort to the transversality condition. For the problem at hand, with a truncated vertical terminal line, the condition takes the form

(7.79)  λ(T) ≥ 0,  S(T) ≥ 0,  λ(T)S(T) = 0  [by (7.35)]

In practical applications of this type of condition, the standard initial step is to set λ(T) = 0 (as if the terminal line is not truncated) to see whether the solution will work. Since λ(t) is a constant by (7.78), to set λ(T) = 0 is in effect to set λ(t) = 0 for all t. With λ(t) = 0, (7.77) reduces to an equation in the single variable E:

(7.80)  U_C C'(E) = -U_P P'(E)

that is, an equation directing the Energy Board to equate the marginal utility and the marginal disutility of energy use. Thus

(7.81)  E*(t) = E*  (a specific constant)  [if λ*(t) = 0]

Whether this solution is acceptable from the standpoint of the S(T) ≥ 0 restriction is, however, still a matter to be settled.

The Optimal State Path

It remains to check whether the E* solution in (7.81) can satisfy the S(T) ≥ 0 restriction. For this purpose, we must find the state path S(t). With constant energy use, the equation of motion Ṡ = -E can be readily integrated to yield

S(t) = -Et + k  (k arbitrary)

By setting t = 0 in this result, it is easily seen that k represents the initial fuel stock S₀. Thus the optimal state path can be written as

(7.82)  S*(t) = S₀ - E*t

With constant energy use, the optimal stock S*(t) appears as a gently downward-sloping straight line, and the value of S*(t) at any time clearly hinges on the magnitude of E*. Consider the three illustrative values of E* in Fig. 7.8, where E*₁ < E*₂ < E*₃. When the low rate of energy use E*₁ is in effect, S*(T) is positive, and the λ(T) = 0 solution works. With the higher rate of energy use E*₃, on the other hand, the fuel stock is brought down to zero before time T; such a path, entailing the premature exhaustion of the fuel endowment, would patently violate the S(T) ≥ 0 stipulation.
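Because U, C, and P are left general in the text, E* has no closed form there. As an illustration only, the sketch below (in Python; the functional forms C(E) = √E, P(E) = E², and U = ln C - P² are hypothetical choices that merely satisfy the sign specifications in (7.72) through (7.74), and are not Forster's) solves the marginal-utility-equals-marginal-disutility condition (7.80) numerically and then applies the S(T) ≥ 0 check, switching to E* = S₀/T when the restriction binds:

```python
import math

# Hypothetical forms: C(E) = sqrt(E) (C' > 0, C'' < 0), P(E) = E^2 (P' > 0,
# P'' > 0), U(C, P) = ln C - P^2 (U_C > 0, U_CC < 0, U_P < 0, U_PP < 0).
def marginal_net_utility(E):
    # U_C C'(E) + U_P P'(E) with λ = 0; reduces here to 1/(2E) - 4E^3
    C, P = math.sqrt(E), E ** 2
    dC, dP = 0.5 / math.sqrt(E), 2.0 * E
    U_C, U_P = 1.0 / C, -2.0 * P
    return U_C * dC + U_P * dP

def solve_E_star(lo=1e-3, hi=5.0, iters=100):
    # Bisection on the "marginal utility = marginal disutility" condition (7.80)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if marginal_net_utility(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E_star = solve_E_star()
assert abs(8 * E_star ** 4 - 1) < 1e-8        # analytic root here: 8E^4 = 1

def optimal_E(S0, T):
    # If constant use at E* would exhaust the stock before T, the S(T) >= 0
    # restriction binds and the Board must settle for E = S0/T instead.
    return E_star if S0 - E_star * T >= 0 else S0 / T

assert optimal_E(10.0, 8.0) == E_star             # ample stock: nonbinding
assert abs(optimal_E(2.0, 8.0) - 0.25) < 1e-12    # binding case: E = S0/T
```

The two final assertions correspond, respectively, to the E*₁-type and E*₂-type outcomes illustrated in Fig. 7.8.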

If the E* value satisfying (7.80) turns out to be like E*₁, which results in S*(T) > 0, then the S(T) ≥ 0 restriction is nonbinding and the problem is solved. But if it is like E*₃, which fails to satisfy the S(T) ≥ 0 restriction, then we must instead set S(T) = 0 and solve the problem as one with a given terminal point. In that event, the E* value can be directly found from (7.82) by setting t = T and S(T) = 0:

(7.83)  E* = S₀/T  [if (7.81) violates S*(T) ≥ 0]

This new E* can be illustrated by E*₂ in Fig. 7.8: with constant energy use at the rate S₀/T, the fuel stock is brought down to exactly zero at time T.

It is a notable feature of this model that E*, the optimal rate of energy use, is constant over time. This constancy of E* prevails whether the terminal-stock restriction, S(T) ≥ 0, is binding [as in (7.83)] or nonbinding [as in (7.81)]. What assumption in the model is responsible for this particular result? The answer lies in the absence of a discount factor. If a discount factor is introduced [see Prob. 3 in Exercise 7.7], the E* path will instead turn out to be decreasing over time, provided that λ*(t) > 0; in the other case, where λ*(t) = 0, E* will still be constant.

EXERCISE 7.7

1  Suppose that the solution of (7.80) turns out to be E*₃, which results in S(T) < 0, and that the Energy Board is consequently forced to select the lower rate of energy use E*₂. (a) Does E*₃ satisfy the "marginal utility = marginal disutility" rule? (b) Does E*₂ satisfy that rule? If not, is E*₂ characterized by "marginal utility < marginal disutility" or "marginal utility > marginal disutility"? Explain. (c) How should Fig. 7.8 be modified to show that E*₂, rather than E*₃, is the optimal rate of energy use?
2  Let the terminal condition in the Forster model be changed from S(T) ≥ 0 to S(T) ≥ Sₘᵢₙ, with Sₘᵢₙ > 0. (a) Write the new Hamiltonian, and find the condition that will maximize the new Hamiltonian. (b) Examine the optimal costate path. Do you still obtain a constant λ path as in (7.78)? (c) If the transversality condition λ(T) = 0 applies, can the H-maximizing condition in part (a) be simplified to (7.80)? What can you conclude about E* for this case, depending on whether the resulting energy use leaves S(T) > Sₘᵢₙ, S(T) = Sₘᵢₙ, or S(T) < Sₘᵢₙ? (d) If the transversality condition is λ(T) > 0 and S(T) = Sₘᵢₙ instead, what will become of the H-maximizing condition in part (a)?
3  Suppose that the Energy Board decides to incorporate a discount factor e^(-ρt) into the objective functional. (a) Write the new Hamiltonian, and find the condition that will maximize it. (b) Find the derivative dE/dt and deduce the time profile of the E*(t) path for this case.

CHAPTER 8  MORE ON OPTIMAL CONTROL

8.1  AN ECONOMIC INTERPRETATION OF THE MAXIMUM PRINCIPLE

For a better grasp of mathematical methods and results, it is always helpful to understand the intuitive economic meanings behind them. In a remarkable article,¹ Robert Dorfman shows that each part of the maximum principle can be assigned an intuitively appealing economic meaning, and that each condition therein can be made plausible from a commonsense point of view. We shall draw heavily from that article in this section, and seek to buttress our mathematical understanding of the maximum principle by an economic interpretation of the various conditions it imposes. After that is accomplished, we shall then move on to some other aspects of optimal control theory, such as the current-value Hamiltonian, concavity sufficient conditions, and problems with multiple state and control variables.

¹Robert Dorfman, "An Economic Interpretation of Optimal Control Theory," American Economic Review, December 1969, pp. 817-831.

Consider a firm that seeks to maximize its profits over the time interval [0, T]. At any moment of time, the profit of the firm depends on the amount of capital it currently holds as well as on the policy u it currently selects; that is, its profit function is π(t, K, u). There is a single state variable, the capital stock K, and a single control variable u, representing some business decision the firm has to make at each moment of time (such as its advertising budget or inventory policy). The policy u, however, also bears upon the rate at which capital K changes over time, via the equation of motion dK/dt = f(t, K, u). The firm starts out at time 0 with capital K_0, but the terminal capital stock is left open. It follows that the optimal control problem is to

(8.1)  Maximize  Π = ∫_0^T π(t, K, u) dt
       subject to  dK/dt = f(t, K, u)
                   K(0) = K_0,  K(T) free

The Hamiltonian and the Profit Prospect

The Hamiltonian of problem (8.1) is

H = π(t, K, u) + λ(t) f(t, K, u)

The first component on the right is simply the profit function at time t, corresponding to policy u; we may think of it as representing the "current profit corresponding to policy u." In the second component, the f(t, K, u) function indicates the rate of change of (physical) capital, but when the f function is multiplied by the shadow price λ(t), it is converted to a monetary value. Hence, the second component represents the "rate of change of capital value corresponding to policy u." Unlike the first term, which relates to the current-profit effect of u, the second term can be viewed as the future-profit effect of u, since the objective of capital accumulation is to pave the way for the production of profits for the firm in the future. These two effects are in general competing in nature: if a particular policy decision u is favorable to the current profit, then it will normally involve a sacrifice in the future profit. In sum, the Hamiltonian represents the overall profit prospect of the various policy decisions, with both the immediate and the future effects taken into account. The maximum principle requires the maximization of the Hamiltonian with respect to u.[2] What this means is that the firm must try at each point of time to maximize the overall profit prospect by the proper choice of u, based on the current capital and the current policy decision taken at that time.

The Costate Variable as a Shadow Price

The maximum principle places conditions on three types of variables: control, state, and costate. The control variable u and the state variable K have already been assigned their economic meanings. What about the costate variable λ? As intimated in an earlier section, λ is in the nature of a Lagrange multiplier and, as such, it should have the connotation of a shadow price: λ*(t), for any t, is the shadow price of capital at that particular point of time. To confirm this, let us adapt (7.22") to the present context, and plug in the optimal paths or values for all variables, to get

Π* = ∫_0^T [π(t, K*, u*) + λ*(t)(dK*/dt)] dt - λ*(T)K*(T) + λ*(0)K_0

Partial differentiation of Π* with respect to the (given) initial capital and the (optimal) terminal capital yields

(8.2)  ∂Π*/∂K_0 = λ*(0)   and   ∂Π*/∂K*(T) = -λ*(T)

Thus λ*(0), the optimally determined initial costate value, is a measure of the sensitivity of the optimal total profit Π* to the given initial capital. If we had one more (infinitesimal) unit of capital initially, Π* would be larger by the amount λ*(0). Hence, the latter expression can be taken as the imputed value or shadow price of a unit of initial capital. In the other partial derivative in (8.2), λ*(T), the terminal value of the optimal costate path, is seen to be the negative of the rate of change of Π* with respect to the optimal terminal capital stock. If we wished to preserve one more unit (use up one less unit) of capital stock at the end of the planning period, then we would have to sacrifice our total profit by the amount λ*(T). Thus, the λ value at time T measures the shadow price of a unit of the terminal capital stock.

[2] Note that if the Hamiltonian is written out more completely as H = λ_0 π(t, K, u) + λ(t)f(t, K, u), as in (7.4), then, in a meaningful economic model, we would expect λ_0 to be nonzero, in which case λ_0 can be normalized to one. Otherwise, it would mean economically that the current-profit effect of u somehow does not matter to the firm; such an outcome is intuitively not appealing. Note also that if the equation of motion is written as dK/dt - f(t, K, u) = 0, we have to write the Lagrange-multiplier expression as λ(t)[f(t, K, u) - dK/dt] rather than λ(t)[dK/dt - f(t, K, u)]; otherwise, λ would become instead the negative of the shadow price of K.
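The shadow-price interpretation in (8.2) can be checked numerically on a small made-up problem (my own specification, not one from the text): with π = K - u² and dK/dt = u, K(T) free, the maximum principle gives λ*(t) = T - t and u*(t) = λ*(t)/2, so λ*(0) = T. A finite-difference estimate of ∂Π*/∂K_0 should reproduce that value.

```python
# Numerical check of (8.2) on an assumed toy problem:
#   maximize  integral_0^T (K - u**2) dt,  dK/dt = u,  K(0) = K0,  K(T) free.
# The maximum principle gives lambda(t) = T - t and u*(t) = lambda(t)/2.

def optimal_profit(K0, T, n=2000):
    """Trapezoid-rule integral of pi = K*(t) - u*(t)**2 along the optimal paths."""
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        u = (T - t) / 2.0                       # u* = lambda/2
        K = K0 + (T * t - t * t / 2.0) / 2.0    # K* from integrating dK/dt = u*
        w = 0.5 if i in (0, n) else 1.0
        total += w * (K - u * u) * h
    return total

T, K0, eps = 3.0, 1.0, 1e-4
marginal = (optimal_profit(K0 + eps, T) - optimal_profit(K0, T)) / eps
print(marginal)   # approximately lambda*(0) = T = 3
```

Because Π* is linear in K_0 in this specification, the finite difference recovers λ*(0) = T essentially exactly, in line with ∂Π*/∂K_0 = λ*(0).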

Next, examine the weak version of the "Max H" condition for an interior solution:

(8.4)  ∂H/∂u = ∂π/∂u + λ(∂f/∂u) = 0

When it is rewritten into the form ∂π/∂u = -λ(∂f/∂u), this condition shows that the optimal choice u* must balance any marginal increase in the current profit made possible by the policy [the left-hand-side expression in (8.4)] against the marginal decrease in the future profit that the policy will induce via the change in the capital stock [the right-hand-side expression in (8.4)]. In other words, it requires the balancing of prospective gains in the current profit against prospective losses in the future profit.

The Equations of Motion

The maximum principle involves two equations of motion. The one for the state variable, dK/dt = ∂H/∂λ = f(t, K, u), merely specifies the way the policy decision of the firm will affect the rate of change of capital; it is, of course, included as part of the problem statement (8.1) itself. The substantive requirement is the equation of motion for the costate variable,

(8.5)  dλ/dt = -∂H/∂K = -[∂π/∂K + λ(∂f/∂K)]

or, after multiplying through by -1,

-dλ/dt = ∂π/∂K + λ(∂f/∂K)

The left-hand side of (8.5), so rewritten, denotes the rate of decrease of the shadow price over time, or the rate of depreciation of the shadow price. On the right-hand side, the first term, ∂π/∂K, represents the marginal contribution of capital to the current profit; the second, λ(∂f/∂K), represents the marginal contribution of capital to the enhancement of capital value. The equation of motion requires the rate of depreciation to be equal in magnitude to the sum of the two terms on the right-hand side of (8.5). In other words, what the maximum principle requires is that the shadow price of capital depreciate at the rate at which capital is contributing to the current and future profits of the firm.

Transversality Conditions

What about transversality conditions? With a free terminal state K(T) at a fixed terminal time T (vertical terminal line), the condition is λ(T) = 0. This means that the shadow price of capital should be driven to zero at the terminal time. Given the rigid planning horizon T, the tacit understanding is that only the profits made within the period [0, T] would matter, and that whatever capital stock still exists at time T, being too late to be put to use, would have no economic value to the firm. The reason for this is that the valuableness of capital to the firm emanates solely from its potential for producing profits. In view of this, it is only natural that the shadow price of capital at time T should be set equal to zero. Hence, we would not expect the firm to engage in capital accumulation in earnest toward the end of the planning period; rather, it should be trying to use up most of its capital by time T. The situation is not unlike that of a pure egotist, a sort of Mr. Scrooge, who places no value on any material possessions that he himself cannot enjoy and must leave behind upon his demise.

For a firm that intends to continue its existence beyond the planning period [0, T], it may be reasonable to stipulate some minimum acceptable level for the terminal capital, say, K_min. In that case, we would have a truncated vertical terminal line instead. The transversality condition now stipulates that

λ(T) ≥ 0   and   [K*(T) - K_min]λ(T) = 0   [from (7.35)]

If K*(T) turns out to exceed K_min, then the restriction placed upon the terminal capital stock proves to be nonbinding. The outcome is the same as if there were no restriction, and the old condition λ(T) = 0 will still apply. But if the terminal shadow price λ(T) is optimally nonzero (positive), then the restriction K*(T) ≥ K_min indeed is binding, in the sense that it is preventing the firm from using up as much of its capital toward the end of the period as it would otherwise do. The amount of terminal capital actually left by the firm will therefore be exactly at the minimum required level.

Finally, consider the case of a horizontal terminal line, in which the firm has a prespecified terminal capital level, say K_T, but is free to choose the time at which to reach that target. The transversality condition for this case is [H]_{t=T} = 0. It simply means that, at the (chosen) terminal time, the sum of the current and future profits pertaining to that point of time must be zero. In particular, the firm should not attain K_T at a time when the sum of immediate and future profits (the value of H) is still positive.

Rather, the firm should attain K_T only after having taken advantage of all possible profit opportunities, at a time when the sum of immediate and future profits has been squeezed down to zero.

8.2  THE CURRENT-VALUE HAMILTONIAN

In economic applications of optimal control theory, the integrand function F often contains a discount factor e^(-ρt). Such an F function can in general be expressed as F(t, y, u) = G(t, y, u)e^(-ρt), so that the optimal control problem is to

(8.7)  Maximize  V = ∫_0^T G(t, y, u)e^(-ρt) dt
       subject to  dy/dt = f(t, y, u)
                   boundary conditions

By the standard definition, the Hamiltonian function takes the form

(8.8)  H = G(t, y, u)e^(-ρt) + λ f(t, y, u)

Since the maximum principle calls for the differentiation of H with respect to u and y, and since the presence of the discount factor adds complexity to the derivatives, it may be desirable to define a new Hamiltonian that is free of the discount factor. Such a Hamiltonian is called the current-value Hamiltonian, where the term "current-value" (as against "present-value") serves to convey the "undiscounted" nature of the new Hamiltonian. The concept of the current-value Hamiltonian necessitates the companion concept of the current-value Lagrange multiplier. Let us therefore first define a new (current-value) Lagrange multiplier m:

(8.9)  m = λe^(ρt)   (implying λ = me^(-ρt))

Then the current-value Hamiltonian, denoted by H_c, can be written as

(8.10)  H_c = He^(ρt) = G(t, y, u) + m f(t, y, u)   [by (8.8) and (8.9)]

As intended, H_c is now free of the discount factor. Note that (8.10) implies H = H_c e^(-ρt).

The Maximum Principle Revised

If we choose to work with H_c instead of H, then all the conditions of the maximum principle should be reexamined to see whether any revision is needed. The first condition in the maximum principle is to maximize H with respect to u at every point of time. Since H = H_c e^(-ρt), and since the exponential term e^(-ρt) is a constant for any given t, the particular u that maximizes H will also maximize H_c. Thus the revised condition is simply

(8.11)  Max_u  H_c

The equation of motion for the state variable originally appears in the canonical system as dy/dt = ∂H/∂λ. Since ∂H/∂λ = f(t, y, u) = ∂H_c/∂m [by (8.8) and (8.10)], this equation should now be revised to

(8.12)  dy/dt = ∂H_c/∂m   [equation of motion for y]

To revise the equation of motion for the costate variable, dλ/dt = -∂H/∂y, we shall first transform each side of this equation into an expression involving the new Lagrange multiplier m, and then equate the two results. For the left-hand side, differentiating λ = me^(-ρt) with respect to t gives

dλ/dt = (dm/dt)e^(-ρt) - ρm e^(-ρt)

Using the definition of H_c in (8.10), we can rewrite the right-hand side as

-∂H/∂y = -(∂H_c/∂y)e^(-ρt)

Upon equating these two results and canceling the common factor e^(-ρt), we arrive at the following revised equation of motion:

(8.13)  dm/dt = -∂H_c/∂y + ρm   [equation of motion for m]

Note that, compared with the original equation of motion for λ, the new one for m involves an extra term, ρm.

It remains to examine the transversality conditions. We shall do this for the vertical terminal line and the horizontal terminal line only.
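The algebra behind (8.13) can be verified symbolically. The sketch below uses sympy and an assumed Ramsey-like specification of my own (G = ln u, f = a·y - u, not taken from the text): it substitutes λ = me^(-ρt) into the original costate equation and confirms that the extra ρm term emerges exactly as in (8.13).

```python
# Symbolic check of the revised costate equation (8.13), under assumed
# functional forms G = ln(u), f = a*y - u (illustrative only).
import sympy as sp

t, rho, a = sp.symbols('t rho a', positive=True)
y = sp.Function('y')(t)
u = sp.Function('u')(t)
m = sp.Function('m')(t)

lam = m * sp.exp(-rho * t)                              # (8.9)
H = sp.log(u) * sp.exp(-rho * t) + lam * (a * y - u)    # present-value Hamiltonian
Hc = sp.log(u) + m * (a * y - u)                        # current-value Hamiltonian

# Original costate equation dlambda/dt = -dH/dy, rewritten in terms of m:
eq = sp.Eq(sp.diff(lam, t), -sp.diff(H, y))
mdot = sp.solve(eq, sp.diff(m, t))[0]

# It matches the revised equation of motion (8.13): dm/dt = -dHc/dy + rho*m
assert sp.simplify(mdot - (-sp.diff(Hc, y) + rho * m)) == 0
print(mdot)
```

The solved dm/dt equals ρm - a·m, i.e. -∂H_c/∂y + ρm for this specification, confirming the extra ρm term.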

For the former (vertical terminal line), the original condition λ(T) = 0 becomes, by (8.9),

(8.14)  m(T)e^(-ρT) = 0,   so that   m(T) = 0

For the latter (horizontal terminal line), similar reasoning shows that the original condition [H]_{t=T} = 0 [from (7.101)] becomes

(8.15)  [H_c]_{t=T} e^(-ρT) = 0   [by (8.10)],   so that   [H_c]_{t=T} = 0

In each case, the condition is essentially unchanged except for the substitution of H_c (or m) for H (or λ).

Autonomous Problems

As a special case of problem (8.7), both the G and f functions may contain no t argument; that is, they may take the forms G = G(y, u) and f = f(y, u). Then the problem becomes

(8.16)  Maximize  V = ∫_0^T G(y, u)e^(-ρt) dt
        subject to  dy/dt = f(y, u)
                    boundary conditions

Since the integrand G(y, u)e^(-ρt) explicitly contains t, this problem is, strictly speaking, still nonautonomous. However, if we decide to work with the current-value Hamiltonian, we can in effect take the discount factor e^(-ρt) out of consideration, for H_c now specializes to the form

H_c = G(y, u) + m f(y, u)

which is free of the t argument. It is for this reason that economists tend to view problem (8.16) as an autonomous problem, in the special sense that the t argument explicitly enters the problem via the discount factor only. All the revised maximum-principle conditions (8.11) through (8.15) apply as readily to (8.16) as to (8.7). But the current-value Hamiltonian of the autonomous problem (8.16) has an additional property not available in problem (8.7): its value, evaluated along the optimal paths of all variables, must be constant over time. That is,

(8.17)  dH_c*/dt = 0   or   H_c* = constant   [autonomous problem]

This result is, of course, nothing but a replay of (7.54), as applicable to problem (8.16).

Another View of the Eisner-Strotz Model

Consider the following autonomous control problem, adapted from the Eisner-Strotz model of Sec. 5.2, which was originally studied as a calculus-of-variations problem:

(8.18)  Maximize  ∫_0^T [π(K) - C(I)]e^(-ρt) dt
        subject to  dK/dt = I
                    boundary conditions

The meanings of the symbols are: π = profit, C = adjustment cost, K = capital stock, and I = net investment. The π and C functions have derivatives π''(K) < 0, C'(I) > 0, and C''(I) > 0 [see Fig. 5.1]. The only state variable is K, and the only control variable is I.

From the Hamiltonian function H = [π(K) - C(I)]e^(-ρt) + λI, we pass to the current-value Hamiltonian

(8.19)  H_c = π(K) - C(I) + mI   [by (8.10)]

The latter version is simpler because it is free of the discount factor e^(-ρt). If we work with H_c, setting ∂H_c/∂I = 0 for an interior maximum yields

(8.20)  -C'(I) + m = 0,   that is,   m = C'(I) > 0

From (8.20), we see that dm/dI = C''(I) > 0.
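The resulting pair of differential equations in K and m can be made concrete with assumed functional forms. The sketch below (my own illustrative specification, not the text's) takes π(K) = bK - cK² and C(I) = I², inverts m = C'(I) to get I = ψ(m), writes down the state and costate equations, and verifies the steady state at which both vanish.

```python
# Assumed forms (illustration only): pi(K) = b*K - c*K**2 (pi'' = -2c < 0)
# and C(I) = I**2 (C'(I) = 2I, C''(I) = 2 > 0).
b, c, rho = 1.0, 0.25, 0.05

def psi(m):                  # inverse of m = C'(I) = 2I, so psi'(m) = 1/2 > 0
    return m / 2.0

def K_dot(K, m):             # state equation: dK/dt = I = psi(m)
    return psi(m)

def m_dot(K, m):             # costate equation: dm/dt = rho*m - pi'(K)
    return rho * m - (b - 2.0 * c * K)

# Steady state: dK/dt = 0 forces m = 0, and dm/dt = 0 then forces pi'(K) = 0.
K_ss = b / (2.0 * c)         # here K_ss = 2.0
m_ss = 0.0
print(K_dot(K_ss, m_ss), m_dot(K_ss, m_ss))   # 0.0 0.0
```

The same two functions could be fed to any ODE routine to trace the phase paths of (K, m) off the steady state.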

Since m is thus a monotonically increasing function of I, we should be able to write an inverse function ψ = C'^(-1):

(8.23)  I = ψ(m)   (ψ' > 0)

Substituting (8.23) into the equation of motion for the state variable, we can express dK/dt as

(8.24)  dK/dt = ψ(m)   for all t ∈ [0, T]

Moreover, from the costate equation of motion (8.13), we have

(8.25)  dm/dt = -∂H_c/∂K + ρm = -π'(K) + ρm

Paired together, (8.24) and (8.25) constitute a system of two simultaneous differential equations in the variables K and m. After solving these for the K*(t) and m*(t) paths, and definitizing the arbitrary constants by the boundary and transversality conditions, we can find the optimal control path I*(t) via (8.23).

EXERCISE 8.2

Find the revised transversality conditions, stated in terms of the current-value Hamiltonian, for the following:
1  A problem with terminal curve y_T = φ(T).
2  A problem with a truncated vertical terminal line.
3  A problem with a truncated horizontal terminal line.

8.3  SUFFICIENT CONDITIONS

The maximum principle furnishes a set of necessary conditions for optimal control: the optimal control path u*(t), along with the associated y*(t) and λ*(t) paths, must satisfy the maximum principle. In general, these conditions are not sufficient. When certain concavity conditions are satisfied, however, the conditions of the maximum principle are sufficient for maximization. We shall present here only two such sufficiency theorems, those of Mangasarian and Arrow.

The Mangasarian Sufficiency Theorem

A basic sufficiency theorem due to O. L. Mangasarian states that, for the optimal control problem[3]

(8.27)  Maximize  V = ∫_0^T F(t, y, u) dt
        subject to  dy/dt = f(t, y, u)
                    y(0) = y_0   (y_0, T given)

the necessary conditions of the maximum principle are also sufficient for the global maximization of V, if (1) both the F and f functions are differentiable and concave in the variables (y, u) jointly, and (2) in the optimal solution it is true that

(8.26)  λ(t) ≥ 0   if f is nonlinear in y or in u

[If f is linear in y and in u, then λ(t) needs no sign restriction.]

As a preliminary to demonstrating the validity of this theorem, let us first remind ourselves that, with the Hamiltonian H = F(t, y, u) + λf(t, y, u), the optimal paths must satisfy the costate equation of motion dλ*/dt = -∂H/∂y, so that

(8.28)  dλ*/dt = -F_y(t, y*, u*) - λ*f_y(t, y*, u*)

and, assuming for the time being that the problem has a vertical terminal line, the initial condition and the transversality condition give us

(8.29)  y*(0) = y_0 (given)   and   λ*(T) = 0

These relations will prove instrumental in the following development. Now let both the F and f functions be concave in (y, u). By a defining property of differentiable concave functions, for any two distinct points (t, y*, u*) and (t, y, u) in the domain,

(8.30)   F(t, y, u) - F(t, y*, u*) ≤ F_y(t, y*, u*)(y - y*) + F_u(t, y*, u*)(u - u*)

(8.30')  f(t, y, u) - f(t, y*, u*) ≤ f_y(t, y*, u*)(y - y*) + f_u(t, y*, u*)(u - u*)

[3] O. L. Mangasarian, "Sufficient Conditions for the Optimal Control of Nonlinear Systems," SIAM Journal on Control, Vol. 4, February 1966, pp. 139-152.
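Mangasarian's concavity condition (1) is mechanical to check when F and f are given in closed form: concavity in (y, u) is equivalent to a negative semidefinite Hessian. The sketch below uses sympy and an assumed example of my own (F = -(y² + u²), f = y + u; not from the text).

```python
# Assumed example (illustration only): F = -(y**2 + u**2), f = y + u.
# f is linear in (y, u), so the sign restriction (8.26) is not needed here.
import sympy as sp

y, u = sp.symbols('y u', real=True)
F = -(y**2 + u**2)
f = y + u

def concave_in_y_u(expr):
    """Negative semidefinite Hessian <=> concave; the eigenvalue test below
    is decisive here because both Hessians are constant matrices."""
    Hess = sp.hessian(expr, (y, u))
    return all(ev <= 0 for ev in Hess.eigenvals())

print(concave_in_y_u(F), concave_in_y_u(f))   # True True
```

For Hessians that depend on (y, u), the eigenvalue signs would have to be verified over the whole domain rather than read off a constant matrix.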

Upon integrating both sides of (8.30) over [0, T], we obtain

(8.31)  V - V* ≤ ∫_0^T [F_y(t, y*, u*)(y - y*) + F_u(t, y*, u*)(u - u*)] dt

By (8.28), F_y(t, y*, u*) = -dλ*/dt - λ*f_y(t, y*, u*); and by the "Max H" condition, F_u(t, y*, u*) = -λ*f_u(t, y*, u*). Substituting these into (8.31) gives

V - V* ≤ ∫_0^T [-(dλ*/dt)(y - y*) - λ*f_y(t, y*, u*)(y - y*) - λ*f_u(t, y*, u*)(u - u*)] dt

The first component of the last integral, relating to the expression -(dλ*/dt)(y - y*), can be integrated by parts to yield[4]

∫_0^T -(dλ*/dt)(y - y*) dt = ∫_0^T λ*(dy/dt - dy*/dt) dt = ∫_0^T λ*[f(t, y, u) - f(t, y*, u*)] dt

This result enables us to write

(8.31')  V - V* ≤ ∫_0^T λ*[f(t, y, u) - f(t, y*, u*) - f_y(t, y*, u*)(y - y*) - f_u(t, y*, u*)(u - u*)] dt ≤ 0

The last inequality follows from the assumption of λ ≥ 0 in (8.26) and the fact that the bracketed expression in the integrand is ≤ 0 by (8.30'). This establishes V* to be a (global) maximum, as claimed in the theorem. Note that if the f function is linear in (y, u), then (8.30') holds as an equality, so that the bracketed expression in the integrand in (8.31') is zero; hence V - V* ≤ 0 regardless of the sign of λ*, and the restriction in (8.26) can be omitted.

Although the proof of the theorem has proceeded on the assumption of a vertical terminal line, the theorem is also valid for other problems with a fixed T (fixed terminal point or truncated vertical terminal line). To see this, recall that the transversality condition λ*(T) = 0 in (8.29) is applied in the integration-by-parts process (see the associated footnote) to make the expression -λ*(T)(y_T - y_T*) vanish. But the latter expression will also vanish if the problem has a fixed terminal state y_T0, for then the said expression becomes -λ*(T)(y_T0 - y_T*), and it must be zero because y_T* has to be equal to y_T0. Similarly, if the problem has a truncated vertical terminal line, then either the transversality condition λ*(T) = 0 is satisfied (if the truncation point is nonbinding), or we must treat the problem as one with a fixed terminal state at the truncation point. In either event, the said expression will vanish, and we can have the desired result V - V* ≤ 0. Thus the Mangasarian theorem is applicable as long as T is fixed.

The above theorem is based on the F and f functions being concave. If those functions are strictly concave, the weak inequalities in (8.30) and (8.30') will become strict inequalities, as will the inequality in (8.31'). The maximum principle will then be sufficient for a unique global maximum of V.

Since H = F + λf is the sum of two concave functions whenever λ ≥ 0, H, too, must be concave in (y, u). It is thus possible to combine Mangasarian's conditions (1) and (2) into a single concavity condition on the Hamiltonian; that is, the theorem can be restated in terms of the concavity of H.

The Arrow Sufficiency Theorem

Another sufficiency theorem, due to Kenneth J. Arrow, uses a weaker condition than Mangasarian's, and can be considered as a generalization of the latter.[5] We shall describe its essence without reproducing the proof.

[4] Let v = -λ* and du = (dy/dt - dy*/dt) dt, so that dv = -(dλ*/dt) dt and u = y - y*. The boundary term -λ*(T)[y(T) - y*(T)] vanishes by the transversality condition λ*(T) = 0 in (8.29), and the term at t = 0 vanishes because y(0) = y*(0) = y_0.

[5] This theorem appears without proof as Proposition 5 in Kenneth J. Arrow, "Applications of Control Theory to Economic Growth," in George B. Dantzig and Arthur F. Veinott, Jr., eds., Mathematics of the Decision Sciences, Part 2, American Mathematical Society, Providence, RI, 1968, p. 92. It appears with proof as Proposition 6 in Kenneth J. Arrow and Mordecai Kurz, Public Investment, the Rate of Return, and Optimal Fiscal Policy, published for Resources for the Future, Inc., by the Johns Hopkins Press, Baltimore, MD, 1970, p. 45. A proof of the theorem is also provided in Morton I. Kamien and Nancy L. Schwartz, "Sufficient Conditions in Optimal Control Theory," Journal of Economic Theory, Vol. 3, 1971, pp. 207-214.

In the application of the Arrow theorem, the crucial step is to derive from the "Max H" condition the optimal control as a function of the t, y, and λ variables:

(8.32)  u* = u*(t, y, λ)

When (8.32) is substituted into the Hamiltonian, we obtain what is referred to as the maximized Hamiltonian function

(8.33)  H^0(t, y, λ) = F(t, y, u*) + λ f(t, y, u*)

Note that the concept of H^0 is different from that of the optimal Hamiltonian H* encountered earlier in (7.53) and (7.54). Since H* denotes the Hamiltonian evaluated along all the optimal paths, y*(t), u*(t), and λ*(t) for every point of time, the y, u, and λ arguments can all be substituted out, leaving H* as a function of t alone: H* = H*(t). In contrast, H^0 is evaluated along u*(t) only, while the other arguments remain; H^0 is thus still a function with three arguments: t, y, and λ.

The Arrow theorem states that, in the optimal control problem (8.27), the conditions of the maximum principle are sufficient for the global maximization of V if the maximized Hamiltonian function H^0 defined in (8.33) is concave in the variable y, for all t in the time interval [0, T], for given λ. Like the Mangasarian theorem, the Arrow theorem has been couched in terms of a problem with a vertical terminal line, but its validity actually extends to problems with other types of terminal conditions, as long as T is fixed. And although the theorem has been stated in terms of the regular Hamiltonian H and its "maximized" version H^0, it can also be rephrased using the current-value Hamiltonian H_c and its maximized version H_c^0, which differ from H and H^0, respectively, only by the discount factor e^(-ρt).

The reason why the Arrow theorem can be considered as a generalization of the Mangasarian theorem, or the latter as a special case of the former, is as follows. If both the F and f functions are concave in (y, u), as stipulated by Mangasarian, and if λ ≥ 0, then H = F + λf is also concave in (y, u) for given λ, and from this it follows that H^0 is concave in y. But H^0 can be concave in y even if F and f are not concave in (y, u), which makes the Arrow condition a weaker requirement. Once the Mangasarian conditions are satisfied, it is no longer necessary to check the Arrow condition.

EXAMPLE 1  In Sec. 7.2, we discussed the problem

Maximize  V = ∫_0^2 (2y - 3u) dt
subject to  dy/dt = y + u   [from (7.7)]
            y(0) = 4,  y(2) free
and         u(t) ∈ [0, 2]

For the Mangasarian theorem, we note that F = 2y - 3u and f = y + u are both linear, and hence concave, in (y, u). Since f is linear in y and in u, the stipulation in (8.26) is irrelevant. The conditions of the Mangasarian theorem are therefore satisfied, so the solution found earlier is a global maximum, and it is not necessary to check the Arrow condition. But if we do wish to apply the Arrow theorem as well, we recall from the earlier solution that the optimal control depends on λ, and that u* takes the boundary values 2 and 0 in the two phases of the solution.

EXAMPLE 2  In Example 2 of Sec. 7.2, we discussed the distance problem

Maximize  V = ∫_0^T -(1 + u^2)^(1/2) dt
subject to  dy/dt = u   [from (7.13)]
            y(0) = A,  y(T) free   (A, T given)

For the Mangasarian theorem, we note that neither the F function nor the f function depends on y. From the F function, F = -(1 + u^2)^(1/2), we obtain

F_u = -u(1 + u^2)^(-1/2)   and   F_uu = -(1 + u^2)^(-3/2) < 0   [by the product rule]

Thus F is concave in u; and since F does not contain y, all the second-order partial derivatives involving y are zero, so that, with reference to the test for sign semidefiniteness in (4.12), the quadratic form involved is negative semidefinite and F is concave in (y, u) as well. As to f = u, since it is linear in u (and does not contain y), it is automatically concave in (y, u); its linearity also makes condition (8.26) irrelevant. The conditions of Mangasarian are satisfied, and the optimal solution found earlier does maximize V (and minimize the distance) globally.

To apply the Arrow theorem instead, we recall that the Max-H condition yields the optimal control

u* = λ(1 - λ^2)^(-1/2)

When this is substituted into the Hamiltonian H = -(1 + u^2)^(1/2) + λu to eliminate u, the resulting H^0 expression contains λ alone, with no y argument:

H^0 = -(1 - λ^2)^(1/2)

Thus H^0 is linear, and hence concave, in y for given λ, and it satisfies the Arrow sufficient condition.
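The claim in Example 2 that substituting u* into H leaves a function of λ alone can be verified numerically. The sketch below (pure Python; the sample λ values are my own) checks at several λ values both that u* = λ(1 - λ²)^(-1/2) maximizes H over a brute-force grid of u values and that the maximized value equals -(1 - λ²)^(1/2).

```python
# Numerical check of H0(lam) = max_u H(u, lam) for Example 2, where
# H = -(1 + u**2)**0.5 + lam*u.
import math

def H(u, lam):
    return -math.sqrt(1.0 + u * u) + lam * u

for lam in (0.0, 0.3, 0.6, 0.9):
    u_star = lam / math.sqrt(1.0 - lam * lam)     # from the Max-H condition
    H0 = H(u_star, lam)
    # brute-force check that u_star really maximizes H at this lam
    best = max(H(-10.0 + 0.001 * i, lam) for i in range(20001))
    assert H0 >= best - 1e-6
    # and that the maximized value is -(1 - lam**2)**(1/2), free of y
    assert abs(H0 - (-math.sqrt(1.0 - lam * lam))) < 1e-12

print("H0(lam) = -(1 - lam^2)^(1/2) confirmed at sample points")
```

Since H^0 carries no y argument at all, its concavity in y (for given λ) is immediate, which is exactly what the Arrow condition asks for.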

With u*_1 = 2 and u*_2 = 0 substituted into the Hamiltonian H = 2y - 3u + λ(y + u), the maximized Hamiltonian in either phase takes the form H^0 = 2y + λy + (λ - 3)u*_i. In either phase, H^0 is linear in y for given λ, so the Arrow condition is satisfied.

EXAMPLE 3  Let us now check whether the maximum principle is sufficient for the control problem adapted from the Eisner-Strotz model (Sec. 8.2):

Maximize  ∫_0^T [π(K) - C(I)]e^(-ρt) dt
subject to  dK/dt = I
            boundary conditions   [from (8.18)]

where π''(K) < 0, C'(I) > 0, and C''(I) > 0. The checking of concavity in the present example is slightly more laborious than in the preceding two examples, because even though the model is simple, it involves general functions.

For the Mangasarian conditions, we need the second-order partial derivatives of F = [π(K) - C(I)]e^(-ρt) with respect to the variables (K, I). These are found to be

F_KK = π''(K)e^(-ρt) < 0    F_KI = F_IK = 0    F_II = -C''(I)e^(-ρt) < 0

Thus, with reference to the test for sign definiteness in (4.8) and (4.9), we find |D_1| < 0 and |D_2| > 0, indicating that F is strictly concave in (K, I). As to the f function, we can immediately see that f = I is linear, and hence concave, in (K, I); its linearity again makes condition (8.26) irrelevant. The Mangasarian conditions are fully met.

To check the Arrow sufficient condition as well, we recall from (8.20) and (8.23) that, by setting ∂H_c/∂I = 0, solving for m in terms of I, and then writing I as an inverse function of m, we can express the optimal control in the form I = ψ(m). Substitution of this optimal control into the current-value Hamiltonian

H_c = π(K) - C(I) + mI   [from (8.19)]

to eliminate I yields the maximized current-value Hamiltonian

H_c^0 = π(K) - C(ψ(m)) + mψ(m)

Since ∂H_c^0/∂K = π'(K) and ∂²H_c^0/∂K² = π''(K) < 0, the maximized current-value Hamiltonian is strictly concave in the state variable K. The Arrow sufficient condition is thus satisfied.

For more elaborate general-function models, checking concavity, especially for the maximized Hamiltonian, can be a tedious process. For more examples of checking the Arrow sufficient condition, see Morton I. Kamien and Nancy L. Schwartz, Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management, 2d ed., Elsevier, New York, 1991, pp. 222-225; and Atle Seierstad and Knut Sydsaeter, Optimal Control Theory with Economic Applications, Elsevier, New York, 1987, pp. 112-129. Frequently, models are constructed with the appropriate concavity/convexity properties incorporated or assumed at the outset, thereby obviating the need to check sufficient conditions.

EXERCISE 8.3

Check the Mangasarian and Arrow sufficient conditions for the following:
1  Example 1 in Sec. 7.4.
2  Example 3 in Sec. 7.4.
3  Problem 1 in Exercise 7.4.
4  Problem 3 in Exercise 7.4.
5  The Nordhaus model in Sec. 7.6.
6  The Forster model in Sec. 7.7.

8.4  PROBLEMS WITH SEVERAL STATE AND CONTROL VARIABLES

For simplicity, we have so far confined our attention to problems with one state variable and one control variable. The generalization to problems with multiple state variables and control variables is, in principle, very straightforward, but the solution procedure naturally becomes more complicated.

PART 3: OPTIMAL CONTROL THEORY — CHAPTER 8: MORE ON OPTIMAL CONTROL

8.4 PROBLEMS WITH SEVERAL STATE AND CONTROL VARIABLES

In the discussion of optimal control theory up to this point, we have confined our attention to problems with one state variable and one control variable. We now extend the framework to the multivariable case.

The Problem

Let there be n state variables, y1,..., yn, and m control variables, u1,..., um. The two numbers n and m are not restricted in relative magnitude: we can have n < m, n = m, or n > m. Each state variable yj must come with an equation of motion,

ẏj = fʲ(t, y1,..., yn, u1,..., um)   (j = 1,..., n)

describing how yj specifically depends on t, on the values of yj and the other state variables at time t, as well as on the value of each control variable ui chosen at time t. Thus there should be in the problem n differential equations of this form. In contrast, the control variables are not accompanied by equations of motion, but each control variable ui may be subject to the restriction of a control region Ui. There should be n initial conditions on the y variables; and, assuming the problem to be one with fixed terminal points, we should also have n given terminal conditions on the state variables.

Using the index j for the state variables and the index i for the control variables, the optimal control problem can be expressed as

(8.34')  Maximize   V = ∫₀ᵀ F(t, y1,..., yn, u1,..., um) dt
         subject to ẏj = fʲ(t, y1,..., yn, u1,..., um)
                    yj(0) = yj0   yj(T) = yjT
         and        ui(t) ∈ Ui   (j = 1,..., n; i = 1,..., m)

This rather lengthy statement of the problem can be shortened somewhat by the use of vector notation. Define a y vector and a u vector

y(t) = [y1(t) ⋯ yn(t)]′   u(t) = [u1(t) ⋯ um(t)]′

Then the integrand function F in (8.34') can be abbreviated to F(t, y, u). If we define two more vectors,

f = [f¹ ⋯ fⁿ]′   ẏ = [ẏ1 ⋯ ẏn]′

then the n equations of motion can be collectively written in a single vector equation:

ẏ = f(t, y, u)

Note that the arguments in the fʲ functions can likewise be simplified to (t, y, u). Obvious extensions of the idea to the boundary conditions enable us to write the vector equations y(0) = y₀ and y(T) = y_T, where y(0), y(T), y₀, and y_T are all n × 1 vectors. By the same token, we can express the control-region restrictions by the vector statement u(t) ∈ U, where u(t) is an m × 1 vector. Thus, finally, the statement of the optimal control problem is simply

(8.34'')  Maximize   V = ∫₀ᵀ F(t, y, u) dt
          subject to ẏ = f(t, y, u)
                     y(0) = y₀   y(T) = y_T
          and        u(t) ∈ U

which, in appearance, is not at all different from the problem with one state variable and one control variable. Note, however, that even in the vector statement of the problem, not all the symbols represent vectors. The symbol V is clearly a scalar, and so are the symbols t and T. Several symbols, on the other hand, do represent vectors, and therefore involve much more than what is visible on the surface.

The Maximum Principle

The extension of the maximum principle to the multivariable case is comparably straightforward. Assuming fixed initial and terminal points for the time being, we form the Hamiltonian by introducing a Lagrange multiplier—a costate variable λj—for every fʲ function in the problem.

The Hamiltonian for the problem is then

(8.35) H ≡ F(t, y, u) + Σⱼ₌₁ⁿ λj fʲ(t, y, u)

If desired, the Hamiltonian can be rewritten in vector notation as

(8.35') H = F(t, y, u) + λ′f(t, y, u)   [λ′ = transpose of λ]

where y and u are the state and control vectors defined earlier, and the last term is a scalar product—the product of the row vector λ′ and the column vector f(t, y, u). This looks strikingly similar to the Hamiltonian in the one-variable case. Note that the Hamiltonian itself is also a scalar.

The requirement that the Hamiltonian be maximized at every point of time with respect to the control variables stands as before. We can thus still write the condition

(8.36) Max H with respect to u, for all t ∈ [0, T]

But u is now an m-vector, so we must at each point of time make a choice of m control values.

As to the equations of motion, those for the state variables always come with the problem; they are just the n differential equations appearing in (8.34'). These n equations are still expressible in terms of the derivatives of the Hamiltonian:

(8.37) ẏj = ∂H/∂λj   (j = 1,..., n)

Similarly, the equations of motion for the costate variables are simply

(8.38) λ̇j = -∂H/∂yj   (j = 1,..., n)

Together, (8.37) and (8.38)—representing a total of 2n differential equations—constitute the canonical system of the present problem. The 2n arbitrary constants that are expected to arise from these differential equations can be definitized by using the 2n boundary conditions.

If desired, the canonical system can also be translated into vector notation. The vector-notation translation of (8.37) is

(8.37') ẏ = ∂H/∂λ′

The noteworthy thing about this is that H is differentiated with respect to λ′ (a row vector), not λ (a column vector). This is in line with the mathematical rule that the derivative of a scalar with respect to a row (column) vector is a column (row) vector. By the same rule, writing y′ for the transpose of y, the equivalent statement for (8.38) is

(8.38') λ̇ = -∂H/∂y′

Transversality Conditions

The problem as stated above assumes fixed endpoints. If the terminal point is variable, we again need appropriate transversality conditions. Here, these conditions are essentially no different from the transversality conditions for the one-state-variable case. Referring back to (7.30), we recall that, for the one-state-variable problem, the transversality conditions are derived from the last two terms of the dV/dε expression set equal to zero:

[H]ₜ₌ₜ ΔT - λ(T) Δy_T = 0

When there are n state variables in the problem, those two terms will expand to an expression with (n + 1) terms:

[H]ₜ₌ₜ ΔT - Σⱼ₌₁ⁿ λj(T) Δyj_T = 0

From this, the following two basic transversality conditions arise:

(8.40) [H]ₜ₌ₜ = 0   [if T is free]
(8.41) λj(T) = 0   [if yjT is free]

Clearly, these are essentially similar to the conditions for the one-state-variable case. The only differences are that (1) the Hamiltonian H in (8.40) for the present n-variable problem contains more terms than its one-variable counterpart—a look at (8.35') makes clear why this should be the case—and (2) there will be as many conditions of the λj(T) = 0 type in (8.41) as there are state variables with free terminal states.

The transversality conditions for the terminal-curve and truncated-terminal-line problems are also essentially similar to those for the one-state-variable case.
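To make the vector notation concrete, here is a minimal numerical sketch (the functional forms are hypothetical, chosen only for illustration, with n = 2 state variables and m = 1 control variable). It shows that the Hamiltonian (8.35') is a scalar even though y, λ, and f are vectors, and that differentiating H with respect to the costate vector, as in (8.37'), simply reproduces the f vector:

```python
import numpy as np

# Hypothetical forms: F(t, y, u) = -u^2, f(t, y, u) = (y2, u)'
F = lambda t, y, u: -u[0] ** 2                   # scalar integrand
f = lambda t, y, u: np.array([y[1], u[0]])       # n x 1 vector of f^j functions

def hamiltonian(t, y, u, lam):
    """H = F(t, y, u) + lam' f(t, y, u), cf. (8.35'); returns a scalar."""
    return F(t, y, u) + lam @ f(t, y, u)         # @ is the scalar product lam' f

y = np.array([1.0, 2.0])
u = np.array([3.0])
lam = np.array([0.5, -1.0])

H = hamiltonian(0.0, y, u, lam)   # -u^2 + lam1*y2 + lam2*u = -9 + 1 - 3 = -11
ydot = f(0.0, y, u)               # canonical system: ydot = dH/dlam' gives back f
```

Since H is linear in λ, the gradient of H with respect to λ is exactly f(t, y, u), which is why (8.37') recovers the equations of motion.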

Suppose, for example, that the terminal time is free, and two of the state variables, y1 and y2, are required to have their terminal values tied to the terminal time by the relations

y1T = φ(T) and y2T = ψ(T)

Then, for a small ΔT, we may expect the following to hold:

Δy1T = φ′(T) ΔT and Δy2T = ψ′(T) ΔT

Using these to eliminate Δy1T and Δy2T, we can rewrite the (n + 1)-term expression above, and the transversality condition that emerges for this terminal-curve problem is

(8.42) [H - λ1φ′(T) - λ2ψ′(T)]ₜ₌ₜ = 0

Condition (8.42) and the two terminal-curve relations together provide three relations to determine the three unknowns T, y1T, and y2T. The other state variables, assumed here to have free terminal values, are still subject to the type of transversality condition in (8.41).

If the state variables have truncated vertical terminal lines, with minimum permissible levels yj min, the transversality condition is

(8.44) λj(T) ≥ 0   yjT ≥ yj min   (yjT - yj min)λj(T) = 0

This condition serves as a replacement for (8.41). Similarly, for the problem with a maximum permissible terminal time T_max, the transversality condition is

(8.47) [H]ₜ₌ₜ ≥ 0   T ≤ T_max   (T_max - T)[H]ₜ₌ₜ = 0

This condition is another type of replacement—this time for (8.40).

An Example from Pontryagin

While the conceptual extension of the maximum principle to the multivariable problem is straightforward, the actual solution process can become much more involved. A taste of the increasing complexity can be had from a purely mathematical example adapted from the work of Pontryagin et al. (The Mathematical Theory of Optimal Processes, Interscience, New York, 1962, pp. 23-27). Economic applications with more than one state variable are not nearly as common as those with one state variable; for this reason, a purely mathematical example is used here. Aside from involving two state variables, this example also serves to illustrate how to handle a problem that contains a higher-order derivative of the state variable in the equation of motion.

The example actually starts out with a single state variable y, which we wish to move from an initial state y(0) = y₀ to the terminal state y(T) = 0 in the shortest possible time; that is, we have a time-optimal problem. Instead of the usual equation of motion in the form ẏ = f(t, y, u), however, we have this time a second-order differential equation

ÿ = u   where ÿ = d²y/dt²

and where u is assumed to be constrained to the control region [-1, 1]. Since the second-order derivative ÿ is not admissible in the standard format of the optimal control problem, we have to translate this second-order differential equation into an equivalent pair of first-order differential equations. To effect the translation, we introduce a second state variable z:

z ≡ ẏ   implying ż = ÿ

Then the equation ÿ = u can be rewritten as ż = u, and we now have two first-order equations of motion,

ẏ = z and ż = u

one for each state variable. It is this translation process that causes the problem to become one with two state variables and one control variable. The boundary condition on the new variable z is equivalent to the specification of an initial value for ẏ; the specification of both y(0) and ẏ(0) is a normal ingredient of a problem involving a second-order differential equation in y.

The problem is then to

Maximize   ∫₀ᵀ (-1) dt
subject to ẏ = z
           ż = u
           y(0) = y₀ (given)   y(T) = 0
           z(0) = z₀ (given)   z(T) = 0
and        u(t) ∈ [-1, 1]   (T free)

Step i
The Hamiltonian function is

(8.48) H = -1 + λ1z + λ2u

Being linear in u, this function leads to corner solutions for u. In view of the given control region, the optimal control should be

(8.49) u* = 1 whenever λ2 > 0   and   u* = -1 whenever λ2 < 0

thereby yielding a bang-bang solution.

Step ii
Since u* depends on λ2, we must next turn to the equations of motion for the costate variables:

λ̇1 = -∂H/∂y = 0   implying λ1 = c1 (constant)
λ̇2 = -∂H/∂z = -λ1

Thus, the time path for λ2 is linear:

(8.50) λ2(t) = -c1t + c2   (c1, c2 arbitrary)

Excepting the special case of c1 = c2 = 0, this path plots as a straight line that can intersect the t axis at most once. At such an intersection, λ2 switches its algebraic sign and, according to (8.49), the optimal control u* must make a concomitant switch from 1 to -1, or the other way around. Since the linear function λ2(t) can cross the t axis at most once, there cannot be more than one switch in the value of u*.

Step iii
Next, we examine the state variables. The variable z has the equation of motion ż = u. Inasmuch as u can optimally take only two possible values, u* = ±1, there are only two possible forms that equation can take: ż = 1 and ż = -1. We consider them one at a time. This will enable us to depict the movement of y and z in a phase diagram in the yz plane, with z (≡ dy/dt) on the vertical axis and y on the horizontal axis.

Possibility 1  u* = 1
In this case, straight integration of ż = 1 yields

(8.51) z(t) = t + c3

Then, since ẏ = z = t + c3, further integration yields a quadratic time path for y:

(8.52) y(t) = ½t² + c3t + c4   (c3, c4 arbitrary)

To definitize the constants, we note that c3 represents the value of z at t = 0. If we denote z(0) by z₀, then c3 = z₀; and the initial condition y(0) = y₀ tells us that c4 = y₀. Thus (8.51) and (8.52) may be definitized to

(8.53) z(t) = t + z₀
(8.54) y(t) = ½t² + z₀t + y₀

The way these last two equations are written, y and z appear as two separate functions of time. But it is possible to condense them into one single equation by eliminating t and expressing y in terms of z. Solving (8.53) for t and substituting into (8.54), we can rewrite the latter as

(8.54') y = ½z² + k   where k ≡ y₀ - ½z₀²

In the yz plane, this equation plots as a family of parabolas, as illustrated in Fig. 8.1. Each parabola is associated with a specific value of k, which is based on the values of y₀ and z₀. In particular, when k = 0, we get the parabola that passes through the point of origin. Once y₀ and z₀ are numerically specified, we can pinpoint not only the relevant parabola, but also the exact starting point on that parabola. Above the horizontal axis, where z = ẏ > 0, the directional arrowheads attached to the curves should uniformly point eastward; the opposite is true below the horizontal axis. Note also that the terminal value of z is z(T) = 0, so that the designated terminal state is the point of origin. According to the arrowheads, that terminal state is attainable under u* = 1 only if we start at some point on the heavy arc (labeled A0) of the parabola that passes through the point of origin.

Possibility 2  u* = -1
Under the second possibility, the time path for z is

(8.55) z(t) = -t + z₀   [cf. (8.53)]

Since this result also represents the ẏ path, further integration and definitization of the constant gives us

(8.56) y(t) = -½t² + z₀t + y₀

Combining the last two equations by eliminating t, we obtain the single result

(8.56') y = -½z² + h   where h ≡ y₀ + ½z₀²

which is comparable to (8.54'). Like (8.54'), this new condensed equation plots as a family of parabolas in the yz plane. Each parabola is uniquely associated with a value of h, with the one passing through the origin corresponding to h = 0. But because of the negative coefficient of the z² term, the curves all bend the other way. As before, the directional arrowheads uniformly point eastward above the horizontal axis, but westward below. By observing these arrowheads, we see that only the points located on the heavy arc B0 can lead to the designated terminal state under u* = -1. Note that the two shorthand symbols k and h are not the same; they should be kept distinct in our thinking.

Step iv
Although we have analyzed the two cases of u* = 1 and u* = -1 as separate possibilities, in a bang-bang solution both possibilities will, as may be expected, become realities, one sequentially after the other. It remains to establish the link between the two and to find out how and when the switch in the optimal control is carried out. This link can best be developed diagrammatically; let us therefore place Figs. 8.1 and 8.2 together in Fig. 8.3. Since the heavy arcs in Figs. 8.1 and 8.2 are the only routes that can lead ultimately to the desired terminal state, there is no need to reproduce in Fig. 8.3 every part of every parabola in Figs. 8.1 and 8.2. All we need are the two heavy arcs, plus the portions of the parabolas flowing toward and culminating in arc A0 and arc B0. Even though the two arcs appear to have been merged into a single curve, they are taken from different diagrams, and they call for different optimal controls—u* = +1 for arc A0 and u* = -1 for arc B0.

Should the actual initial point be located anywhere on arc A0 or arc B0, the selection of the proper control would start us off on a direct journey toward the origin; no switch in the control is needed. For an initial point located off the two heavy arcs, however, the journey to the origin would consist of two legs. The first leg is to take us from the initial point to arc A0 or B0, as the case may be, and the second leg is to take us to the origin along the latter arc. If, for instance, point C is the initial point, then the first leg is to follow a parabolic trajectory to point D. Since the parabola involved is taken from Fig. 8.2 under Possibility 2, the control appropriate to the circumstance is u* = -1. Once point D is reached, we should detour onto arc A0, and only by so doing are we able to proceed on course along the optimal path CDO. Since arc A0 comes from Fig. 8.1 under Possibility 1, the control called for on the second leg is u* = 1. Other initial points off the heavy arcs (such as point E) can be similarly analyzed. For each such initial point, there is a unique optimal path leading to the designated terminal state, and that path must involve one (and only one) switch in the optimal control.

The optimal movement of the control variable over time, in such cases, traces out a step function composed of two linear segments. And, bearing in mind that the value of z represents the rate of change of y, the optimal movement of y over time is never monotonic: an initial rise must be followed by a subsequent fall, or the other way around.
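The two-leg CDO construction can be sketched numerically. The following is a minimal sketch (the initial point is arbitrary, not from the text): it restates the closed-form paths (8.53)-(8.56), locates the switching point by intersecting the u* = -1 parabola (8.56') through the initial point with arc A0, and uses the switching-time expressions that agree with those derived in the next subsection.

```python
import math

def bang_bang_plan(y0, z0):
    """Time-optimal plan for y'' = u, |u| <= 1, steering (y, z) to the origin,
    for an initial point whose first leg uses u* = -1 (i.e., above arc A0).
    h = y0 + z0^2/2 identifies the u* = -1 parabola (8.56') through (y0, z0)."""
    h = y0 + 0.5 * z0 ** 2
    switch = (0.5 * h, -math.sqrt(h))   # point D: intersection with y = z^2/2
    tau = z0 + math.sqrt(h)             # switching time: z falls at unit rate
    T = z0 + 2.0 * math.sqrt(h)         # total travel time: tau + sqrt(h)
    return switch, tau, T

def state(t, y0, z0, tau):
    """Closed-form state path along the two legs, cf. (8.55)-(8.56), (8.53)-(8.54)."""
    if t <= tau:                        # first leg, u* = -1
        z = -t + z0
        y = -0.5 * t ** 2 + z0 * t + y0
    else:                               # second leg, u* = +1, restarted at D
        s = t - tau
        yD = -0.5 * tau ** 2 + z0 * tau + y0
        zD = -tau + z0
        z = s + zD
        y = 0.5 * s ** 2 + zD * s + yD
    return y, z

switchD, tau, T = bang_bang_plan(2.0, 0.0)   # h = 2, so D = (1, -sqrt(2))
yT, zT = state(T, 2.0, 0.0, tau)             # the path ends at the origin
```

Evaluating `state` at the total travel time T confirms that the two parabolic legs meet at D and terminate at (0, 0), with exactly one switch in u*.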

More about the Switching

Having demonstrated the general pattern of the bang-bang solution, we can proceed a step further with the discussion of the CDO path: to find the coordinates of the switching point D, and also to determine the exact time τ at which the switch in the control should be made.

Point D is the intersection of two curves. One of these, the path through the initial point C, being part of a parabola taken from Fig. 8.2 under Possibility 2, is described by

(8.57) y = -½z² + h

where, since the intercept on the y axis is positive, h is positive. The other curve, arc A0, being part of the parabola in Fig. 8.1 that passes through the origin, falls under Possibility 1 and is described by

(8.58) y = ½z²   [(8.54') with k = 0]

Solving (8.57) and (8.58) simultaneously yields y = ½h and z = -√h (the positive square root is inadmissible, since the switching point lies below the horizontal axis). Thus

(8.59) Point D = (½h, -√h)

And the switch in u* should be made precisely when y = ½h and z (= ẏ) = -√h.

The calculation of the switching time τ can be undertaken from the knowledge that, at time τ, z attains the value -√h. From the initial point C, the movement of z follows the pattern given in (8.55): z(t) = -t + z₀. Setting z(τ) equal to -√h, we obtain

(8.60) τ = z₀ + √h

This result shows that the larger the values of z₀ and h, the longer it would take to reach switching time. A look at Fig. 8.3 will confirm the reasonableness of this conclusion: a larger z₀ and/or a larger h would mean an initial point farther away from arc A0, where the switching point is to be found, and hence a longer journey to that arc. It should be realized, of course, that (8.60) is appropriate only for switching points located on arc A0, because the assumptions of a negative z and a positive h used in its derivation are applicable only to arc A0. For switching points located on arc B0, the expression for τ is different, although the procedure for finding it is the same.

It is also not difficult to calculate the amount of time required to travel from point D to the final destination at the origin. On that second leg of the journey, the movement of z falls under Possibility 1. For the present purpose, we should consider τ as the initial time and z(τ) = -√h as the initial value z₀, so that the equation z(t) = t + z₀ should be modified to z(t) = t - √h. At the terminal time—which we shall denote for the time being by T′—the left-hand-side expression becomes z(T′) = 0, and the right-hand-side one becomes T′ - √h. Hence

(8.61) T′ = √h

The reason we use the symbol T′ instead of T is that the symbol T is supposed to represent the total time of travel from the initial point C to the origin, whereas the quantity shown in (8.61) tells only the travel time from point D to the origin. To get the optimal value of T, both (8.60) and (8.61) should be added together:

T = τ + T′ = z₀ + 2√h

The preceding discussion covers most aspects of the time-optimal problem under consideration. (Pontryagin et al. dropped the example at an earlier point.) About the only thing not yet resolved is the matter of the arbitrary constants c1 and c2 in the λ2(t) path (8.50). Since we have already found τ and T, there is little point in worrying about c1 and c2, whose only purpose is to enable us to definitize the λ2(t) path, so as to know when λ2 will change sign and thereby trigger the switch in the optimal control. For the interested reader, we mention here that c1 and c2 can be definitized by means of the following two relations. First, λ2 must be zero at time τ (at the juncture of a sign change); hence, from (8.50),

-c1τ + c2 = 0

Second, since z(T) = 0 and u(T) = 1, the transversality condition [H]ₜ₌ₜ = 0 reduces to -1 + λ2(T) = 0. Substituting (8.50) into this simplified transversality condition yields another equation relating c1 and c2:

-c1T + c2 = 1

Together, these two relations enable us to definitize c1 and c2.

EXERCISE 8.4
1 On the basis of the optimal path CDO in Fig. 8.3, plot the time path y*(t), and describe the way y* changes its value over time, bearing in mind that the value of z represents the rate of change of y.
2 On the basis of the optimal path CDO in Fig. 8.3, plot the time path u*(t).

3 For each of the following pairs of y₀ and z₀ values, indicate whether the resulting optimal path in Fig. 8.3 would contain a switch in the control, and explain why:
(a) (y₀, z₀) = (2, 1)   (b) (y₀, z₀) = (2, -2)   (c) (y₀, z₀) = (-2, 1)
4 Let point E in Fig. 8.3 be the initial point. (a) What is the optimal path in the figure? (b) Describe the time path u*(t). (c) Describe the time path y*(t). Plot the y*(t) path.
5 Let point E in Fig. 8.3 be the initial point. (a) Find the coordinates of the switching point on arc B0. (b) Find the switching time τ. (c) Find the optimal total time for traveling from point E to the destination at the origin.

8.5 ANTIPOLLUTION POLICY

In the Forster model on energy use and environmental quality discussed in the preceding chapter, pollution is taken to be a flow variable. This is exemplified by auto emission, which, while currently detrimental to the environment, dissipates quickly and does not accumulate into a long-lasting stock. But in other types of pollution, such as radioactive waste and oil spills, the pollutants do emerge as a stock and produce lasting effects. A formulation with pollution as a stock variable is considered by Forster in the same paper cited previously ("Optimal Energy Use in a Polluted Environment," Journal of Environmental Economics and Management, 1980, pp. 321-333).

The Pollution Stock

As before, we use the symbol E to represent the extraction of fuel and energy use. The use of energy generates a flow of pollution; the symbol P, however, will now denote the stock (rather than flow) of pollution. If the amount of pollution flow is directly proportional to the amount of energy used, then we can write it as αE (α > 0). If, moreover, the pollution stock is subject to exponential decay at the rate δ > 0, then from this consideration alone we have Ṗ/P = -δ, so that Ṗ = -δP. Finally, let A stand for the level of antipollution activities, and assume that A can reduce the pollution stock in a proportionate manner: Ṗ = -βA (β > 0). Combining these factors that affect P, we have

(8.63) Ṗ = αE - δP - βA

The Stock of Energy Resource

The implementation of antipollution activities A in itself requires the use of energy; that is, A causes a reduction in S, the stock of the energy resource. Assuming a proportional relationship between A and S, we can, by the proper choice of the unit of A, simply write Ṡ = -A from this consideration. But since S is also reduced by the use of energy E in other economic activities, we should also have Ṡ = -E. Combining these considerations, we have

(8.64) Ṡ = -A - E

The Control Problem

Since the relations in (8.63) and (8.64) delineate the dynamic changes of P and S, respectively, these equations can evidently serve as the equations of motion in this model. This points to P (pollution stock) and S (fuel stock) as the state variables. A further look at (8.63) and (8.64) also reveals that E (energy use) and A (antipollution activities) should play the role of control variables in the present analysis. The formulation thus contains two state variables and two control variables.

If we adhere to the same utility function used before,

U = U[C(E), P]   (U_C > 0, U_CC < 0, U_P < 0; C′ > 0, C″ < 0)   [same as (7.71)]

then the dynamic optimization problem may be stated as

(8.65)  Maximize   ∫₀ᵀ U[C(E), P] dt
        subject to Ṗ = αE - δP - βA
                   Ṡ = -A - E
                   P(0) = P₀ > 0   P(T) ≥ 0 free   (T given)
                   S(0) = S₀ > 0   S(T) ≥ 0 free
        and        E ≥ 0   0 ≤ A ≤ Ā

Two aspects of this problem are comment-worthy.

MORE ON OPTIMAL CONTROL 237 left free. Given the initial pollution stock Po > 0. Second. to maximize H. Substitution of this into (8.236 PART 3: OPTIMAL CONTROL THEORY CHAPTER 8. however. the control region is [0. with Besides. . is a From (8. too. A]. must also be a constant.(T) constantby (8. since A. For a problem with a truncated terminal line.68) results in the alternative condition The optimal choice of A thus depends critically cm A. consider the equations of motion of the costate variables: Maximizing the Hamiltonian As usual. requires Up to be constant. where E 2 0. it is required that A. And for A. where A denotes the maximum feasible level of antipollution activities. = 0 [by (8. and the right-hand-side boundary A* = A should be selected if aH/aA is positive. we see that But the zero value for A p would imply. H should be maximized with respect to A. means that = 0. As (8.. Forster shows that it is not possible to have an interior solution in the present model. which in turn implies that ~ + UCCn< 0. which.70. there can only be one value of P that would make U p take any particular constant value. an interior solution for A* must be ruled out in the present model.Ci(E) = 0. the left-hand-side boundary solution A* = 0 shouid be chosen if aH/aA is negative. Consequently. That is. too. In addition.70). For E2 the control region is [0.6B1). m). the transversality condition includes the stipulation that With a positive P ( T ) . Since budget considerations and other factors are likely to preclude unlimited pursuit of environmental purification. then PAp + A. by (8. This means that there is a truncated vertical terminal line for P . The Policy Choices Categorically. SO H indeed is maximized rather ~ c ~ ~ Note that d2H/aE2 = u than minimized. But the constancy of A.68)] Since As is a constant by (8. the assumption of an upper bound A does not seem unreasonable.66) shows. both control variables E and A are confined to their respective control r e e n s . 
the Kuhn-Tucker condition is dH/aE _< 0. It then follows from complem6ntary slackness that we must satisfy the condition If A* is an interior solution. must be a constant if A* is an interior solution. f f ] . subject only to nonnegativity. this last equation shows that A.67). H is linear in the variable A. Thus. we must postulate E > 0.that U. we start the solution process by writing the Hamiltonian function: where the subscript of each costate variable A indicates the state variable associated with it. Thus P . To see this. and another one for S. To maximize H with respect to the control variable E . Since U is monotonic in P. with the complementary-slackness proviso that E(aH/aE) = 0. which contradicts the assumptions that ?LC and C' are both positive. But inasmuch as we can rule out the extreme case of E = 0 (which would totally halt consumption production). to have a constant P is to f x i the terminal pollution stock at P ( T ) = Po > 0. E is restricted to the closed control set [ O . the optimal choice of antipollution activities A i either an s interior solution or a boundary solution.
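Condition (8.67) determines the optimal E once the costate values are known. As a purely illustrative sketch—with the hypothetical functional forms U(C, P) = ln C - P and C(E) = √E, which are not from the text and give U_C C′(E) = 1/(2E), a decreasing function of E—the condition U_C C′(E) = λ_S - αλ_P can be solved by bisection:

```python
def solve_E(lam_s, lam_p, alpha, lo=1e-9, hi=1e6, tol=1e-12):
    """Solve U_C * C'(E) = lam_s - alpha*lam_p for E, cf. (8.67), under the
    hypothetical forms U = ln C - P and C = sqrt(E), so U_C*C'(E) = 1/(2E)."""
    target = lam_s - alpha * lam_p       # must be positive for a solution
    g = lambda E: 1.0 / (2.0 * E)        # marginal utility of energy use
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if g(mid) > target:              # g is decreasing: solution lies above mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E_star = solve_E(lam_s=1.0, lam_p=-0.5, alpha=2.0)   # target = 2, so E* = 1/4
```

The bisection works for any strictly decreasing marginal-utility schedule; the closed form here is E* = 1/(2(λ_S - αλ_P)), which the numerical routine reproduces.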

Boundary Solutions for Control Variable A

The only viable policies are therefore the boundary solutions A* = 0 (no attempt to fight pollution) or A* = Ā (abatement of pollution at the maximal feasible rate). In view of (8.68), we have

(8.74) A* = 0 if λ_S > -βλ_P
       A* = Ā if λ_S < -βλ_P

These two policies share a common feature. From (8.67),

(8.73) U_C C′(E) = λ_S - αλ_P   [from (8.67)]

Interpreted economically, this says that the effect of energy use on utility through consumption, U_C C′(E), should under both policies be equated to the shadow price of the depletion of the energy resource, λ_S, adjusted for the shadow value of pollution via the -αλ_P term. But the two policies differ in their stance on abatement. The economic meaning of the first line in (8.74) is that a hands-off policy on pollution is suited to the situation where the shadow price of the energy resource, λ_S, exceeds that of pollution abatement, measured by -βλ_P. In this situation, it is not worthwhile to expend the resource on antipollution activities because the resource cost is greater than the benefit. As the second line shows, fighting pollution at the maximal rate is justified in the opposite situation. In distinguishing between these two situations, the parameter β, which measures the efficacy of antipollution activities, plays an important role.

Let us look further into the implications of the A* = 0 case. Since the pollution stock is left alone in this case, the terminal stock of pollution is certainly positive. With P(T) > 0, the transversality stipulation P(T)λ_P(T) = 0 in (8.72) mandates that λ_P(T) = 0. This suggests that λ_P, the shadow price of pollution, which is initially negative, should increase over time to a terminal value of zero to satisfy the complementary-slackness condition. Now, if we differentiate (8.73) totally with respect to t, remembering that λ_S is a constant, we find

[U_CC C′(E)² + U_C C″(E)] Ė = -α λ̇_P

Since the expression in brackets is negative, Ė and λ̇_P have the same sign; with λ_P rising over time, E must be rising over time as well. The increasing use of energy over time will, in the end, lead to the exhaustion of the energy resource: since λ_S is a constant—a positive constant, because λ_S denotes the shadow price of a valuable resource—the transversality stipulation S(T)λ_S(T) = 0 [cf. (8.72)] can be satisfied only if S(T) = 0, signifying the exhaustion of the energy resource stock at the terminal time T.

Turning to the other policy, A* = Ā, we have pollution abatement at the maximal feasible rate. Despite the maximal effort, however, we should not expect the antipollution activities to run the pollution stock P all the way down to zero. Given the initial pollution stock P₀ > 0, assume that there exists a tolerably low level of P below which the marginal resource cost of further pollution-abatement effort does not compensate for the marginal environmental betterment. Upon the assumptions that U_P < 0 and U_PP < 0, as P steadily diminishes as a result of antipollution activities, U_P will steadily rise from a negative level toward zero. When the P stock becomes sufficiently small, U_P will reach a tolerable level where we can no longer justify the resource cost of further pollution-abatement effort. If so, the terminal stock of pollution is again positive, and the rest of the story is qualitatively the same as in the case of A* = 0: in particular, the increasing use of energy over time again leads to the exhaustion of the energy resource stock at time T.

EXERCISE 8.5
1 What will happen to the optimal solution if the control region for the control variable A in (8.65) is changed to A ≥ 0?
2 Let a discount factor e^(-ρt) (ρ > 0) be incorporated into problem (8.65).
(a) Write the current-value Hamiltonian H_c for the new problem.
(b) What are the conditions for maximizing H_c?
(c) Rewrite these conditions in terms of λ_P and λ_S (instead of m_P and m_S), and compare them with (8.67) and (8.68) to check whether the new conditions are equivalent to the old.
3 In the new problem with the discount factor in Prob. 2, check whether it is possible to have an interior solution for the control variable A. In answering this question, use the regular (present-value) Hamiltonian H.
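The qualitative contrast between the two boundary policies can be illustrated by a quick Euler simulation of the state equations (8.63) and (8.64). The parameter values, and the choice of constant paths for E and A, are hypothetical and serve only to show the direction of the effects:

```python
def simulate(P0, S0, E, A, alpha, beta, delta, T, n=10000):
    """Euler integration of P' = alpha*E - delta*P - beta*A and S' = -A - E,
    cf. (8.63)-(8.64), with E and A held constant for illustration."""
    dt = T / n
    P, S = P0, S0
    for _ in range(n):
        P += dt * (alpha * E - delta * P - beta * A)
        S += dt * (-A - E)
    return P, S

# Hands-off policy (A* = 0) versus maximal abatement (A* = A_bar = 1):
P_off, S_off = simulate(P0=5.0, S0=100.0, E=1.0, A=0.0,
                        alpha=1.0, beta=0.5, delta=0.1, T=10.0)
P_max, S_max = simulate(P0=5.0, S0=100.0, E=1.0, A=1.0,
                        alpha=1.0, beta=0.5, delta=0.1, T=10.0)
# Abatement lowers the terminal pollution stock but drains the resource faster.
```

Consistent with the text, maximal abatement ends with a smaller pollution stock P(T) but a smaller energy stock S(T) as well, which is precisely the trade-off that the shadow-price comparison in (8.74) adjudicates.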

For that case. we may expect the transversality condition to take the form (9. the infinite horizon is specified to be accompanied by a fixed terminal state. it is shown that the maximum-principle condition-including the transversality condition for what we have been referring to as the horizontal-terminal-line problem-apply as they do to a finite-horizon problem. Pontryagin et al. as t . the transversality condition is When the problem is autonomous. in case the variable terminal state is subject to a prespecified minimum level. sality condition to be (9.3) H=O forallt€[O.4) lim A(t) t-m = 0 [transversality condition for free terminal state] Similarly. which.~) In our discussion of problems of the calculus of variations.y. in the infinite-horizon context.y . Pontryagin et al. One main issue is the convergence of the objective functional. pp. there is no need to elaborate on the topic here. 5. moreover.. But the other major issue-whether finite-horizon transversality conditions can be generalized to the infinite-horizon context-deserves some further discussion. we may expect the infinite-horizon transver.1) can be expressed not only as lim H t-m = 0 but more broadly as (9.1. The fact that the transversality condition for the horizontal-terminalline problem can be adapted from the finite-horizon to the infinite-horizon context makes it plausible to expect that a similar adaptation will work for other problems. For the latter.m.1 TRANSVERSALITY CONDITIONS In their celebrated book. The same can be said about problems of optimal control.CHAPTER 9 INFINITE-HORIZON PROBLEMS 241 infinity" is assumed to take the specific form1 CHAPTER 9 INFINITEHORIZON PROBLEMS In other words. 189-191.3 to derive the finite-horizon 'L. . but at any point of time at all.in] t) = 0 The Variational View i 9. The only digression into the case of an improper-integral functional is a three-page discussion of an autonomous problem in which the "boundary condition a t I That (9. 
so the condition H = 0 can be checked not only at t = T.5) t-m lim h(t) 2 0 and t-+m lim ~ ( t ) [ ~ (. for instance. S . Accordingly. New York. If the terminal state is free as t -t m. because serious doubts have been cast upon their applicability by some alleged counterexamples in optimal control theory. Interscience. Since we have discussed this problem in Sec. concern themselves almost exclusively with problems with a finite planning horizon. the transversality condition pertaining to an infinite-horizon autonomous problem with the boundary condition (9. The Mathematical Theory of Optimal Processes. it was pointed out that the extension of the planning horizon to infinity entails certain mathematical complications. the maximized value of the Hamiltonian is known to be constant over time.4) is a reasonable transversality condition can be gathered from an adaptation of the procedure used in Sec. is an improper integral. We shall examine such counterexamples and then argue that the counterexamples may be specious. 7. 1962.

That (9.4) is a reasonable transversality condition can be gathered from an adaptation of the procedure used in Sec. 7.3 to derive the finite-horizon transversality conditions. There, we first combined — via a Lagrange multiplier — the original objective functional V with the equation of motion for y into a new functional 𝒱, which, after integrating the second integral by parts, we rewrote as

𝒱 = ∫₀^T [H(t, y, u, λ) + λ̇y] dt − λ(T)y_T + λ(0)y₀   [from (7.22')]

To generate neighboring paths for comparison with the optimal control path and the optimal state path, we adopted perturbing curves p(t) for u and q(t) for y, to write

u(t) = u*(t) + εp(t)  and  y(t) = y*(t) + εq(t)   [from (7.24) and (7.25)]

Similarly, for variable T and y_T, we wrote

T = T* + εΔT  and  y_T = y_T* + εΔy_T   [from (7.26)]

As a result, the functional 𝒱 was transformed into a function of ε. The first-order condition for maximizing 𝒱 then emerged as [from (7.30)]

d𝒱/dε = ∫₀^T [(∂H/∂y + λ̇)q(t) + (∂H/∂u)p(t)] dt + [H]_{t=T} ΔT − λ(T) Δy_T = 0

When adapted to the infinite-horizon framework, this equation becomes

(9.6)  d𝒱/dε = ∫₀^∞ [(∂H/∂y + λ̇)q(t) + (∂H/∂u)p(t)] dt + lim_{t→∞} H ΔT − lim_{t→∞} λ(t) Δy_T = 0

Since the perturbing curves as well as ΔT and Δy_T are arbitrary, each of the three component terms in (9.6) must vanish individually. The integral term vanishes along any path satisfying the maximum-principle conditions. With regard to the second term, the terminal time T is perforce not fixed in an infinite-horizon problem, so that ΔT is nonzero; to make the second term vanish, we must impose

(9.7)  lim_{t→∞} H = 0   [infinite-horizon transversality condition]

This condition appears to be identical with (9.2). Unlike the latter, however, (9.7) plays the role of a general transversality condition for infinite-horizon problems, applicable regardless of whether the terminal state is fixed or free, precisely because the terminal time is perforce not fixed in such problems. It therefore carries greater significance.

For the third term in (9.6), on the other hand, it does matter whether y_T is fixed or free. Suppose, first, that the terminal state is fixed, lim_{t→∞} y(t) = y_∞. Since this implies Δy_T = 0 at time infinite, the third term will vanish without requiring any restriction on the terminal value of λ. If, however, the terminal state is free, we no longer have Δy_T = 0 at time infinite, so that Δy_T is nonzero. Consequently, to make the third term in (9.6) vanish, it is necessary to impose the condition lim_{t→∞} λ(t) = 0 — the same as in (9.4).

We have based the justification of (9.7) on the simple fact that the terminal time is perforce not fixed in infinite-horizon problems. A full-fledged proof for this condition has been provided by Michel.² The proof is quite long, and it will not be reproduced here. Instead, we shall add a few words of economic interpretation to clarify its intuitive appeal. Following Dorfman (see Sec. 8.1), we let the state variable y represent the capital stock of a firm, the control variable u the business policy decision, and the F function the profit function of the firm. Then the Hamiltonian function sums up the overall (current plus future) profit prospect associated with each admissible business policy decision. So long as H remains positive, there is yet profit to be made by the appropriate choice of the control. To require H → 0 as t → ∞ means that the firm should see to it that all the profit opportunities will have been taken advantage of as t → ∞. Intuitively, such a requirement is appropriate whether or not the firm's terminal state (capital stock) is fixed, and this provides the rationale for the transversality condition (9.7).

Thus (9.7) is not in dispute. In contrast, the transversality condition for the free terminal state, (9.4) — lim_{t→∞} λ(t) = 0 — has been thrown into doubt by several writers who claim to have found counterexamples in which that condition is violated. If true, such counterexamples would, of course, disqualify that condition as a necessary condition. We shall argue, however, that those are not genuine counterexamples, because the problems involved do not have a truly free terminal state, so that the transversality condition (9.4) ought not to have been applied in the first place. Despite the preceding considerations, many writers consider the question of infinite-horizon transversality conditions to be in an unsettled state.

²Philippe Michel, "On the Transversality Condition in Infinite Horizon Optimal Problems," Econometrica, July 1982, pp. 975-985.

9.2 SOME COUNTEREXAMPLES REEXAMINED

The Halkin Counterexample

Perhaps the most often-cited counterexample is the following autonomous control problem constructed by Halkin:³

(9.8)  Maximize ∫₀^∞ (1 − y)u dt
       subject to ẏ = (1 − y)u
       and y(0) = 0, u(t) ∈ [0, 1]

Inasmuch as the integrand function F is identical with the f function in the equation of motion, the objective functional can be rewritten as

(9.9)  ∫₀^∞ ẏ dt = [y(t)]₀^∞ = lim_{t→∞} y(t) − y(0) = lim_{t→∞} y(t)

Viewed in this light, the objective functional is seen to depend exclusively on the terminal state at the infinite horizon; the initial and intermediate state values attained on an optimal path do not matter at all, except for their role as stepping stones leading to the final state value. For such a problem, known as a terminal-control problem, to maximize (9.9) is tantamount to requiring the state variable to take one specific value — the upper bound of y — as its terminal value.

To find the upper bound of y, let us first find the y(t) path. The equation of motion for y, which can be written as ẏ + uy = u, is a first-order linear differential equation with a variable coefficient. Its general solution is⁴

(9.10)  y(t) = ce^{−∫u dt} + 1   (c arbitrary)

Converting the indefinite integral ∫u dt into the definite integral ∫₀^t u dτ — and writing Z(t) as a shorthand symbol for the latter — we have

y(t) = ke^{−Z(t)} + 1   (k arbitrary)

The constant k differs from the constant c, because it has absorbed another constant that arises in connection with the lower limit of integration of the definite integral. Next, by setting t = 0 in this solution and making use of the initial condition y(0) = 0, we find

0 = y(0) = ke⁰ + 1 = k + 1,  so that k = −1

Hence, the definite solution is

(9.10')  y(t) = 1 − e^{−Z(t)}

Since u(t) ∈ [0, 1], Z(t) is nonnegative, the value of e^{−Z(t)} must lie in the interval (0, 1], and the value of y(t) must lie in the interval [0, 1). It follows that any u(t) path that makes Z(t) → ∞ as t → ∞ — thereby making e^{−Z(t)} → 0 and y(t) → 1 — will maximize the objective functional in (9.8). Hence, there is no unique optimal control. For instance, any constant time path for u, u*(t) = u₀ for all t (0 < u₀ ≤ 1), would serve just as well, resulting in an infinite Z(t); this is the type of control chosen by Arrow and Kurz. Halkin himself suggests the following as an optimal control:

u* = 0 for t ∈ [0, 1]  and  u* = 1 for t ∈ (1, ∞)

The fact that Arrow and Kurz chose u(t) = u₀, where u₀ is an interior point of the control region [0, 1], makes it possible to use the derivative condition ∂H/∂u = 0.

³Hubert Halkin, "Necessary Conditions for Optimal Control Problems with Infinite Horizons," Econometrica, March 1974, pp. 267-272, especially p. 271. A slightly different version of the Halkin counterexample is cited in Kenneth J. Arrow and Mordecai Kurz, Public Investment, the Rate of Return, and Optimal Fiscal Policy, Johns Hopkins Press, Baltimore, MD, 1970, p. 46 (footnote). The Halkin counterexample has also been publicized in Akira Takayama, Mathematical Economics, 2d ed., Cambridge University Press, Cambridge, 1985, p. 625, and in Ngo Van Long and Neil Vousden, "Optimal Control Theorems," Essay 1 in John D. Pitchford and Stephen J. Turnovsky, eds., Applications of Control Theory to Economic Analysis, North-Holland, Amsterdam, 1977.
⁴See Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984, p. 461, for the standard formula for solving such a differential equation.
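The behavior of the solution path (9.10') can be checked numerically. The sketch below is my own illustration, not part of Halkin's argument: it uses a constant control u₀, for which Z(t) = u₀t, and confirms that y(t) approaches 1 for any u₀ ∈ (0, 1], and that the costate value λ = −1 discussed below makes the Hamiltonian H = (1 + λ)(1 − y)u vanish identically, whereas λ = 0 would leave H positive while y < 1.

```python
import math

def y_path(u0, t):
    # definite solution (9.10') under a constant control u0, so that Z(t) = u0 * t
    return 1.0 - math.exp(-u0 * t)

def hamiltonian(y, u, lam):
    # H = (1 + lam)(1 - y)u for the Halkin problem (9.8)
    return (1.0 + lam) * (1.0 - y) * u

# any constant u0 in (0, 1] drives the state toward its upper bound 1
for u0 in (0.25, 0.5, 1.0):
    assert abs(y_path(u0, 100.0) - 1.0) < 1e-9

# lam = -1 makes H vanish identically, consistent with H = 0 for all t;
# lam = 0 (the inapplicable condition (9.4)) would leave H > 0 while y < 1
y2 = y_path(1.0, 2.0)
assert hamiltonian(y2, 1.0, -1.0) == 0.0
assert hamiltonian(y2, 1.0, 0.0) > 0.0
```

The check makes concrete the non-uniqueness of the optimal control: every constant u₀ yields the same limiting objective value of 1.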

Since the Hamiltonian for this problem is

(9.11)  H = (1 − y)u + λ(1 − y)u = (1 + λ)(1 − y)u

the condition ∂H/∂u = 0 means (1 + λ)(1 − y) = 0, or more simply (1 + λ) = 0, inasmuch as y is always less than one. An interior solution for u thus requires λ*(t) = −1 for all t, and this is indeed the conclusion that emerges under the Arrow-Kurz control. But λ*(t) = −1 contradicts the transversality condition (9.4), and seemingly makes this problem a counterexample.

As pointed out earlier, however, the structure of the Halkin problem is such that it requires the choice of the upper bound of y as the terminal state. In other words, there is an implicit fixed terminal state, even though this is not openly declared in the problem statement. In view of this, the transversality condition (9.4) should not have been expected to apply in the first place. Instead, because the problem is autonomous, we should use the condition H = 0 for all t, as in (9.3). Judged against that condition, λ*(t) = −1 represents a perfectly acceptable conclusion drawn from the correct transversality condition. Hence, Halkin's problem does not constitute a valid counterexample.

Some may be inclined to question this conclusion on the grounds that the requirement on the terminal state is in the nature of an outcome of optimization rather than an implicit restriction on the terminal state. A moment's reflection will reveal, however, that such an objection is valid only for a static optimization problem, where the objective is merely to choose one particular value of y from its permissible range. But in the present context of dynamic optimization, the optimization process calls for the choice of, not a single value of a variable, but an entire path. In such a context, any restriction on the state variable at a single point of time (here, the terminal time) must, as a general rule, be viewed as a restriction on the problem itself. The fact that the Halkin problem is a special case, with the objective functional depending exclusively on the terminal state, in no way disqualifies it as a dynamic optimization problem, nor does it alter the general rule that any requirement placed on the terminal state should be viewed as a restriction on the problem.

More specifically, noting that the Hamiltonian (9.11) is linear in u, we can approach the Halkin problem as follows. With a linear Hamiltonian, we only need to consider the boundary values of the control region as candidates for u*(t): at each t, either u = 0 or u = 1. The consequence of choosing u = 0 at any point of time is to make ẏ = 0 (by the equation of motion) at that point of time, thereby preventing y from growing. The choice of u = 0 for some limited initial period — a feature of the Halkin control — does no harm, but since the terminal state counts in the present problem, we must avoid choosing u(t) = 0 for all t. Suppose we choose the control u*(t) = 1 for all t. Then we have the equations of motion

ẏ = ∂H/∂λ = (1 − y)u = 1 − y
λ̇ = −∂H/∂y = (1 + λ)u = 1 + λ

whose general solutions are, respectively,

y(t) = Be^{−t} + 1   (B arbitrary)
λ(t) = Ae^{t} − 1   (A arbitrary)

By the initial condition y(0) = 0, we can determine that B = −1, so that the definite solution of the state variable is

y(t) = 1 − e^{−t},  with lim_{t→∞} y(t) = 1

This latter fact shows that the control we have chosen, u*(t) = 1, can maximize the objective functional just as those of Halkin and of Arrow and Kurz do.

What about the arbitrary constant A? To definitize A, we call upon the transversality condition H = 0 for all t. With u = 1 and y < 1 for all t, the only way to satisfy H = (1 + λ)(1 − y)u = 0 is to have λ = −1 for all t; that is, A = 0. Even though our optimal control differs from that of Halkin and that of Arrow and Kurz, the conclusion that λ*(t) = −1 is the same. Yet we do not view this as a violation of an expected transversality condition; rather, it is the direct implication of the correct transversality condition for this problem.

Other Counterexamples

In another alleged counterexample, Karl Shell⁶ shows that the transversality condition (9.4) does not apply when the problem is to maximize an integral of per-capita consumption c(t) over an infinite period of time. But we can again show that the problem has an implicit fixed terminal state, so that (9.4) should not have been expected to apply in the first place.

⁶Karl Shell, "Applications of Pontryagin's Maximum Principle to Economics," in H. W. Kuhn and G. P. Szegö, eds., Mathematical Systems Theory and Economics, Springer-Verlag, New York, 1969, pp. 241-292, especially pp. 273-275.

The Shell problem is based on the neoclassical production function Y = Y(K, L). Let n denote the rate of growth of labor, and let δ denote the rate of depreciation of capital. On the assumption of linear homogeneity, the production function can be rewritten in per-capita (per-worker) terms as y = φ(k), where y = Y/L and k = K/L, with φ'(k) > 0 and φ''(k) < 0. The rate of growth of the capital-labor ratio k is then governed by

(9.12)  k̇ = φ(k) − c − (n + δ)k

which is similar to the fundamental differential equation of the Solow growth model.⁷ In a steady state, k̇ = 0. By setting k̇ = 0 in (9.12) and transposing, we obtain the following expression for the steady-state per-capita consumption:

(9.13)  c = φ(k) − (n + δ)k   [steady-state value of c]

Note that the steady-state value of c is a function of the steady-state value of k. The highest possible steady-state c is attained when

(9.14)  dc/dk = φ'(k) − (n + δ) = 0

The condition (9.14) is known as the golden rule of capital accumulation.⁸ The φ'(k) function being monotonic, there is only one value of k that will satisfy (9.14). Let that particular value — the golden-rule value of k — be denoted by k̂, and let the corresponding golden-rule value of c be denoted by ĉ:

(9.15)  ĉ = φ(k̂) − (n + δ)k̂

This value of c represents the maximum sustainable per-capita consumption stream.

Now consider the problem of maximizing ∫₀^∞ c(t) dt. Since this integral obviously does not converge for c(t) ≥ 0, Shell adopts the Ramsey device of rewriting the integrand as the deviation from "Bliss" — with "Bliss" identified in this context with ĉ. The problem is then to

(9.16)  Maximize ∫₀^∞ (c − ĉ) dt
        subject to k̇ = φ(k) − c − (n + δ)k
        and k(0) = k₀, 0 ≤ c(t) ≤ φ[k(t)]

The control variable, c, is confined at every point of time to the control region [0, φ(k)], which is just another way of saying that the marginal propensity to consume is restricted to the interval [0, 1]. The state variable, k, has a given initial value k₀, but no terminal value is explicitly mentioned.

Shell shows that the optimal path of the costate variable has the limit value 1 as t → ∞. The fact that λ(t) optimally tends to a unit value instead of zero leads Shell to consider this a counterexample against the transversality condition (9.4). From our perspective, however, the Shell problem in fact also contains an implicit fixed terminal state. To see this, recall that the very reason for rewriting the integrand as (c − ĉ) in (9.16) is to ensure convergence.⁹ Since the integrand expression (c − ĉ) maintains the same sign for all t and thus offers no possibility of cancellation between positive and negative terms associated with different points of time, the only way for the integral to converge is to have (c − ĉ) → 0 as t → ∞. In other words, c(t) must tend to ĉ and, by (9.12), k(t) must tend to k̂. What this does is to fix the terminal state, if only implicitly, in the form of lim_{t→∞} k(t) = k̂. The important point is that this terminal state does not come about as the result of optimization; rather, it is a condition dictated by the convergence requirement of the objective functional — thereby disqualifying (9.4) as a transversality condition for this problem.

We can easily check the correct transversality condition, H = 0. For this problem, the Hamiltonian is

(9.17)  H = (c − ĉ) + λ[φ(k) − c − (n + δ)k]

Since the problem is autonomous, H should be zero for all t, so we can actually solve for λ by setting (9.17) equal to zero:

(9.18)  λ = (ĉ − c)/[φ(k) − c − (n + δ)k]

As t becomes infinite, c and k tend to ĉ and k̂, respectively, so that both the numerator and the denominator in (9.18) approach zero. To take the limit of this expression, we can resort to L'Hôpital's rule: differentiating the numerator and the denominator with respect to t yields

lim_{t→∞} λ(t) = lim_{t→∞} (−ċ)/{[φ'(k) − (n + δ)]k̇ − ċ} = lim_{t→∞} (−ċ)/(−ċ) = 1

where the second equality follows because the term [φ'(k) − (n + δ)]k̇ vanishes as k → k̂, by (9.14). We can thus verify Shell's result that λ(t) → 1 as t → ∞. This result by no means contradicts any applicable transversality condition.

⁷For details of the derivation, see the ensuing discussion leading to (9.25).
⁸See E. S. Phelps, "The Golden Rule of Capital Accumulation: A Fable for Growthmen," American Economic Review, September 1961, pp. 638-643. The rule is christened the "golden rule" because it is predicated upon the premise that each generation is willing to abide by the golden rule of conduct ("Do unto others as you would have others do unto you") and adopt a uniform savings ratio to be adhered to by all generations.
⁹See the discussion of Condition II in Sec. 5.1.
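As a concrete illustration (my own, not Shell's), take the Cobb-Douglas form φ(k) = k^α, which satisfies the stated curvature conditions. The golden-rule condition (9.14), φ'(k̂) = n + δ, then solves explicitly to k̂ = (α/(n + δ))^{1/(1−α)}; the sketch below verifies the first-order condition and the maximality of ĉ for hypothetical parameter values.

```python
# Golden-rule capital-labor ratio for phi(k) = k**alpha
# (alpha, n, delta are hypothetical illustrative values)
alpha, n, delta = 0.3, 0.02, 0.05

k_gold = (alpha / (n + delta)) ** (1.0 / (1.0 - alpha))  # solves phi'(k) = n + delta
c_gold = k_gold ** alpha - (n + delta) * k_gold          # steady-state c, as in (9.13)

# the first-order condition (9.14) holds at k_gold ...
assert abs(alpha * k_gold ** (alpha - 1.0) - (n + delta)) < 1e-12
# ... and steady-state consumption is strictly lower at any other k
for k in (0.5 * k_gold, 0.9 * k_gold, 1.1 * k_gold, 2.0 * k_gold):
    assert k ** alpha - (n + delta) * k < c_gold
```

The strict concavity of φ guarantees that the golden-rule k is the unique maximizer of steady-state consumption, matching the uniqueness claim in the text.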

" Management Science. however. While Pitchford offers no reason for this phenomenon. Pitchford. . "Duality Theory for Dynamic Optimization Models of Economics: The Continuous Time Case.?]e-p' as t -t m. not wanting to rule out the possible equilibria (0. 1-19. 127-154.pp. aPtd Nd- kk "M. Applications of Control Theory to Economic A&. 1973. and indeed follows from.20 I.especially pp. The relevant work on the continuous-time case comes later in a paper by Benveniste and Scheinkman.0). then convergence will merely call for the vanishing of [c(t) . and (I?. the transversality condition implied by (9. the terminal state for the problem is in effect fixed.5) on the other. Stephen J. it takes a rather lengthy excursion before he is - " ~ o h n D. and I.4) or (9. K. p.13 What our foregoing discussion can offer is a simple intuitive explanation of the link between the absence (presence) of time discounting on the one hand and the failure (applicability) of the transversality condition (9. . he reports a finding of Weitzman12 that. say. defined by the relations 4. "L. The case where the objective functional does contain a discount factor is illustrated by the neoclassical optimal growth model which we shall discuss in Sec.(t) = 0. it is perfectly consistent with. .(Kl) + 42(K2) . except that there are here two control variables. footnote) that all the known counterexamples apparently share one common feature: No time discounting is present in the objective functional. of course.). I The Role of Time Discounting In a noteworthy observation." Journal of Economic Theory.14).pp. the correct transversality condition. H = 0 for all t. and K. n& Holland." Suppressing population growth. 0) a priori.6K." Essay 6 in John D. Weitzman. Pitchford also assumes implicitly that the terminal states are constrained to be nonnegative. However. Moreover. M. Since the terminal value of each of the costate variables is found to be one. . 
the problem with time discounting would indeed be one with a free terminal state.c^.. = I. > 0 and I?. (9. is not satisfied (Pitchford. so that the integral is. the transversality condition involving the costate variables indeed becomes necessary when there is time discounting and the objective functional converges.. Consider the case of maximizing j t [ c ( t ) . 128. if the objective functional had beenArequiredto converge. with (9. whether or not explicitly acknowledged in the problem statement. Since the only way for this to occur is for c(t) to approach 8.) = 4. to which a different type of transversality condition should apply.5) for nonnegative terminal capital values. But if time discounting is admitted. Pitchford points out (op.19) K2 = I 2 . 783-789. Ai(t)K. Pitchford decides to proceed on the assumption that the integral is not required to converge-which. And the model is not a genuine counterexample.. 1977.4) or (9. he considers the problem of choosing I.c^] tends to some finite number. A.5) as its expected transversality condition. 143). Because of this assumption. Rather. This. as well as certain inequality constraints (a subject to be discussed in the next chapter). I?. The solution to this problem involyes the terpinal states I?. The Shell and Pitchford models omit time discounting.E]e-""t. which can be achieved as long as [c(t) .. makes one wonder why he would then still employ the Ramsey device of expressing the integrand as c(t) .(O) > 0 I. op.pp."[c(t) . the integrand [c(t) .c^] dt ) led back to the conclusion that the objective functional has to converge after all.c^] must tend to zero as t -t m. lim.rO The symbol 6 again denotes the maximum sustainable level of consumption. 134-143. (0. and Ki must tend to Ki. I. .'(K2) = S [cf. L. a device explicitly purported to ensure convergence. For then c(t9 must tend to 6. would not have come as a surprise.6~~ c and = 4. n = 01. cit. cit. for discrete-time problems. 
The upshot is then the same: The terminal states are implicitly fixed. Turnovsky. thereby making the problem one with fixed terminal states. "Duality Theory for Infinite Horizon Convex Models. and given that no possibility exists for cancellation between positive and negative terms. If this functional is required to converge. the rate of investment in each of two regions. here.Benveniste and J. and nor is K(t) obliged to approach a unique value K. ]. eds. 1982. not necessarily zero.I 2 2 0 Kl(0) > 0 K. K. and I.. (9. namely. Pitchford conjectures that a similar result would hold for the continuous-time case.'(K. p. and thus do not fall into this category.c^]dt.250 PART 3 OPTIMAL CONTROL THEORY CHAPTER 9 INFINITE-HORIZON PROBLEMS 251 means contradicts any applicable transversality condition. Inasmuch as c(t) is no longer obliged to approach a unique value c^ for convergence.3. "Two State Variable Problems. Amsterdam.I. so as to Maximize subject to k m [ c ( t . Another intended counterexample is found in the regional investment model of John Pitchford. The model is essentially similar to the Shell counterexample. Scheinkman.. 9. and two state variables. > 0..
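The convergence argument just made can be illustrated numerically. In the sketch below (parameter values are my own, purely illustrative), a constant consumption gap c(t) − ĉ makes the undiscounted integral grow without bound as the horizon lengthens, while the discounted integral approaches the finite value (c − ĉ)/ρ — so convergence no longer forces the gap itself to vanish.

```python
import math

rho = 0.05   # hypothetical discount rate
gap = 1.0    # constant consumption gap c(t) - c_hat (hypothetical)

def integral(discount, T, steps=200000):
    # left-Riemann approximation of the integral of gap * e^{-discount*t} over [0, T]
    dt = T / steps
    return sum(gap * math.exp(-discount * i * dt) * dt for i in range(steps))

undiscounted = [integral(0.0, T) for T in (10, 400)]
discounted = [integral(rho, T) for T in (10, 400)]

# without discounting, the integral grows without bound as T increases ...
assert undiscounted[1] > 10 * undiscounted[0]
# ... with discounting, it approaches the finite limit gap/rho = 20
assert abs(discounted[1] - gap / rho) < 0.01
```

The same comparison underlies the Weitzman and Benveniste-Scheinkman results: a positive discount rate is what allows the objective functional to converge with a genuinely free terminal state.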

The Transversality Condition as Part of a Sufficient Condition

Although the transversality condition lim_{t→∞} λ(t) = 0 is not universally accepted as a necessary condition for infinite-horizon problems, a limit condition involving λ does enter as an integral part of a concavity sufficient condition for such problems. In an amalgamated form — combining the concavity provisions of Mangasarian as well as those of Arrow — the sufficiency theorem states that the conditions in the maximum principle are sufficient for the global maximization of V in the infinite-horizon problem

Maximize V = ∫₀^∞ F(t, y, u) dt
subject to ẏ = f(t, y, u)
and y(0) = y₀   (y₀ given)

provided that

(9.20)  either H = F(t, y, u) + λf(t, y, u) is concave in (y, u), jointly, for all t
        or H⁰ = F(t, y, u*) + λf(t, y, u*) is concave in y for all t

and

(9.21)  lim_{t→∞} λ(t)[y(t) − y*(t)] ≥ 0

where y*(t) denotes the optimal state path and y(t) is any other admissible state path. The terminal state can be either fixed or free. Note that in (9.20) we have merged Mangasarian's concavity conditions on the F and f functions and his nonnegativity condition on λ(t) into a single concavity condition on the Hamiltonian H — concavity in (y, u) jointly. Arrow's condition is weaker: it requires only that the maximized Hamiltonian H⁰ be concave in the y variable alone.

The limit expression in (9.21) is the infinite-horizon counterpart of the expression λ*(T)(y_T − y_T*), which we have encountered earlier in the proof of the Mangasarian theorem [see the footnote related to (8.31')]. In that proof, the expression just cited is shown to vanish, so that the result V ≤ V* emerges, establishing V* to be a global maximum. It is along the same line of reasoning that condition (9.21) restricts the limit of λ(t)[y(t) − y*(t)] to be either positive or zero as t → ∞. The reader may also find it of interest to compare (9.21) with (5.44), the corresponding condition in the calculus-of-variations context. At first glance, (5.44) may strike one as entirely different from (9.21), because it shows the "≤" rather than the "≥" inequality. But recall that λ = −F_{y'}; since multiplying an inequality by a negative number reverses the sense of the inequality, the two conditions are in fact exactly the same.

9.3 THE NEOCLASSICAL THEORY OF OPTIMAL GROWTH

The Ramsey model of saving behavior, which deals with the important issue of intertemporal resource allocation, has exerted a strong influence on economic thinking, especially at the macro level, although this influence did not come into being until after World War II, when that model was "rediscovered" by growth theorists long after its publication. In a more recent development, the same basic issue is formulated as a problem of optimal control rather than of the calculus of variations. The new treatment — labeled "the neoclassical theory of optimal growth" — extends the Ramsey model in two major respects: (1) the labor force (identified with the population, as we shall draw no distinction between the population and the labor force) is assumed to be growing at an exogenous constant rate n > 0 (the Ramsey model has n = 0); and (2) the social utility is assumed to be subject to time discounting at a constant rate ρ > 0 (the Ramsey model has ρ = 0). Our discussion of this subject will be based primarily on a classic paper by David Cass.¹⁴

This theory is labeled a "neoclassical" theory because its analytical framework revolves around the neoclassical production function Y = Y(K, L), assumed to be characterized by constant returns to scale, positive marginal products, and diminishing returns to each input.¹⁵

¹⁴David Cass, "Optimum Growth in an Aggregative Model of Capital Accumulation," Review of Economic Studies, July 1965, pp. 233-240.
¹⁵In discussing the Ramsey model, we denoted output by Q. Here we use instead the notation Y, which is another commonly used symbol for income and output.

The Model

The production function, being linearly homogeneous, can be rewritten in per-capita (per-worker) terms. Letting the lowercases of Y and K be defined, respectively, as y = Y/L (average product of labor) and k = K/L (capital-labor ratio), we can express the production function by

(9.22)  y = φ(k)   with φ'(k) > 0 and φ''(k) < 0 for all k > 0

Additionally, it is stipulated that

lim_{k→0} φ'(k) = ∞  and  lim_{k→∞} φ'(k) = 0

The graph of φ(k) — the APP_L curve plotted against the capital-labor ratio — has the general shape illustrated in Fig. 9.1.

The level of per-capita consumption, c = C/L, is what determines the utility or welfare of society at any time. The social utility index function, U(c), is assumed to possess the following properties:

(9.23)  U'(c) > 0 and U''(c) < 0 for all c > 0;  lim_{c→0} U'(c) = ∞ and lim_{c→∞} U'(c) = 0

U(c) is, of course, not cardinal utility, but a utility index.

The total output Y is allocated either to consumption C or to gross investment I. Net investment, K̇, can thus be expressed as

K̇ = I − δK = Y − C − δK   [δ = depreciation rate]

The right-hand side contains magnitudes readily converted to per-capita terms, but the left-hand side does not. To unify the two sides, we make use of the relation K = kL:

K̇ = d(kL)/dt = k(dL/dt) + L(dk/dt) = (k̇ + nk)L   [product rule]

since the population (labor force) grows at the rate n. Dividing through by L, we finally obtain an equation involving per-capita variables only:

(9.25)  k̇ = φ(k) − c − (n + δ)k

This equation, describing how the capital-labor ratio varies over time, is the fundamental differential equation of neoclassical growth theory which we have already encountered in (9.12).

Cass decides that the social utility attained at any point of time should be weighted by the population size at that time before being summed over time in the dynamic optimization problem. With L(t) = L₀e^{nt} — and letting L₀ = 1 by choice of unit — and a discount rate ρ, the objective functional takes the form

(9.26)  ∫₀^∞ U(c)Le^{−ρt} dt = ∫₀^∞ U(c)e^{−(ρ−n)t} dt

To ensure convergence, Cass stipulates that ρ − n > 0. It might be pointed out, however, that weighting social utility by population size and simultaneously requiring the discount rate ρ to exceed the rate of population growth n is mathematically no different from the alternative of not using population weights at all, but stipulating a single positive discount rate r = ρ − n. If we adopt the simpler alternative, then the functional will reduce to the simple form

(9.26')  ∫₀^∞ U(c)e^{−rt} dt   (r ≡ ρ − n > 0)

For this reason, we shall proceed on the basis of the simpler alternative (9.26').

The problem of optimal growth is thus simply to

Maximize ∫₀^∞ U(c)e^{−rt} dt
subject to k̇ = φ(k) − c − (n + δ)k
and k(0) = k₀, 0 ≤ c(t) ≤ φ[k(t)]

This resembles the Shell problem (9.16), but here the integrand function is not (c − ĉ) (the deviation of c from the golden-rule level), but a discounted utility index of per-capita consumption.

The Maximum Principle

The Hamiltonian for this problem is

H = U(c)e^{−rt} + λ[φ(k) − c − (n + δ)k]

There is only one state variable, k, and only one control variable, c. We take λ to be positive at any point of time, since it represents the shadow price (measured in utility) of capital at that time. Since H is nonlinear in c, we find the maximum of H by setting ∂H/∂c = U'(c)e^{−rt} − λ = 0; we obtain the condition

U'(c) = λe^{rt}

which states that, optimally, the marginal utility of per-capita consumption should be equal to the shadow price of capital amplified by the exponential term e^{rt}. Since ∂²H/∂c² = U''(c)e^{−rt} is negative by (9.23), H is indeed maximized. The equation of motion for the state variable, k̇ = ∂H/∂λ = φ(k) − c − (n + δ)k, simply restates the constraint (9.25); and the equation of motion for the costate variable is λ̇ = −∂H/∂k = −λ[φ'(k) − (n + δ)].

The ensuing discussion will, however, be based on the current-value maximum-principle conditions (9.33) through (9.35), introduced next.
we obtain the condition = -m[4'(k) . = 4 ( k ) ( n + 6 ) k . as we did in Sec. the negative slope of the broken straight line. Without knowledge of the specific forms of the U ( c ) and $ ( k ) functions. H is indeed maximized.27). 5. is seen to contain a hump. it follows that c. A. c. the maximum of H corresponds to a value of c in the interior of the control region [O.35).4(k)].25). the first additive component of H would plot against c as the illustrative broken curve in Fig. the marginal utility of per-capita consumption should be equal to the shadow price of capital amplified by the exponential The ensuing discussion will be based on the current-value maximumprinciple conditions (9.25). This makes possible a qualitative analysis by a phase diagram.33) through (9.2 function of per-capita consumption. The broken Constructing the Phase Diagram Since the two differential equations (9. hence. optimally. and k . since it represents the shadow price (measured in utility) of capital at that time. . We take A to be positive at any point of time. But straight line has vertical intercept A[b(k) . with its peak occurring at a c value between c = 0 and c = c. There is only one state variable.. The equation of motion for the state variable k can be read directly from the second line of (9.c . Since c. c.. the maximum principle requires that dHc/ac or is nonlinear in c. Hence. More specifically. < 4 ( k ) . k = = dH/dA. One of these. we can only undertake a qualitative analysis of the model. H . so that c. entails for the present model the differential equation i (9. can accordingly We find the maximum of H by setting = U1(c) m - = 0. Since there is no explicit t argument in these. . and only one control variable. 16 The shape of the broken curve is based on the specification for U(c) in (9.31) should in principle enable us to solve for the three variables c. 9.4. Since a2H/ac2 = UV(c)e-" is negative by (9.( n + 6)k] and slope -A. 
it would be necessary first to eliminate the c variable via (9.33). As it turns out, however, the task of eliminating c is more complicated than that of eliminating m. We shall, therefore, depart from Cass's procedure and seek instead to eliminate the m variable. In so doing, we shall create a differential equation in the variable c as a by-product, and the analysis can then be carried out with a phase diagram in the kc space.

We begin by differentiating (9.33), m = U'(c), with respect to t, to obtain an expression for dm/dt:

    dm/dt = U''(c) dc/dt

This, together with the equation m = U'(c), enables us to rid (9.35) of the m and dm/dt expressions. The result, after rearrangement, is

(9.36)    dc/dt = −[U'(c)/U''(c)][φ'(k) − (n + δ + r)]

which is a differential equation in the variable c. Note that, since (9.33) contains a function of c (to wit, U'(c)) rather than the plain c itself, the term −U'(c)/U''(c) appears in the result. Together with the equation of motion for k in (9.25), we now can work with the differential-equation system

    dk/dt = φ(k) − c − (n + δ)k
    dc/dt = −[U'(c)/U''(c)][φ'(k) − (n + δ + r)]
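The elimination step can be checked with a concrete utility function. The following sketch is purely illustrative and not part of the text: the logarithmic utility U(c) = ln c, the Cobb-Douglas form φ(k) = k^α, and all parameter values are assumptions chosen for the check.

```python
# Illustrative check of the elimination of m (assumed forms, not from the text):
# with U(c) = ln(c), U'(c) = 1/c and U''(c) = -1/c**2, so -U'(c)/U''(c) = c.
alpha, n, delta, r = 0.3, 0.01, 0.05, 0.03
k, c = 2.0, 0.8                         # an arbitrary point in the kc space

phi_prime = alpha * k ** (alpha - 1.0)  # phi'(k) for phi(k) = k**alpha
bracket = phi_prime - (n + delta + r)

m = 1.0 / c                             # m = U'(c), from (9.33)
mdot = -m * bracket                     # equation of motion for m, (9.35)

# dm/dt = U''(c)*dc/dt  =>  dc/dt = mdot / U''(c)
cdot_implied = mdot / (-1.0 / c ** 2)
# (9.36) directly, with -U'(c)/U''(c) = c:
cdot_formula = c * bracket

assert abs(cdot_implied - cdot_formula) < 1e-12
```

Both routes give the same dc/dt, confirming that (9.36) is simply (9.35) rewritten in terms of c.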

To construct the phase diagram, we first draw the k̇ = 0 curve and the ċ = 0 curve, obtained by setting dk/dt = 0 and dc/dt = 0 in the system above:

(9.37)    c = φ(k) − (n + δ)k    [equation for k̇ = 0 curve]
(9.38)    φ'(k) = n + δ + r      [equation for ċ = 0 curve]

As to the ċ = 0 curve, the −U'(c)/U''(c) term in (9.36) is always positive, by (9.22), and can never be zero. The only way for ċ to vanish is to let the bracketed expression in (9.36) be zero, which is what (9.38) states. Now (9.38) requires that the slope of the φ(k) curve assume the specific value n + δ + r. The φ(k) curve being monotonic, this requirement can be satisfied only at a single point on that curve, which corresponds to a unique k value. Thus the ċ = 0 curve must plot as a vertical straight line in Fig. 9.3b.

The intersection of the two curves in Fig. 9.3b, point E, determines the steady-state values of k and c. Denoted by k̄ and c̄, respectively, these values are referred to in the literature as the modified-golden-rule values of the capital-labor ratio and per-capita consumption, as distinct from the golden-rule values k̂ and ĉ discussed earlier in connection with the Shell counterexample. They are defined by the two equations

    φ'(k̄) = n + δ + r    (Modified golden rule)
    φ'(k̂) = n + δ        (Golden rule)
As to the k̇ = 0 curve, (9.37) expresses c as the difference between two functions of k: φ(k) and (n + δ)k. The graph of φ(k), already encountered earlier in Fig. 9.1, is reproduced in Fig. 9.3a. The (n + δ)k term simply gives us an upward-sloping straight line. Plotting the difference between these two curves then yields the desired k̇ = 0 curve in Fig. 9.3b, which shows up in the kc space as a concave curve.

As (9.38) indicates, k̄ is located where the slope of the φ(k) curve assumes the value n + δ + r; it corresponds to point B on the φ(k) curve in Fig. 9.3a. The golden-rule value k̂, defined by φ'(k̂) = n + δ, is located where the tangent to that curve is parallel to the (n + δ)k line; it corresponds to point A. Since point B involves a larger slope of φ(k), it must be located to the left of point A. In other words, the modified-golden-rule value of k (based on time discounting at the rate r) must be less than the golden-rule value of k (with no time discounting). By the same token, the modified-golden-rule value of c, the height of point E in Fig. 9.3b, must be less than the golden-rule value, the height of point D.
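With specific functional forms, the modified-golden-rule and golden-rule magnitudes can be computed directly. The sketch below is an illustration only, assuming φ(k) = k^α and arbitrary parameter values; none of it comes from the text.

```python
alpha, n, delta, r = 0.3, 0.01, 0.05, 0.03

def k_at_slope(s):
    # Solve phi'(k) = alpha*k**(alpha-1) = s for k, with phi(k) = k**alpha.
    return (alpha / s) ** (1.0 / (1.0 - alpha))

kbar = k_at_slope(n + delta + r)   # modified golden rule: phi'(kbar) = n+delta+r
khat = k_at_slope(n + delta)       # golden rule:          phi'(khat) = n+delta

def c_on_kdot0(k):
    # Height of the kdot = 0 curve: c = phi(k) - (n+delta)*k.
    return k ** alpha - (n + delta) * k

cbar = c_on_kdot0(kbar)   # steady-state consumption (height of point E)
chat = c_on_kdot0(khat)   # golden-rule consumption (height of point D)

# Discounting (r > 0) places the steady state to the left of, and below,
# the golden-rule point:
assert kbar < khat and cbar < chat
```

The assertions mirror the geometric argument: point B lies to the left of point A, so E lies below D.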

Analyzing the Phase Diagram

To prepare for the analysis of the phase diagram, we partially differentiate the two differential equations in (9.25) and (9.36), to find that

(9.39)    ∂k̇/∂c = −1 < 0
(9.40)    ∂ċ/∂k = −[U'(c)/U''(c)]φ''(k) < 0    [by (9.22)]

According to (9.39), k̇ should follow the (+, 0, −) sign sequence as c increases (going northward); hence, the k-arrowheads must point eastward below the k̇ = 0 curve, and westward above it. Similarly, (9.40) indicates that ċ should follow the (+, 0, −) sign sequence as k increases (going eastward); the c-arrowheads should point upward to the left of the ċ = 0 curve, and downward to the right of it. In Fig. 9.4, we have, accordingly, added vertical sketching bars to the k̇ = 0 curve, and horizontal ones to the ċ = 0 curve. These sketching bars are useful in guiding the drawing of the streamlines, for they remind us that the streamlines must cross the k̇ = 0 curve with infinite slope, and cross the ċ = 0 curve with zero slope. The streamlines drawn in accordance with such arrowheads yield a saddle-point equilibrium at point E, where k̄ and c̄ denote the intertemporal equilibrium values of k and c. Given the configurations of the k̇ = 0 and ċ = 0 curves in this problem, the steady state is unique.

At E, the simultaneous constancy of k and c means that the variables Y, K, and L all grow at the same rate. Since k = K/L and y = Y/L, the constancy of k̄ implies that y = φ(k̄) is constant, too. The fact that Y and K share a common growth rate is especially important as a sine qua non of a steady state or growth equilibrium. It may be noted that even at E, the per-capita consumption becomes constant, and its level cannot be raised further over time. This is because a static production function, assumed in the model, permits no upward shift of φ(k) over time. To make possible a rising per-capita consumption, technological progress must be introduced. This will be discussed in the next section.

Transversality Conditions

To select a stable branch from the family of streamlines is tantamount to choosing a particular solution from a family of general solutions by definitizing an arbitrary constant. This is to be done with the help of some boundary conditions. Given an initial capital-labor ratio k₀, choosing an initial per-capita consumption level c₀ such that the ordered pair (k₀, c₀) sits on the stable branch is one way of doing it. An alternative is to look at an appropriate transversality condition. Since we are dealing with general functions and not working with quantitative solutions of differential equations,
it is difficult to illustrate the use of a transversality condition to definitize an arbitrary constant. Nevertheless, we can, in light of the phase-diagram analysis, verify that the steady-state solution indeed satisfies the expected transversality conditions.

The steady state at E, with a sustainable constant level of per-capita consumption, is the only meaningful long-run target for the economy depicted in the present model. This means that, given an initial capital-labor ratio k₀, the economy must choose an initial per-capita consumption level c₀ such that the ordered pair (k₀, c₀), not shown in the graph, lies on a stable branch. Otherwise, the dynamic forces of the model will lead us into a situation of either (1) ever-increasing k accompanied by ever-decreasing c (along the streamlines that point toward the southeast), or (2) ever-increasing c accompanied by ever-decreasing k (along the streamlines that point toward the northwest). Situation (1) implies progressively severe belt-tightening, whereas situation (2) implies growing overindulgence leading to eventual capital exhaustion and, thus, culminating in eventual starvation. Since neither of these is a viable long-run alternative, only the stable branches can serve the economy's purpose.

One transversality condition we may expect the steady-state solution to satisfy is that

(9.41)    λ → 0    as t → ∞

This is because the objective functional contains a discount factor and the terminal state is free.

FIGURE 9.4  Streamlines of the system, with stable branches leading to the saddle point E.
Note that the only way the economy can ever move toward the steady state is to get onto one of the stable branches (the "yellow brick road") leading to point E. To verify (9.41), observe that the solution path for λ in the Cass model is

    λ* = m*e^(-rt) = U'(c*)e^(-rt)    [by (9.33)]

Since c* tends to the constant c̄ and does not tend to zero as t → ∞ (by (9.22), to have U'(c) → ∞ we must have c → 0), the limit of U'(c*) is finite as t → ∞. Hence λ* = U'(c*)e^(-rt) does tend to zero, and the transversality condition (9.41) is satisfied.

Another condition we may expect the solution to satisfy is that H vanish as t becomes infinite:

(9.42)    H → 0    as t → ∞

In the present problem, the solution path for H takes the form

    H* = U(c*)e^(-rt) + λ*[φ(k*) − c* − (n + δ)k*]

Since U(c*) is finite as t → ∞, the exponential term U(c*)e^(-rt) tends to zero as t → ∞. And we already know from (9.41) that λ* tends to zero, while the bracketed expression remains finite. Therefore, the transversality condition (9.42) is also satisfied.

Checking the Saddle Point by Characteristic Roots

In Fig. 9.4, the directional arrowheads are determined from the signs of the partial derivatives ∂k̇/∂c and ∂ċ/∂k in (9.39) and (9.40), and the pattern of the streamlines drawn leads us to conclude that the equilibrium at E is a saddle point. Since the sketching of the streamlines is done on a qualitative basis with considerable latitude in the positioning of the curves, however, it may be desirable to check the validity of the conclusion by another means. We can do this by examining the characteristic roots of the linearization of the (nonlinear) differential-equation system of the model.¹⁹

On the basis of the two-equation system (9.25) and (9.36), we first form the Jacobian matrix and evaluate it at the steady-state point E. The four partial derivatives, when evaluated at E (where φ'(k̄) = n + δ + r), turn out to be

    ∂k̇/∂k = φ'(k̄) − (n + δ) = r        ∂k̇/∂c = −1
    ∂ċ/∂k = −[U'(c̄)/U''(c̄)]φ''(k̄)      ∂ċ/∂c = 0

The last of these is zero because the bracketed expression in (9.36), when evaluated at E, is equal to zero by definition of the steady state. It follows that the Jacobian matrix takes the form

    J_E = | r                          −1 |
          | −[U'(c̄)/U''(c̄)]φ''(k̄)      0 |

¹⁹ For more details of the procedure of linearization of a nonlinear system, see Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984.
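The sign pattern of the characteristic roots can be verified numerically for specific functional forms. The sketch below is illustrative only: it assumes φ(k) = k^α and U(c) = ln c (so that −U'(c)/U''(c) = c), with arbitrary parameter values.

```python
import math

alpha, n, delta, r = 0.3, 0.01, 0.05, 0.03

kbar = (alpha / (n + delta + r)) ** (1.0 / (1.0 - alpha))  # phi'(kbar) = n+delta+r
cbar = kbar ** alpha - (n + delta) * kbar                   # kdot = 0 at E

# Jacobian of the (kdot, cdot) system at E, as in the text:
#   dkdot/dk = r, dkdot/dc = -1, dcdot/dk = cbar*phi''(kbar) < 0, dcdot/dc = 0.
phi2 = alpha * (alpha - 1.0) * kbar ** (alpha - 2.0)   # phi''(kbar) < 0
J = [[r, -1.0],
     [cbar * phi2, 0.0]]

tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)    # real, since det < 0
root1 = (tr + disc) / 2.0
root2 = (tr - disc) / 2.0

# det < 0: the characteristic roots have opposite signs, so E is a saddle point.
assert det < 0 and root1 > 0 > root2
```

The determinant, being the product of the two roots, is negative, which is exactly the analytic condition derived next.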
The qualitative information we need about the characteristic roots r₁ and r₂ to confirm the saddle point is conveyed by the determinant of J_E:

    r₁r₂ = |J_E| = r(0) − (−1){−[U'(c̄)/U''(c̄)]φ''(k̄)} = −[U'(c̄)/U''(c̄)]φ''(k̄) < 0

This implies that the two roots have opposite signs, which establishes the steady state to be locally a saddle point, confirming the conclusion of the phase-diagram analysis.

EXERCISE 9.3

1  Let the production function and the utility function be given specific forms.
   (a) Find the y = φ(k) function.
   (b) Write the specific optimal control problem.
   (c) Apply the maximum principle, using the current-value Hamiltonian.
   (d) Derive the differential-equation system in the variables k and c.
   (e) Solve for the steady-state values (k̄, c̄).
2  In the phase-diagram analysis of the Cass model, the directional arrowheads in Fig. 9.4 are determined from the signs of the partial derivatives ∂k̇/∂c and ∂ċ/∂k.
   (a) Is it possible and/or desirable to accomplish the same purpose by using ∂k̇/∂k in lieu of ∂k̇/∂c? Why?
   (b) Is it possible and/or desirable to consider the sign of ∂ċ/∂c in place of ∂ċ/∂k?

3  In Fig. 9.4, suppose that the economy is currently located on the lower stable branch and is moving toward point E. Let there be a parameter change such that the ċ = 0 curve is shifted to the right, but the k̇ = 0 curve is unaffected.
   (a) What specific parameter changes could cause such a shift?
   (b) What would happen to the position of the steady state?
   (c) Can we consider the preshift position of the economy, call it (k₀, c₀), as the appropriate initial point on the new stable branch in the postshift phase diagram? Why?
4  Check whether the Cass model (9.27) satisfies the Mangasarian sufficient conditions.

9.4  EXOGENOUS AND ENDOGENOUS TECHNOLOGICAL PROGRESS

The neoclassical optimal growth model discussed in the preceding section provides a steady state in which the per-capita consumption stays constant at c̄, leaving no hope of any further improvement in the average standard of living. The culprit responsible for the rigid capping of per-capita consumption is the static nature of the production function: since the same production technology holds for all t, no upward shift over time can occur. Once technological progress is allowed, however, we can easily remove the cap on per-capita consumption.

Dynamic Production Functions

For a general representation of a dynamic production function, we can simply write Y = Y(K, L, t). Such a representation shows that technological progress does take place, but it offers no explanation of how the progress comes into being; it can thus be used only for exogenous technological progress. An alternative way of writing a dynamic production function is to introduce a technology variable explicitly into the function. Let A = A(t) represent the state of the art, with dA/dt > 0 as technological progress occurs. With an explicit variable A, we now can either leave A as an exogenous variable or endogenize it by postulating how A is determined in the model.

Neutral Exogenous Technological Progress

Technological progress is said to be "neutral" when it leaves a certain economic variable unaffected under a stipulated circumstance. Specifically, technological progress is Hicks-neutral if it leaves the marginal rate of technical substitution (MRTS = MPP_L/MPP_K) unchanged at the same K/L ratio. The production function can then be written as

    Y = A(t)Y(K, L)

where Y(K, L) is taken to be linearly homogeneous.
The reason why the variable A will not disturb the ratio of the marginal products of K and L is that it lies outside of the Y(K, L) expression. Thus, if we hold K/L constant and examine the MRTS's before and after progress,
the two MRTS's will be identical.

In contrast, technological progress is Harrod-neutral if it leaves the output-capital ratio (Y/K) unchanged at the same MPP_K. A production function that displays this feature is

(9.46)    Y = Y(K, A(t)L)

where Y is taken to be linearly homogeneous in K and A(t)L. Since A(t) is attached exclusively to L, this type of technological progress is said to be purely labor-augmenting, and we may view the product A(t)L as the labor input measured in efficiency units. It is the fact that A is totally detached from the K variable that explains how the Y/K ratio can remain undisturbed at the same MPP_K. A third type of neutrality, Solow neutrality, is the mirror image of Harrod neutrality, with the roles of K and L interchanged: we can write Y = Y(A(t)K, L), where Y is taken to be linearly homogeneous in A(t)K and L.

Neoclassical Optimal Growth with Harrod-Neutral Progress

In growth models with exogenous technological progress, Harrod neutrality is frequently assumed. The main reason for its popularity is that it is perfectly consistent with the notion of a steady state: as long as A increases as a result of technological progress, (9.46) can yield a dynamic equilibrium in which Y and K may grow apace with each other. To capture this feature, we can use the following special version of (9.46):

(9.51)    Y = Y(K, A(t)L)    with A(t) growing at a constant exponential rate a

Previously, we had c = C/L constant in the steady state. To see what happens under Harrod-neutral progress, let us define efficiency labor as η ≡ A(t)L. If we now consider efficiency labor η (rather than natural labor L) as the relevant labor-input variable, we can rewrite (9.51) as Y = Y(K, η). By the assumption of linear homogeneity, output per unit of efficiency labor is

    y_η ≡ Y/η = φ(k_η)    where k_η ≡ K/η

and we shall indeed retain all the assumed features of φ enumerated in connection with (9.24). Then the production function (9.51) can be treated mathematically just as a static production function, despite that its forebear, (9.46), is really dynamic in nature. Note that η grows at the rate a + n, since A grows at the rate a and L at the rate n. The equation of motion for k_η, which can be found by the same procedure used to derive (9.25), is

(9.52)    dk_η/dt = φ(k_η) − c_η − (a + n + δ)k_η    [c_η ≡ C/η]

The optimal control problem is therefore

(9.53)    Maximize  ∫₀^∞ U(c_η)e^(-rt) dt
          subject to  dk_η/dt = φ(k_η) − c_η − (a + n + δ)k_η
          and  k_η(0) given

The similarity between this new formulation and the old problem (9.27) should be patently clear. Since the current-value Hamiltonian in the new problem is

    H_c = U(c_η) + m'[φ(k_η) − c_η − (a + n + δ)k_η]

where m' is the new current-value costate variable, the maximum principle calls for the following conditions:

(9.55)    ∂H_c/∂c_η = U'(c_η) − m' = 0
(9.56)    dm'/dt = −m'[φ'(k_η) − (a + n + δ + r)]
(9.57)    dk_η/dt = φ(k_η) − c_η − (a + n + δ)k_η

These are identical in nature with (9.33) through (9.35). By eliminating the m' variable, the last three equations can be condensed into the differential-equation system [cf. (9.36)]

(9.58)    dc_η/dt = −[U'(c_η)/U''(c_η)][φ'(k_η) − (a + n + δ + r)]
          dk_η/dt = φ(k_η) − c_η − (a + n + δ)k_η

These then give rise to the following pair of equations for the new phase diagram:

(9.59)    c_η = φ(k_η) − (a + n + δ)k_η    [equation for k̇_η = 0 curve]
(9.60)    φ'(k_η) = a + n + δ + r          [equation for ċ_η = 0 curve]

Since the phase-diagram analysis is qualitatively the same as that of Fig. 9.4, there is no need to repeat it. Suffice it to say that the intersection of the k̇_η = 0 curve and the ċ_η = 0 curve delineates a steady state in which Y, K, C, and η all grow at the same rate, a + n. But one major difference should be noted about the new steady state. What is now held constant is c_η = C/η, not per-capita consumption C/L. As a bonus, per-capita consumption C/L = c_η A(t) will rise over time pari passu with A. The cap on the average standard of living has been removed.

Endogenous Technological Progress

While exogenous technological progress has the great merit of simplicity, it shirks the task of explaining the origin of progress. To rectify this shortcoming, it is necessary to endogenize technological progress. As early as the 1960s, economists were already exploring the economics of knowledge creation.
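Before taking up endogenous progress, the removal of the consumption cap claimed above can be illustrated numerically. The sketch below is not from the text; it assumes φ(k_η) = k_η^α and arbitrary parameter values, and shows that although c_η is constant in the new steady state, per-capita consumption C/L = c_η·A(t) grows at the rate a.

```python
import math

alpha, a, n, delta, r = 0.3, 0.02, 0.01, 0.05, 0.03

# New steady state in per-efficiency-labor terms, from (9.59)-(9.60):
k_eta = (alpha / (a + n + delta + r)) ** (1.0 / (1.0 - alpha))  # phi'(k_eta) = a+n+delta+r
c_eta = k_eta ** alpha - (a + n + delta) * k_eta                 # height of kdot_eta = 0 curve

# c_eta = C/eta stays constant, but per-capita consumption C/L = c_eta*A(t)
# rises at the rate a, since eta = A*L.
A0 = 1.0
def c_per_capita(t):
    return c_eta * A0 * math.exp(a * t)

growth = math.log(c_per_capita(11.0) / c_per_capita(10.0))  # log growth over one period
assert c_eta > 0 and abs(growth - a) < 1e-12
```

The steady-state value c_η is fixed, yet living standards rise forever at the rate of technological progress.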
.48)-is really dynamic in nature.60) #(k.61) which implies that .c. it turns out that Harrod-neutral technological progress entails hardly any additional analytical complexity compared with the case where a static production function is used.) k.( a + n + 6 + r)] These then give rise to the following pair of equations for the new phase diagram: (9. despite that its forebear-(9. As early as the 1960s. = 0 curve and the d. = (9. we can rewrite (9.55) (9. As a bonus. it is necessary to endogenize technological progress.( a This is identical in nature with (9.) + ( k . = -71 . per-capita consumption C / L will rise over time pari passu. By the assumption of linear homogeneity..( a + n + S)k.. The equation of motion for k.35)] Then the production function (9.4.51) as (9.) + n + 6)k.C C .22).34)] [cf.> [q(k.C .36)] If we now consider efficiency labor 77 (rather than natural labor L)as the relevant labor-input variable. k.) .

The well-known "learning-by-doing" model of Kenneth Arrow,²⁰ for example, regards gross investment as the measure of the amount of "doing" that results in "learning" (technological progress). As another example, a model of Karl Shell²¹ makes the accumulation of knowledge explicitly dependent on the amount of resources devoted to inventive activity. In the Shell model, the variable A(t), denoting the stock of knowledge, is given the specific pattern of change

(9.62)    dA/dt = σa(t)Y − βA    and    A(0) = A₀

where a(t) denotes the fraction of output channeled toward inventive activity at time t, σ is the research success coefficient, and β is the rate of decay of technical knowledge. Out of the remaining resources, a part will be saved (and invested), so that the K(t) variable changes over time according to

(9.63)    dK/dt = s(1 − a)Y − δK    and    K(0) = K₀

where s denotes the propensity to save and δ denotes the rate of depreciation. If a government authority seeks to maximize social utility, there then arises the optimal control problem

(9.64)    Maximize  ∫₀^∞ U[(1 − s)(1 − a)Y]e^(-ρt) dt
          subject to  (9.62) and (9.63)

This problem contains two state variables (A and K) and two control variables (a and s). The integrand in the objective functional may look unfamiliar, but it is merely another way of expressing U(C)e^(-ρt), because C is the residual of Y after deducting what goes to inventive activity and capital accumulation.
²⁰ Kenneth J. Arrow, "The Economic Implications of Learning by Doing," Review of Economic Studies, June 1962, pp. 155-173.
²¹ Karl Shell, "Toward a Theory of Inventive Activity and Capital Accumulation," American Economic Review, May 1966, pp. 62-68.
The production function in the Shell model makes output depend on capital, labor, and the stock of knowledge, with L assumed constant in order to focus attention on the accumulation of knowledge and capital. The dynamics of accumulation can then be traced from the equation system (9.62) and (9.63) in the two variables A and K. Shell presents no explicit solution of the dynamics of the model, but shows that A and K will approach specific constant limiting values Ā and K̄. There is no need to enumerate these limiting values here, for their significance lies not in their exact magnitudes, but in the fact that they are constant. Since A → Ā and K → K̄ as t → ∞, the specter of a capped standard of living is here to visit us again. One may wonder whether allowing a growing population and making the social utility a function of per-capita consumption will change the outcome. The answer is no.

The difficulty, as Kazuo Sato (a discussant of the Shell paper) points out, may lie in the specification of (9.62), which ties A to Y. If (9.62) is changed so that the creation of new knowledge depends on the existing stock of knowledge itself, then no cap on the growth of knowledge need exist, for the growth rate of A can then remain positive forever. Interestingly, a similar idea is featured in a recent model of endogenous technological progress by Paul Romer,²³ which we shall now briefly discuss.

Endogenous Technology a la Romer

Knowledge, according to the Romer model, can be classified into two major components. The first component, which may be broadly labeled as human capital, is person-specific. It constitutes a "rival good" in the sense that its use by one firm precludes its use by another. Because of the rivalrous nature of human capital, the person who invests in the accumulation of human capital receives the rewards arising therefrom. The other component, to be referred to as technology, is generally available to the public. It is a "nonrival good" in the sense that its use by one firm does not limit its use by others. In a decentralized economy, the nonrivalry feature of technology implies knowledge spillovers, such that the discoverer of new technology will not be the

²³ Paul M. Romer, "Endogenous Technological Change," Journal of Political Economy, Vol. 98, No. 5, Part 2, October 1990, pp. S71-S102.

sole beneficiary of that discovery. The inability of the discoverer to reap all the benefits creates an economic externality that causes private efforts at technological improvement to fall short of what is socially optimal.

In this model, human capital is simply assumed to be fixed and inelastically supplied, with S₀ as its fixed total, although its allocation to different uses is still to be endogenously determined. (Romer denotes human capital by H, but to avoid confusion with the symbol for the Hamiltonian, we shall use S, for skill or skilled labor, instead.) Since S can be used either for the production of the final good or for the improvement of technology, we have S_Y + S_A = S₀. Technology A, on the other hand, is not fixed. It can be created by engaging human capital S_A in research and applying the existing technology A, as follows:

(9.67)    dA/dt = σS_A A

where σ is the research success parameter. Note that (dA/dt)/A = σS_A > 0 as long as both σ and S_A are positive. Thus technology can grow without bound.

The production function for Y is developed in several steps. First, think of technology as made up of an infinite set of "designs" for capital goods, including those that have not yet been invented at the present time; for convenience, we have turned the design index i into a continuous variable. In the production of the final good Y, all four types of inputs are present: human capital S_Y, labor L₀, technology A, and capital K. Romer allows all the designs to enter the following Cobb-Douglas production function for the final good:

(9.68)    Y = S_Y^α L₀^β ∫ x(i)^(1−α−β) di

where L₀ indicates a fixed and inelastically supplied amount of ordinary labor, and x(i) denotes the amount of capital goods embodying design i. If we let x(i) = 0 for i ≥ A, then A can serve as the index of the current technology.
Since all x(i) enter symmetrically in the integrand of (9.68), we may deduce that there is a common level of use, x̄, of all the designs actually in use, so that (9.68) can be simplified to

(9.68')    Y = S_Y^α L₀^β A x̄^(1−α−β)

Next, assume that (1) capital goods are nothing but foregone consumption goods, both types of goods being subject to the same final-good production function, and (2) it takes γ units of capital goods to produce one unit of any type of design. Then the amount of capital actually used will be K = γAx̄, so that x̄ = K/(γA). Substituting this into (9.68') results in

    Y = S_Y^α L₀^β A^(α+β) γ^(α+β−1) K^(1−α−β) = (S_Y A)^α (L₀A)^β γ^(α+β−1) K^(1−α−β)

This equation is closely similar to the Shell formulation, except that, in the knowledge equation (9.67), human capital S_A has replaced a (the fraction of output channeled to research), and β = 0 (technology does not depreciate). More importantly, we observe from the (S_Y A) and (L₀A) expressions that technology can be viewed as human-capital-augmenting as well as labor-augmenting; that is, the production function is characterized by Harrod neutrality, here endogenously introduced rather than exogenously imposed. Our experience with Harrod neutrality suggests that this production function is consistent with a steady state in which technology, and, along with it, output, capital, and consumption, can all grow without bound. Note also that research activity, as specified in (9.67), is assumed to be human-capital-intensive and technology-intensive, with no capital (K) and no ordinary unskilled labor (L) engaged in that activity. Finally, consumption C does not need to be expressed in per-capita terms, because L₀ (identified with population and measured by head count) is constant; thus K rather than K/L can appropriately serve as the variable for analysis.

Facing this background, society may consider an optimal control problem with two state variables, A and K, having (9.67) and the net-investment relation

(9.69)    dK/dt = Y − C

as their equations of motion, and two control variables, C and S_A. Adopting the specific constant-elasticity utility function U(C) = C^(1−θ)/(1 − θ), and substituting S_Y = S₀ − S_A into the production function, the control problem takes the form

(9.70)    Maximize  ∫₀^∞ [C^(1−θ)/(1 − θ)]e^(-ρt) dt
          subject to  dA/dt = σS_A A
                      dK/dt = γ^(α+β−1)(S₀ − S_A)^α L₀^β A^(α+β) K^(1−α−β) − C

Then we have the current-value Hamiltonian

    H_c = C^(1−θ)/(1 − θ) + λ_A(σS_A A) + λ_K[γ^(α+β−1)(S₀ − S_A)^α L₀^β A^(α+β) K^(1−α−β) − C]

where λ_A and λ_K represent the shadow prices of A and K, respectively. For the control variables, we get the conditions

(9.71)    ∂H_c/∂C = C^(-θ) − λ_K = 0    or    λ_K = C^(-θ)
(9.72)    ∂H_c/∂S_A = λ_A σA − λ_K αY/(S₀ − S_A) = 0

In addition, the maximum principle requires the following equations of motion for the costate variables:

(9.73)    dλ_A/dt = ρλ_A − ∂H_c/∂A
(9.74)    dλ_K/dt = ρλ_K − ∂H_c/∂K

The Steady State

Since the system contains four differential equations, it cannot be analyzed with a phase diagram. And solving the system explicitly for its dynamics is not simple. Instead, Romer elects to focus the discussion on the properties of the balanced growth equilibrium inherent in the model: a steady state with Harrod-neutral technological progress. Questions of interest include: What is the rate of growth in that steady state? How is the growth rate affected by the various parameters? What economic policies can be pursued to promote growth?

The basic feature of such a steady state is that the variables Y, K, C, and A all grow at the same rate, with S_A (and hence S_Y) constant. We thus have in the steady state

(9.75)    Ẏ/Y = K̇/K = Ċ/C = Ȧ/A = σS_A    [by (9.67)]

But we want to express the growth rate in terms of the parameters only, which requires solving for the steady-state S_A itself. Romer first calculates λ̇_A/λ_A from (9.73). He then obtains another λ̇_A/λ_A expression by combining (9.72) with λ_K = C^(-θ) from (9.71), taking advantage of the relation Ċ/C = σS_A. Equating the two expressions and solving for S_A yields the steady-state allocation of human capital to research.
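The balanced-growth property asserted in (9.75) can be checked numerically. The sketch below is illustrative only, with arbitrary parameter values; it imposes a common growth rate σS_A on A and K in the reduced-form production function and confirms that Y then grows at the same rate.

```python
import math

# Hypothetical parameter values (assumptions, not from the text).
alpha, beta, gamma = 0.25, 0.35, 2.0
sigma, S0, SA = 0.05, 5.0, 2.0
L0, A0, K0 = 1.0, 1.0, 10.0
SY = S0 - SA

g = sigma * SA   # candidate common growth rate: Adot/A = sigma*SA, by (9.67)

def Y(t):
    # Reduced-form production function:
    # Y = gamma**(alpha+beta-1) * SY**alpha * L0**beta * A**(alpha+beta) * K**(1-alpha-beta)
    A = A0 * math.exp(g * t)   # technology grows at sigma*SA
    K = K0 * math.exp(g * t)   # capital is imposed to grow at the same rate
    return (gamma ** (alpha + beta - 1.0) * SY ** alpha * L0 ** beta
            * A ** (alpha + beta) * K ** (1.0 - alpha - beta))

g_Y = math.log(Y(1.0) / Y(0.0))   # realized growth rate of output
assert abs(g_Y - g) < 1e-12       # Ydot/Y = Kdot/K = Adot/A = sigma*SA
```

Because the exponents on A and K sum to one, output inherits the common growth rate exactly, which is what makes the balanced-growth configuration internally consistent.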
Questions of interest include: What is the rate of growth in that steady state? How is the growth rate affected by the various parameters? What economic policies can be pursued to promote growth? The basic feature of such a steady state is that the variables Y .. K and .67) and (9. Adopting the specific constant-elasticity utility function The Steady State Since there are four differential equations.'=A 0 as. we then ascertain that SA .6711 (9.73) ac aHc . is Just output not consumed. / A . then we can equate it to (9.' . K.76) and solve for S.76) and (9. C and S.l ~ u + P(so S . and simplifying. -= aHc C .77) and solving for SA. we have I / CHAPTER 9 INFINITE-HORIZON PROBLEMS (9. and C all grow at the same rate.

For convenience, define the shorthand symbol Λ ≡ β + θα. Carrying out the elimination just described, S_A attains in steady state the constant value

    S_A = [(α + β)σS₀ − αρ]/(σΛ)

It follows that the parametrically expressed steady-state growth rate is

(9.79)    Ẏ/Y = K̇/K = Ċ/C = Ȧ/A = σS_A = [σ(α + β)S₀ − αρ]/Λ

The result in (9.79) shows not only the growth rate, but also how the various parameters affect that rate; the effects can be found by taking the partial derivatives of (9.79). Visual inspection is sufficient to establish that human capital S₀ has a positive effect on the growth rate, as does the research success parameter σ, but a negative effect is exerted by the discount rate ρ.

To conclude this discussion, we should point out a complication regarding the rate-of-growth expression in (9.79). A rate of growth should be a pure number, and economic commonsense would lead us to expect it to be a positive fraction. Yet the expression in (9.79) comprises not only pure-number parameters, but also the parameter S₀, which has a physical unit. The magnitude of the expression thus becomes problematic. The problem may have started at an early stage with the specification of (9.67), where S_A, a variable with a physical unit, enters into the rate of growth Ȧ/A. If so, the remedy would be to replace S_A with a suitable ratio figure in (9.67).
More formally.274 FART I: WTXWAL C~WTWL THEORY attains in steady state the constant value CHAPTER It follows that the parametrically expressed steady-state growth rate is .24). and 8 . The magnitude of the expression thus becomes problematic. but also the parameter So which has a physical unit. Verify the validity of the maximum-principle conditions in (9.. But since optimal control problems contain not only state variables. and inequality integral con- .79). where SA.57). where we discussed various types of terminal lines.the technological progress can be considered to be either (a) Hicks-neutral. and (9. and the conditions that are developed to deal with them are in the nature of transversality conditions. the remedy would be to replace SA with a suitable ratio figure in (9.57). as does the research success parameter a .= . equality integral constraints. control variables are present in the constraints.52) by following the same procedure used to derive (9. p.1 CONSTRATNTS INVOLVING CONTROL VARLABLES Within the first category of constraints-those with control variables present-four basic types can be considered: equality constraints. inequality constraints. OPTIMAL CONTROL WITH CONSTRAINTS 1 EXERCISE 9.- I Y K .55). so that the constraints only affect the state variables. we should point out a complication regarding the rate-of-growth expression in (9. but also control variables. Derive the equation of motion (9.

Equality Constraints

Let there be two control variables in a problem, u₁ and u₂, that are required to satisfy the condition g(t, y, u₁, u₂) = c for all t ∈ [0, T]. The control problem may then be stated as

(10.1)  Maximize  ∫₀ᵀ F(t, y, u₁, u₂) dt
        subject to  ẏ = f(t, y, u₁, u₂)
                    g(t, y, u₁, u₂) = c
        and         boundary conditions

This is a simple version of the problem with m control variables and q equality constraints, where it is required that q < m. We shall in general include the state variable alongside the control variables in the constraints, but the methods of treatment discussed in this section are applicable even when the state variables do not appear.

The maximum principle calls for the maximization of the Hamiltonian

(10.2)  H = F(t, y, u₁, u₂) + λ(t) f(t, y, u₁, u₂)

for every t ∈ [0, T]. But this time the maximization of H is subject to the constraint g(t, y, u₁, u₂) = c. Accordingly, we form the Lagrangian expression

(10.3)  𝒵 = H + θ(t)[c − g(t, y, u₁, u₂)]

where the Lagrange multiplier θ is made dynamic, that is, a function of t. This is necessitated by the fact that the g constraint must be satisfied at every t in the planning period. Assuming an interior solution for each u_j, the first-order conditions for the constrained maximization of H are

(10.4)  ∂𝒵/∂u₁ = 0    and    (10.5)  ∂𝒵/∂u₂ = 0

Simultaneously, we must also set

(10.6)  ∂𝒵/∂θ = c − g(t, y, u₁, u₂) = 0

to ensure that the constraint will always be in force. This must be supported, of course, by a proper second-order condition or a suitable concavity condition.

The rest of the maximum-principle conditions includes

ẏ = ∂𝒵/∂λ = f(t, y, u₁, u₂)    [equation of motion for y]
λ̇ = −∂𝒵/∂y                      [equation of motion for λ]

plus an appropriate transversality condition. We shall refer to the g function as the constraint function, and the constant c as the constraint constant. Note that the equation of motion for y would turn out the same whether we differentiate the Lagrangian function (the augmented Hamiltonian) or the original Hamiltonian function with respect to λ. The equation of motion for λ, on the other hand, must be derived from the Lagrangian. This is because, through the constraint, the y variable impinges upon the range of choice of the control variables, and such effects must be taken into account in determining the path for the costate variable λ. The correct choice is the Lagrangian expression λ̇ = −∂𝒵/∂y.

While it is feasible to solve a problem with equality constraints in the manner outlined above, it is usually simpler in actual practice to use substitution to reduce the number of variables we have to deal with. Substitution is therefore recommended whenever it is feasible.

Inequality Constraints

The substitution method is not as easily applicable to problems with inequality constraints, so it is desirable to have an alternative procedure for such a situation. This is because inequality constraints allow us much more latitude in our choices than constraints in the form of rigid equalities. For simplicity, we shall illustrate this type of problem with two control variables and two inequality constraints.
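Before turning to the inequality case, the equality-constraint conditions (10.4) through (10.6) can be illustrated symbolically. The sketch below (Python with sympy) uses a hypothetical quadratic integrand, a linear state equation, and a linear constraint, none of which come from the text; it forms the Lagrangian 𝒵 = H + θ(c − g), solves the first-order conditions for u₁, u₂, and θ, and takes the costate equation from 𝒵 rather than H.

```python
import sympy as sp

t, y, u1, u2, lam, theta, c = sp.symbols('t y u1 u2 lam theta c')

# Hypothetical specimen functions (for illustration only, not from the text):
F = -u1**2 - u2**2 + y      # integrand F(t, y, u1, u2)
f = y + u1 + u2             # state equation ydot = f(t, y, u1, u2)
g = u1 + 2*u2               # constraint g(t, y, u1, u2) = c

H = F + lam*f               # Hamiltonian
Z = H + theta*(c - g)       # Lagrangian (augmented Hamiltonian)

# First-order conditions: dZ/du1 = dZ/du2 = 0 and dZ/dtheta = c - g = 0
foc = sp.solve([sp.diff(Z, u1), sp.diff(Z, u2), sp.diff(Z, theta)],
               [u1, u2, theta], dict=True)[0]

# Costate equation, correctly taken from the Lagrangian: lambdadot = -dZ/dy
lam_dot = -sp.diff(Z, y)
print(foc[theta], lam_dot)
```

Because the specimen constraint happens to contain no y, −∂𝒵/∂y and −∂H/∂y coincide here; were y present in g, the two would differ, which is precisely why the costate equation must come from 𝒵.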

(10.8)  Maximize  ∫₀ᵀ F(t, y, u₁, u₂) dt
        subject to  ẏ = f(t, y, u₁, u₂)
                    g¹(t, y, u₁, u₂) ≤ c₁
                    g²(t, y, u₁, u₂) ≤ c₂
        and         boundary conditions

We now augment the Hamiltonian into a Lagrangian function:

(10.9)  𝒵 = F(t, y, u₁, u₂) + λ(t) f(t, y, u₁, u₂) + θ₁(t)[c₁ − g¹(t, y, u₁, u₂)] + θ₂(t)[c₂ − g²(t, y, u₁, u₂)]

The essence of 𝒵 may become more transparent if we suppress all the arguments and simply write

(10.9')  𝒵 = F + λf + θ₁(c₁ − g¹) + θ₂(c₂ − g²)

The Hamiltonian defined in (10.2) is still valid for the present problem. But since the Hamiltonian is now to be maximized with respect to u₁ and u₂ subject to the two inequality constraints, we need to invoke the Kuhn-Tucker conditions. Assuming interior solutions, we have in (10.10) the first-order conditions

(10.10)  ∂𝒵/∂u_j = 0    θᵢ ≥ 0    ∂𝒵/∂θᵢ ≥ 0    θᵢ(∂𝒵/∂θᵢ) = 0    (j = 1, 2; i = 1, 2)

where we evaluate the partial derivatives at the optimal values of the y and u variables. This procedure is directly comparable to that used in nonlinear programming. Note that our constraints are written as g ≤ c (rather than g ≥ c). The ∂𝒵/∂θᵢ ≥ 0 condition merely restates the ith constraint, and the complementary-slackness condition θᵢ(cᵢ − gⁱ) = 0 ensures that the θ-multiplier terms in (10.9) will vanish in the solution, so that the value of 𝒵 will be identical with that of H = F + λf after maximization.

Note also that, unlike in nonlinear programming, the u variables are not restricted to nonnegative values in problem (10.8). If the latter problem contains additional nonnegativity restrictions

u₁(t) ≥ 0    and    u₂(t) ≥ 0

then, in addition, we should replace the ∂𝒵/∂u_j = 0 conditions in (10.10) with

∂𝒵/∂u_j ≤ 0    u_j ≥ 0    u_j(∂𝒵/∂u_j) = 0

without separate θ(t) type of multiplier terms appended on account of the additional constraints. The alternative approach of adding a new multiplier for each nonnegativity restriction is explored in a problem in Exercise 10.1.

For these conditions to be necessary, a constraint qualification must be satisfied. According to a theorem of Arrow, Hurwicz, and Uzawa, any of the following conditions will satisfy the constraint qualification¹:

(1) All the constraint functions gⁱ are concave in the control variables [here, concave in (u₁, u₂)].
(2) All the constraint functions gⁱ are linear in the control variables [here, linear in (u₁, u₂)], a special case of (1).
(3) All the constraint functions gⁱ are convex in the control variables, and, in addition, the constraint set has a nonempty interior. (That is, there exists a point in the control region at which all constraints gⁱ are strictly < cᵢ.)
(4) The gⁱ functions satisfy the rank condition: taking only those constraints that turn out to be effective or binding (satisfied as strict equalities), form the matrix of partial derivatives [∂gᵉ/∂u_j] (where e indicates "effective constraints only"). The rank condition is that the rank of this matrix be equal to the number of effective constraints.

¹ K. J. Arrow, L. Hurwicz, and H. Uzawa, "Constraint Qualifications in Nonlinear Programming," Naval Research Logistics Quarterly, January 1961. The summary version here is adapted from Akira Takayama, Mathematical Economics, 2d ed., Cambridge University Press, Cambridge, 1985, p. 648.

Other maximum-principle conditions include the equations of motion for y and λ, as in (10.6) and (10.7); the equation of motion for λ is again λ̇ = −∂𝒵/∂y, as in the equality-constraint case. Wherever appropriate, transversality conditions must be added.

Isoperimetric Problem

When an equality integral constraint is present, the control problem is known as an isoperimetric problem. We shall illustrate the solution method with a problem that contains one state variable, one control variable, and one integral constraint:

(10.14)  Maximize  ∫₀ᵀ F(t, y, u) dt
         subject to  ẏ = f(t, y, u)
                     ∫₀ᵀ G(t, y, u) dt = k    (k given)
                     y(0) = y₀    y(T) free    (y₀, T given)

Two features of such a problem are worth noting. First, although the constraint is in the nature of a strict equality, the integral aspect of it obviates the need to restrict the number of constraints relative to the number of control variables. Second, as we shall see, the costate variable associated with the integral constraint is constant over time.

The approach to be used here is to introduce a new state variable Γ(t) into the problem such that the integral constraint can be replaced by a condition in terms of Γ(T). To this end, let us define

(10.15)  Γ(t) = −∫₀ᵗ G(τ, y, u) dτ

where the upper limit of integration is the variable t, not the terminal time T. The derivative of this variable is

(10.16)  Γ̇ = −G(t, y, u)    [equation of motion for Γ]

and the initial and terminal values of Γ(t) in the planning period are

Γ(0) = 0    and    Γ(T) = −k

From (10.15), it is clear that we can replace the given integral constraint by a terminal condition on the Γ variable. By incorporating Γ into the problem as a new state variable, we can restate (10.14) as

(10.19)  Maximize  ∫₀ᵀ F(t, y, u) dt
         subject to  ẏ = f(t, y, u)
                     Γ̇ = −G(t, y, u)
                     y(0) = y₀    y(T) free
                     Γ(0) = 0    Γ(T) = −k    (y₀, T, k given)

This new problem is an unconstrained problem with two state variables, y and Γ. While the y variable has a vertical terminal line, the new Γ variable has a fixed terminal point. Note that this procedure of substituting out the constraint can be repeatedly applied to additional integral constraints, with each application resulting in a new state variable for the problem. This is why there is no need to limit the number of integral constraints.

Inasmuch as this problem is now an unconstrained problem, we can work with the Hamiltonian without first expanding it into a Lagrangian function. Defining the Hamiltonian as

(10.20)  H = F(t, y, u) + λ(t) f(t, y, u) − μ(t) G(t, y, u)

where μ is the costate variable associated with Γ, we have the following conditions from the maximum principle:

(10.21)  Max_u H    for all t ∈ [0, T]
         ẏ = ∂H/∂λ         [equation of motion for y]
         Γ̇ = ∂H/∂μ         [equation of motion for Γ]
         λ̇ = −∂H/∂y        [equation of motion for λ]
         μ̇ = −∂H/∂Γ        [equation of motion for μ]
         λ(T) = 0           [transversality condition]
         Γ(T) = −k

What distinguishes (10.21) from the conditions for the usual unconstrained problem is the presence of the pair of equations of motion for Γ and μ.
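The bookkeeping behind the substitution in (10.15) and (10.16) can be checked numerically. In the sketch below, the constraint integrand G and the y and u paths are arbitrary illustrative functions (not from the text); accumulating Γ̇ = −G by Euler steps from Γ(0) = 0 reproduces minus the integral of G, which is exactly what allows the integral constraint to be rewritten as the terminal condition Γ(T) = −k.

```python
import math

# Hypothetical specimen paths and constraint integrand (not from the text)
def G(t, y, u):
    return u**2 + 0.1*y

def y_path(t):
    return math.exp(0.5*t)

def u_path(t):
    return math.sin(t) + 1.0

T, N = 2.0, 100_000
dt = T / N
gamma = 0.0        # Gamma(0) = 0
integral_G = 0.0   # direct quadrature of the integral in the constraint
for i in range(N):
    ti = i * dt
    g_val = G(ti, y_path(ti), u_path(ti))
    gamma -= g_val * dt        # Euler step of Gamma' = -G
    integral_G += g_val * dt

# Gamma(T) = -(integral of G), so "integral of G = k" becomes "Gamma(T) = -k"
print(gamma, integral_G)
```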

Since the Γ variable does not appear in the Hamiltonian, it follows that

(10.22)  μ̇ = −∂H/∂Γ = 0    so that    μ(t) = constant

This validates our earlier assertion that the costate variable associated with the integral constraint is constant over time. Since the Γ variable is an artifact whose mission is only to guide us to add the −μG(t, y, u) term to the Hamiltonian (a mission that has already been accomplished), and whose time path is of no direct interest, we may omit its equation of motion from (10.21) at no loss. The equation of motion for μ, on the other hand, does impart a significant piece of information: μ is a constant. As long as we remember that the μ multiplier is a constant, we can safely omit its equation of motion from (10.21) as well.

Inequality Integral Constraint

Finally, we consider the case where the integral constraint enters the problem as an inequality:

(10.23)  Maximize  ∫₀ᵀ F(t, y, u) dt
         subject to  ẏ = f(t, y, u)
                     ∫₀ᵀ G(t, y, u) dt ≤ k
                     y(0) = y₀    y(T) free    (y₀, T, k given)

Taking a cue from the isoperimetric problem, we can again dispose of the inequality integral constraint by making a substitution. Define a new state variable Γ the same as in (10.15). The derivative of Γ is simply

(10.24)  Γ̇ = −G(t, y, u)    [equation of motion for Γ]

and its initial and terminal values are

(10.25)  Γ(0) = 0    and    Γ(T) ≥ −k

Using (10.24) and (10.25), we can restate problem (10.23) as

(10.26)  Maximize  ∫₀ᵀ F(t, y, u) dt
         subject to  ẏ = f(t, y, u)
                     Γ̇ = −G(t, y, u)
                     y(0) = y₀    y(T) free
                     Γ(0) = 0    Γ(T) ≥ −k    (y₀, T, k given)

Like the problem in (10.19), this is an unconstrained problem with two state variables. But, unlike (10.19), the new variable Γ in (10.26) has a truncated vertical terminal line. The Hamiltonian of problem (10.26) is simply (10.20) again, and the maximum principle requires that

(10.28)  Max_u H    for all t ∈ [0, T]
         ẏ = ∂H/∂λ         [equation of motion for y]
         λ̇ = −∂H/∂y        [equation of motion for λ]
         Γ̇ = ∂H/∂μ         [equation of motion for Γ]
         μ̇ = −∂H/∂Γ = 0    so that    μ(t) = constant
         λ(T) = 0           [transversality condition for y]
         μ(T) ≥ 0    Γ(T) + k ≥ 0    μ(T)[Γ(T) + k] = 0    [transversality condition for Γ]

It becomes clear, the reader will note, that because the Hamiltonian is independent of Γ, the multiplier associated with any integral constraint, whether equality or inequality, is constant over time. In the present problem, moreover, the transversality condition for Γ involves complementary slackness.

Since μ is constant, we can deduce from the transversality condition that the constant value of μ is nonnegative. But, since μ was substituted out of the problem statement, we can in fact omit from (10.28) the equations of motion for Γ and μ, as long as we bear in mind that μ is a nonnegative constant. The transversality condition for Γ, involving complementary slackness, should nonetheless be retained to reflect the inequality nature of the constraint. With these omissions, the conditions in (10.28) can be restated without reference to Γ as follows:

(10.28')  Max_u H    for all t ∈ [0, T]
          ẏ = ∂H/∂λ
          λ̇ = −∂H/∂y
          μ(t) = constant ≥ 0
          λ(T) = 0
          μ[k − ∫₀ᵀ G(t, y, u) dt] = 0

In sum: For every constraint in the form of g(t, y, u) = c or g(t, y, u) ≤ c, we append a new θ-multiplier term to the Hamiltonian, which is thereby expanded into a Lagrangian. And for every integral constraint, whether equality or inequality, we introduce a new (suppressible) state variable Γ, whose costate variable μ, a constant, is reflected in a −μG(t, y, u) term in the Hamiltonian. If a constraint is an inequality, there will be a complementary-slackness condition on the multiplier θ or μ, as the case may be. In the preceding discussion, the four types of constraints have been explained separately. But in case they appear simultaneously in a problem, we can still accommodate them by combining the procedures appropriate to each type of constraint present.

EXAMPLE 1  The political-business-cycle model of Sec. 7.6,

(10.29)  Maximize  ∫₀ᵀ v(U, p) eʳᵗ dt
         subject to  π̇ = b(p − π)
                     p = φ(U) + aπ
                     π(0) = π₀    π(T) free    (π₀, T given)

contains an equality constraint, p = φ(U) + aπ. Earlier, we solved the problem as an unconstrained problem by first substituting out the constraint equation, so that U was the only control variable. Now we show how to deal with it directly as a constrained problem.

In this problem, π is the state variable, since it has an equation of motion. Now that p is retained in the model, it ought to be taken as another control variable. The constraint equation is in line with the general format of g(t, y, u) = c, although there is no explicit t argument in it. Accordingly, we can write the Lagrangian

(10.30)  𝒵 = v(U, p) eʳᵗ + λb(p − π) + θ[φ(U) + aπ − p]

and the maximum principle calls for the conditions

(10.31)  ∂𝒵/∂U = 0
(10.32)  ∂𝒵/∂p = 0
(10.33)  ∂𝒵/∂θ = 0    [restating the constraint]
(10.34)  π̇ = ∂𝒵/∂λ    [state equation of motion]
(10.35)  λ̇ = −∂𝒵/∂π    [costate equation of motion]

plus a transversality condition. These should, of course, be equivalent to the conditions derived earlier in Sec. 7.6. If the following specific functions are adopted:

v(U, p) = −U² − hp    (h > 0)    [from (7.62)]
φ(U) = j − kU    (j, k > 0)    [from (7.63)]

then the Lagrangian becomes

(10.30')  𝒵 = (−U² − hp) eʳᵗ + λb(p − π) + θ(j − kU + aπ − p)

To verify the equivalence of the two approaches, let us reproduce here for comparison the major conditions from Sec. 7.6.

We shall first show that (10.31) and (10.32), the conditions on the two control variables U and p, are together equivalent to the condition used in Sec. 7.6 to maximize the Hamiltonian. Solving (10.32) for θ and substituting the result into (10.31) eliminates θ and yields a single condition in U and p, which is the same as the first-order condition derived from H in Sec. 7.6. Next, (10.33) merely restates the constraint p = φ(U) + aπ; substituting it into the state equation shows that (10.34) is the same equation of motion for π as before. Finally, even though the two equations of motion (10.34) and (10.35) here are derived from 𝒵 instead of H, they are the same as those derived from H in Sec. 7.6, because the θ-multiplier term vanishes when the constraint is satisfied. This completes the intended demonstration: the two approaches are equivalent.

EXAMPLE 2  In Sec. 7 (Example 3), we encountered a time-optimal problem

(10.40)  Maximize  ∫₀ᵀ −1 dt
         subject to  ẏ = y + u
                     y(0) = 1    y(T) = 11    T free

with a constrained control variable, u(t) ∈ [−1, 1]. We solved it by resorting to the signum function (7.48) in choosing the optimal control. But it is also possible to view the control set as made up of two inequality constraints,

(10.41)  −1 ≤ u(t)    and    u(t) ≤ 1

and solve the problem accordingly. The Hamiltonian of the problem is H = −1 + λ(y + u). To satisfy the maximum principle, we augment the Hamiltonian into a Lagrangian by taking into account the two constraints in (10.41):

(10.42)  𝒵 = −1 + λ(y + u) + θ₁(1 − u) + θ₂(u + 1)

Note that the constraint qualification is satisfied here, since each constraint function in (10.41) is linear in u.

We shall demonstrate here that λ > 0 indeed implies u* = 1. Let λ > 0. To satisfy the maximum principle, we must have

(10.43)  ∂𝒵/∂u = λ − θ₁ + θ₂ = 0
(10.44)  θ₁ ≥ 0    1 − u ≥ 0    θ₁(1 − u) = 0
(10.45)  θ₂ ≥ 0    u + 1 ≥ 0    θ₂(u + 1) = 0

Since λ > 0 and θ₂ ≥ 0, condition (10.43) implies θ₁ = λ + θ₂ > 0. Then, by the complementary-slackness condition in (10.44), θ₁ > 0 implies 1 − u = 0, so the optimal control is u* = 1. The other case (λ < 0) is similar, and will be left to the reader as an exercise. Thus (10.43) through (10.45) would lead us to the same choice of control as the signum function rule (7.48), and nothing more needs to be said here about these.

Current-Value Hamiltonian and Lagrangian

When the constrained problem involves a discount factor, it is possible to use the current-value Hamiltonian H_c in lieu of H. In that case, the Lagrangian 𝒵 should be replaced by the current-value Lagrangian 𝒵_c.
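Before moving on, the bang-bang prescription of Example 2 can be checked by simulation. The sketch below assumes the problem data as reconstructed here (ẏ = y + u, y(0) = 1, target y(T) = 11, u confined to [−1, 1]); with u* = +1 the state follows y(t) = 2eᵗ − 1, so the minimal time is T = ln 6, and any interior control takes longer.

```python
import math

# Simulate the (assumed) Example 2 data: ydot = y + u, y(0) = 1,
# stop when y reaches the target level 11; u is held constant.
def time_to_target(u, y0=1.0, y_target=11.0, dt=1e-5):
    y, t = y0, 0.0
    while y < y_target:
        y += (y + u) * dt    # Euler step of the state equation
        t += dt
    return t

T_bang = time_to_target(u=1.0)      # the bang-bang control for lambda > 0

# With u = +1, y(t) = 2e^t - 1, so the minimal time is T = ln 6
print(T_bang, math.log(6.0))
```

For comparison, an interior control such as u = 0.5 yields a strictly longer arrival time, consistent with the maximum-principle choice u* = 1.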

Consider the inequality-constraint problem

Maximize  ∫₀ᵀ F(t, y, u) e^{−ρt} dt
subject to  ẏ = f(t, y, u)
            g(t, y, u) ≤ c
and         boundary conditions

The regular Hamiltonian and Lagrangian are, respectively,

(10.50)  H = F(t, y, u) e^{−ρt} + λ f(t, y, u)
(10.51)  𝒵 = H + θ[c − g(t, y, u)]

And the maximum principle calls for (assuming an interior solution²) ∂𝒵/∂u = 0, along with ẏ = ∂𝒵/∂λ, λ̇ = −∂𝒵/∂y, the complementary-slackness conditions on θ, plus an appropriate transversality condition.

If we decide to use H_c in place of H, we can introduce new multipliers

(10.55)  m = λ eᵖᵗ    (implying λ = m e^{−ρt})
         n = θ eᵖᵗ    (implying θ = n e^{−ρt})

and define the current-value versions of H and 𝒵 as follows:

(10.56)  H_c = H eᵖᵗ = F(t, y, u) + m f(t, y, u)
         𝒵_c = 𝒵 eᵖᵗ = H_c + n[c − g(t, y, u)]

The conditions in (10.53) can then be equivalently expressed with the new multipliers m and n, and the maximum-principle conditions can, accordingly, be stated in terms of the current-value Hamiltonian and Lagrangian. The only major modification required when we use 𝒵_c is found in the equation of motion for the costate variable. The equivalent new statement, using the new variable m, is

(10.60)  ṁ = ρm − ∂𝒵_c/∂y

To verify this, first differentiate the λ expression in (10.55) with respect to t, to obtain λ̇ = (ṁ − ρm) e^{−ρt}. Then differentiate 𝒵 = 𝒵_c e^{−ρt} with respect to y to get ∂𝒵/∂y = (∂𝒵_c/∂y) e^{−ρt}. Since λ̇ = −∂𝒵/∂y, equating these two expressions and multiplying through by eᵖᵗ, we find that ṁ − ρm = −∂𝒵_c/∂y, from which the result in (10.60) immediately follows. The rest of the conditions, plus an appropriate transversality condition, carry over in a straightforward manner.

² If the control variable is constrained by a nonnegativity requirement, the condition ∂𝒵_c/∂u = 0 must be changed to ∂𝒵_c/∂u ≤ 0, u ≥ 0, and u(∂𝒵_c/∂u) = 0 [cf. (10.12)].

For problems with integral constraints only, no Lagrangian function is needed. This is because we absorb each integral constraint into the problem via a new state variable, such as Γ in (10.19) and (10.26), and the procedure outlined in Sec. 8.2 can be applied in a straightforward manner.
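The differentiation argument behind (10.60) can be verified symbolically. The sketch below leaves 𝒵_c as an unspecified function of y and m (only the exponential bookkeeping matters for the derivation) and simply carries out the two differentiations, recovering ṁ = ρm − ∂𝒵_c/∂y.

```python
import sympy as sp

t = sp.Symbol('t')
rho, y = sp.symbols('rho y')
m = sp.Function('m')(t)

# Z_c is left as an unspecified function of y and m
Zc = sp.Function('Zc')(y, m)

lam = m * sp.exp(-rho*t)     # lambda = m e^{-rho t}
Z = Zc * sp.exp(-rho*t)      # Z = Z_c e^{-rho t}

# Present-value costate equation: lambdadot = -dZ/dy; solve for mdot
mdot = sp.solve(sp.Eq(sp.diff(lam, t), -sp.diff(Z, y)), sp.diff(m, t))[0]
print(mdot)
```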

Sufficient Conditions

The Mangasarian and Arrow sufficient conditions, previously discussed in the context of unconstrained problems, turn out to be valid also for constrained problems when the terminal time T is fixed. This would include cases with a fixed terminal point, a vertical terminal line, or a truncated vertical terminal line. Let us use the symbol u to represent the vector of control variables, u = (u₁, ..., u_m), and let H⁰ denote the maximized Hamiltonian, that is, the Hamiltonian evaluated along the u*(t) path. In the present context, the Hamiltonian is understood to be maximized subject to all the constraints of the g(t, y, u) = c or g(t, y, u) ≤ c form present in the problem. With these symbols, we can consolidate the Mangasarian and Arrow sufficient conditions into a single statement³:

(10.61)  The maximum-principle conditions are sufficient for the global maximization of the objective functional if
         either  𝒵 is concave in (y, u) for all t ∈ [0, T]
         or      H⁰ is concave in y for all t ∈ [0, T], for given λ

A few comments about (10.61) may be added here. First, concavity in (y, u) means concavity in the variables y and u jointly, not separately in y and in u. Second, since 𝒵 is composed of the F, f, g, and G functions as follows:

𝒵 = F + λf − μG + θ[c − g]

it is clear that (10.61) will be satisfied if the following are simultaneously true:

F is concave in (y, u)
λf is concave in (y, u)
μG is convex in (y, u), where μ is a nonnegative constant [by (10.28)]
θg is convex in (y, u), where θ ≥ 0 [by (10.10)]

Since θ ≥ 0, the convexity of θg is ensured by the convexity of g itself; similarly, the convexity of μG is ensured by the convexity of G itself. Besides, since every integral constraint present in the problem is reflected in H via the new costate variable μ, and since the other multiplier terms vanish in the solution, whatever concavity requirement is imposed on 𝒵 must also be similarly reflected in H⁰. Finally, if the current-value Hamiltonian and Lagrangian are used, (10.61) can be easily adapted by replacing 𝒵 by 𝒵_c and H⁰ by H_c⁰.

These conditions are also applicable to infinite-horizon problems, but in the latter case (10.61) is to be supplemented by a transversality condition

(10.62)  lim_{t→∞} λ(t)[y(t) − y*(t)] ≥ 0    [cf. (9.21)]

³ For a more comprehensive statement of sufficient conditions, see Ngo Van Long and Neil Vousden, "Optimal Control Theorems," Essay 1 in John D. Pitchford and Stephen J. Turnovsky, eds., Applications of Control Theory to Economic Analysis, North-Holland, Amsterdam, 1977, pp. 11-34 (especially pp. 25-28, Theorems 6 and 7).

EXERCISE 10.1

1. In Example 1, p is taken to be an additional control variable. Why is it not taken to be a new state variable?

2. In Example 2, it is demonstrated that if λ > 0, then the optimal control is u* = 1. By analogous reasoning, demonstrate that, if λ < 0, then the optimal control is u* = −1.

3. An individual's productive hours per day (24 hours less sleeping and leisure time, normalized to one) can be spent on either work or study. Work results in immediate earnings; study contributes to human capital K(t) (knowledge) and improves future earnings. Current earnings are determined by the level of human capital multiplied by the time spent on work. Let the proportion of productive hours spent on study be denoted by s. The rate of change of human capital, K̇, is assumed to have a constant elasticity n (0 < n < 1) with respect to sK.
   (a) Formulate the individual's problem of work-study decision for a planning period [0, T] with a discount rate ρ (0 < ρ < 1).
   (b) Name the state and control variables.
   (c) What boundary conditions are appropriate for the state variable?
   (d) Is this problem a constrained control problem? Which type of constraints does it have?
   (e) How does this problem resemble the Dorfman model? How does it differ from the Dorfman model?

4. Consider the problem

   Maximize  ∫₀ᵀ F(t, y, u) dt
   subject to  ẏ = f(t, y, u)
               y(0) = y₀    y(T) free    (y₀, T given)
   and         0 ≤ u(t)

   We can, of course, apply the Kuhn-Tucker conditions to the nonnegativity restriction on u(t) without introducing a specific multiplier. Alternatively, we may treat the nonnegativity restriction as an additional constraint of the g(t, y, u) ≤ c type, as we did in (10.41). [Use θ' to denote the new multiplier for the 0 ≤ u(t) constraint and 𝒵' to denote the corresponding new Lagrangian.] Write out the maximum-principle conditions under both approaches, assuming interior solutions where appropriate, and compare the results.

6. In a maximization problem, let there be two state variables (y₁, y₂), two control variables (u₁, u₂), one inequality constraint, and one inequality integral constraint. The initial states are fixed, but the terminal states are free at a fixed T.
   (a) Write the problem statement.
   (b) Define the Hamiltonian and the Lagrangian.
   (c) List the maximum-principle conditions, assuming interior solutions.

10.2 THE DYNAMICS OF A REVENUE-MAXIMIZING FIRM

In the Evans model of the dynamic monopolist (Sec. 2.5), the stated objective is to maximize the total profit. This, of course, is nothing extraordinary, because profit maximization has long been the accepted hypothesis among economists. A well-known model by Baumol takes the view, however, that instead of maximizing profits, modern firms may actually try to maximize sales revenue, subject to a minimum requirement on the rate of return.⁴ The primary basis for this view is the separation of ownership and management in the typical corporate form of the firm, where the managers may, in their own interest, seek to maximize sales, despite the stockholders' objective of maximizing profit instead.

Since the Baumol model is static, its validity has been questioned in the dynamic context. In particular, since profits serve as the vehicle of growth, and growth makes possible greater sales, it would seem that in the dynamic context profit maximization may be a prerequisite for sales maximization. To address the issue of whether the Baumol proposition retains its validity in a dynamic world, Hayne Leland has developed an optimal control model.⁵

⁴ William Baumol, "On the Theory of Oligopoly," Economica, August 1958, pp. 187-198. See also his Business Behavior, Value and Growth, revised edition, Harcourt, Brace & World, New York, 1967. This model is discussed as an example of nonlinear programming in Alpha C. Chiang, Fundamental Methods of Mathematical Economics, 3d ed., McGraw-Hill, New York, 1984.
⁵ Hayne Leland, "The Dynamics of a Revenue Maximizing Firm," International Economic Review, June 1972, pp. 376-385.

The Model

Consider a firm that produces a single good with a neoclassical production function

(10.63)  Q = Q(K, L)

which is linearly homogeneous and strictly quasiconcave. All prices, including the wage rate W and the price of capital goods, are assumed to be constant, with the price of the firm's product normalized to one. Thus the revenue and (gross) profits of the firm are, respectively,

(10.64)  R = Q(K, L)    and    π = R − WL = Q(K, L) − WL

To placate the stockholders, the managers must attempt to attain at least a minimum acceptable rate of return on capital, r₀. This means that the managerial behavior is constrained by the inequality

WL + r₀K − Q(K, L) ≤ 0

It remains to specify the decision on investment and capital accumulation. For simplicity, it is postulated that the firm always reinvests a fixed proportion of its profits. Define a as the fraction of profits reinvested; then

(10.65)  K̇ (= I) = aπ = a[Q(K, L) − WL]

L ) .68) 8-4 .W L .58). Two of these are: aL dn E [O. L ) is convex in the control variable L . MPP.72) -t'. k .T given) Rather than employ a specific production function to obtain a quantitative solution. + pm From (10. The APP. or $ ( k ) .57) and (10. instead of working with the original input variables K and L . 3 ~the .(1+ma +n)#(k) + nr.67) (10.T] .64) K 7i- To determine i dynamics of the system.r o K ] K = a L [ + ( k ). K > O and 8-4 n2O n [ Q ( K . we state the problem of this revenue-maximizing firm as one with an inequality constraint: Maximize v L'Q(K. Accordingly. curve should lie below the APP..65) K(= I ) = CYP = a [ Q ( K . he uses the properties of the production function in (10. L ) . and the constraint set does contain a point such that the strict inequality holds (the rate of return exceeds the r.) curves against k .-Q ( K .( 1 + m a + n ) Q L .L)<O and K(0) = KO K ( T ) free There are only two variables in this model. Drawing together the various considerations. (10.5611 as (10.1. we then have the following first-order conditions for maximi~ing4: (10. Note that the constraint function W L + roK .Q ( K .63) to convert the maximumprinciple conditions into terms of the capital-labor ratio. this leaves L as the sole control variable.1 and 9 . Certain specific levels of the capital-labor ratio k have special roles to play in the model. Then we have and (10. we can write the current-value Lagrangian [see (10.71) (10. K and L.67) is restated as 8 4 --.66) subject to K =~ [ Q ( K . curve has been encountered previously in Figs.) . we rely on the two equations of h motion I In Fig. 10. [ k Osatisfies . we can And the two equations of motion become By augmenting the Hamiltonian with the information in the inequality constraint.W ] m = .( 1 WL+r.W L ] L Qualitative Analysis ( K O . L ) e U p t d t should be imposed since K ( T ) is free.W L ] Finally.W L .631. 
( Q / L ) and the MPP.K-Q(K.294 PART 3 OPTIMAL CONTROL CHAPTEII 10 OPTIMAL COWPfEaL WITH CONSTRAINTS 295 divided by the constant price of capital goods. 9. where the original variables K and L still appear. = Q ( K .r o K ] = 0 h = the profit-maximizing level of k [ h satisfies QL = W or $ ( k ) . aL + m a + n ) [ + ( k ).( m a + n ) W = 0 # The Maximum Principle The current-value Hamiltonian of this problem is By dividing the first inequality in (10. Also.W L .L ) . we plot the APP.k q ( k ) ] .L ) .W = rok-by (10.711.W L ] + n [ Q ( K . The presence of an equation of motion for K identifies K as a state variable. Leland carries out the analysis of the model qualitatively. (Q.L ) ..k + ' ( k ) = W-by (10 -6311 1 k 0 = the k that yields the rate of return r. L ) + m a [ Q ( K .r . curve by the vertical distance . Thus the constraint qualification is satisfied. level).= r. the transversality condition with initial capital K(0) = KO. condition (10.( m a + n ) W = 0 for all t The k variable now occupies the center stage except in (10.68) by L write a new version of the condition: 0 and using (10.

as k -+ k t from the right. The two Curves intersect a t k = k s . k S = the k level that satisfies a d C / a L= m = (1) The numerator is negative. 10. (4) As k approaches zero. Thus. that since # ( k ) decreases with A (see Fig. the case where the rate of return exceeds the minimum acceptable level r. Call the expression in (10. 10. the a 4 / 8 L = 0 curve is positively sloped.296 PART 3 OPTIMAL CONTROL THEORY CHAPTER 10 OPTIMAL CONTROL WITH CONSTRAINTS 297 k0 2 FIGURE 10. Such a slope expression is available by invoking the implicit-function rule.76) is positive. because v ( k ) < 0 . for any k E ( k O . as shown by the curvature of the APP. first. k ) . so that d m / d k is negative and the m = 0 curve is downward sloping. the denominator is also negative. curve in Fig.72). The m = 0 curve is where the multiplier m is stationary. (3) As k approaches & from the left. 10. This will simplify (10. The complementary-slackness condition in (10. &). This can also be seen in Fig. Let k + = the k value that satisfies p = a@( k ) The following observations may be gathered from this result: Then we see. which implies n = 0 . we also get a simpler version of m from (10.tersection of the QL curve and the W line determines the magnitude of k .75)between the two equals signs G ( m . the denominator can take either sign.1).2.( 1 + m a ) @ ( k ) + pm = 0 which can also be graphed in the A space.1 PI . What happens to m a t points off the m = 0 curve? The answer is provided by the derivative . While the position of k0 can only be ybitrarily indicated.75) mln=. Second. when n = 0 Such an intersection exists if and only if k f occurs to the left of &.1 since the QL curve lies below the W line to the left of k .Leland concentrates on the situation of A > kO. Then the slope of the curve is (10. (2) For any k < &. k ) . By concentrating on k > k O .76) . To use the implicit-function m rule again. 
recall that the intersection of the Q_L curve and the W line determines the magnitude of k̂. While the position of k⁰ can only be arbitrarily indicated, it is reasonable to expect k⁰ to be located to the left of k̂ in Fig. 10.1, since the Q_L curve lies below the W line to the left of k̂. Since it is not permissible to choose a k < k⁰, we concentrate on k ≥ k⁰.

The ṁ = 0 curve is where the multiplier m is stationary. Setting ṁ equal to zero in (10.75) yields an equation that can also be graphed in the km space. Let us call the expression in (10.75) between the two equals signs G(m, k). Then the slope of the ṁ = 0 curve is

(10.76) dm/dk = −(∂G/∂k)/(∂G/∂m) = (1 + ma)φ″(k) / [ρ − aφ'(k)] [for ṁ = 0]

While the numerator is negative, the denominator can take either sign. For any k > k⁺, the denominator in (10.76) is positive, so that dm/dk is negative and the ṁ = 0 curve is downward sloping. As k approaches k⁺ from the right, the denominator tends to zero, and the slope of the ṁ = 0 curve tends to infinity; as k grows large, the numerator tends to zero, so dm/dk → 0. These observations explain the general shape of the ṁ = 0 curve in Fig. 10.2.

What happens to m at points off the ṁ = 0 curve? The answer is provided by the derivative ∂ṁ/∂m.
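Both slope formulas above are applications of the implicit-function rule dm/dk = −(∂G/∂k)/(∂G/∂m) along a level curve G(m, k) = 0. The rule can be checked numerically; the function G below is a hypothetical stand-in (not Leland's), chosen only because its level curve can be solved explicitly:

```python
# Numerical check of the implicit-function rule dm/dk = -G_k/G_m
# along a level curve G(m, k) = 0.  G here is an illustrative smooth
# function, not the G(m, k) of (10.75).

def G(m, k):
    return m + m * k + k**2 - 1

def m_on_curve(k):
    # G is linear in m here, so the level curve solves explicitly
    return (1 - k**2) / (1 + k)

def implicit_slope(m, k):
    # dm/dk = -(dG/dk)/(dG/dm); partials computed by hand for this G
    G_k = m + 2 * k
    G_m = 1 + k
    return -G_k / G_m

k0, h = 0.5, 1e-6
m0 = m_on_curve(k0)
numeric = (m_on_curve(k0 + h) - m0) / h   # finite-difference slope
formula = implicit_slope(m0, k0)          # implicit-function-rule slope
print(numeric, formula)
```

Here the level curve simplifies to m = 1 − k, so both the finite-difference slope and the implicit-function-rule value come out to −1, confirming the rule.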

The message here is that as k increases, m will increase. That is, we have ṁ < 0 to the left of the ṁ = 0 curve, and ṁ > 0 to its right. This is why we have made the arrowheads point southward to the left of the ṁ = 0 curve, and northward to the right. From Fig. 10.2, it should be clear that the stationary point formed by the intersection of the two curves may not occur at k̂, the profit-maximizing level of k. The only circumstance under which the firm would settle at the profit-maximizing level k̂ is when k⁺ exceeds k̂. In fact, Leland shows by further analysis that there is an eventual tendency for the revenue-maximizing firm to gravitate toward k⁰, where the rate of return is at the minimum acceptable level r₀. Thus, placing the revenue-maximizing firm in the dynamic context does not force it to abandon its revenue orientation in favor of the profit orientation.

The reader may have noticed that condition (10.71) has played no role in the preceding analysis. This is because the analysis has been couched in terms of the capital-labor ratio k, whereas (10.71) involves K and L as well as k. A simple transformation of (10.71) will reveal what function it can perform in the model. Since k = K/L, we have K = kL. Equating the derivative of this expression with the K̇ expression in (10.71) and rearranging, we find that once we have found an optimal time path for k, this equation can yield a corresponding optimal path for the control variable L.

10.3 STATE-SPACE CONSTRAINTS

The second category of constraints consists of those in which no control variables appear. What such constraints do is to place restrictions on the state space, and demarcate the permissible area of movement for the variable y. A simple example of this type of constraint, relevant to many economic problems, is the nonnegativity restriction y(t) ≥ 0.

It may happen, by coincidence, that when we ignore the state-space constraint and solve the given problem as an unconstrained one, the optimal path y*(t) lies entirely in the permissible area. Should that solution turn out to satisfy the constraint,
then the problem would be solved. Although we cannot in general expect the unconstrained solution to work, it is not a bad idea to try it anyway; even if it fails, useful clues will usually emerge regarding the nature of the true solution. With a meaningful constraint, however, we would expect the unconstrained optimal path to violate the constraint, such as the broken curve in Fig. 10.3a, which fails the nonnegativity restriction. In that event, the true optimal solution, exemplified by the solid curve in Fig. 10.3b, can contain one or more segments lying on the constraint boundary itself. Sometimes the true optimal path may have some segment(s) in common with the unconstrained solution path, but the two paths can also be altogether different.

Dealing with State-Space Constraints

Let the problem be

(10.77) Maximize ∫₀ᵀ F(t, y, u) dt
subject to ẏ = f(t, y, u)
h(t, y) ≤ c
boundary conditions

More generally, the constraint may involve several states; in either case, the control variable u is absent in the constraint function. Consequently, there may be one or more "constrained intervals" (or "blocked intervals") within [0, T] when the constraint is binding (effective), during which the optimal state path must move along the constraint boundary.
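The "try the unconstrained solution first" advice can be mechanized. The sketch below (illustrative code, not from the text) integrates a candidate control forward and reports where, if anywhere, the resulting path first violates a state-space constraint h(t, y) ≤ c; the constraint and guess used here anticipate the worked example of this section (h = y − t, c = 1, unconstrained guess u = 2):

```python
# Integrate y' = u(t) under a candidate control and check the
# state-space constraint h(t, y) <= c on a grid.

def simulate(u, y0, t_grid):
    """Euler-integrate y' = u(t) on t_grid, starting from y0."""
    y, path = y0, [y0]
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        y += u(t_grid[i - 1]) * dt
        path.append(y)
    return path

def first_violation(path, t_grid, h, c):
    """Return the first grid time at which h(t, y) > c, else None."""
    for t, y in zip(t_grid, path):
        if h(t, y) > c + 1e-9:
            return t
    return None

t_grid = [i * 0.01 for i in range(301)]          # t in [0, 3]
path = simulate(lambda t: 2.0, 0.0, t_grid)      # unconstrained guess u = 2
t_viol = first_violation(path, t_grid, lambda t, y: y - t, 1.0)
print(t_viol)
```

For this guess, y = 2t, so h(t, y) = t first exceeds c = 1 just after t = 1: the unconstrained path leaves the permissible region, and a boundary segment must be introduced there.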

One way to proceed is to adjoin the constraint directly. Writing the Lagrangian as

ℒ = F(t, y, u) + λf(t, y, u) + θ[c − h(t, y)]

we obtain the conditions

(10.79) ∂ℒ/∂u = 0 for all t ∈ [0, T] (assuming an interior solution)
λ̇ = −∂ℒ/∂y
ẏ = f(t, y, u)
θ ≥ 0, θ[c − h(t, y)] = 0

plus a transversality condition if needed. Why cannot these conditions be used as we did before? For one thing, in previously discussed constrained problems, the solution is predicated upon the continuity of the y and λ variables, so that only the control variables are allowed to jump (e.g., bang-bang). But here, with pure state constraints, the costate variable λ can also experience jumps at the junction points where the constraint h(t, y) ≤ c turns from inactive (nonbinding) to active (binding) status, or vice versa.⁶ Specifically, if τ is a junction point between an unconstrained interval and a constrained interval (or the other way around), and if we denote by λ⁻(τ) and λ⁺(τ) the values of λ just before and just after the jump, then a jump condition ties the two values together. Note that it is possible for the jump to be zero, which means that λ need not be discontinuous at a junction point; a genuine discontinuity can occur when the constraint function h(t, y) has a sharp point at τ. Since the magnitude of the jump is indeterminate, however, the jump condition can only help in ascertaining the direction of the jump. At any rate, we need to watch out for junction points, and make sure not to step into the forbidden part of the state space.

An Alternative Approach

While it is possible to proceed on the basis of the conditions in (10.79), it would be easier to take an alternative approach, in which the change in the status of the constraint h(t, y) ≤ c at junction points is taken into account explicitly.⁷ The major consideration of this latter approach is that, since h(t, y) is not allowed to exceed c, then whenever h(t, y) = c (the constraint becomes binding) we must forbid h(t, y) to increase. This can be accomplished simply by imposing the condition

(10.81) dh/dt = h_t + h_y f(t, y, u) ≤ 0 whenever h(t, y) = c

Note that the derivative dh/dt (a total derivative), unlike h itself, contains the control variable u as well as t and y. Thus the new constraint fits nicely into the g(t, y, u) ≤ c category discussed in Sec. 10.1. Note also that (10.81) is not meant for all t ∈ [0, T]; it needs to be put in force only when h(t, y) = c. With this new constraint, the problem statement now takes the form

(10.82) Maximize ∫₀ᵀ F(t, y, u) dt
subject to ẏ = f(t, y, u)
h_t + h_y f(t, y, u) ≤ 0 whenever h(t, y) = c
boundary conditions

In order to make possible a comparison of the new maximum-principle conditions with those in (10.79), we shall adopt distinct multiplier symbols λ and θ here. Let the Lagrangian be written as

ℒ' = F(t, y, u) + λf(t, y, u) + θ[−h_t − h_y f(t, y, u)]

Then, as part of the maximum-principle conditions, we require (assuming that the constraint qualification is satisfied) that ℒ' be maximized with respect to u.

⁶For a more detailed discussion of jump conditions, see Atle Seierstad and Knut Sydsæter, Optimal Control Theory with Economic Applications, Elsevier, New York, 1987, Chap. 5, especially pp. 317-319.
⁷This approach is based on Magnus R. Hestenes, Calculus of Variations and Optimal Control Theory, Wiley, New York, 1966, Chap. 8.
For one thing. Theorem 2. with pure state constraints.7 The major consideration of this latter approach is that since h(t. y) 5 c at junction points is taken into account b Then.j. Calculus of Varrat~onsand Opt~malControl Theory. but needs to be put in force only when h(t.300 PART 3. This latter situation can occur when the constraint function h(t. y) 5 c turns from inactive (nonbinding) to active (binding) status.f(t. the new constraint dh/dt explicitly. u ) and = h.y.81) h ( t .79). Wiley. Optimal Control Theoly with Economic Applications. 317-319. and make sure not to step into the forbidden part of the state space. Elsevier. But here. we require (assuming that the' constraint qualification is satisfied) that a more detailed discussion of jump conditions. or vice versa. y. u ) Since the value of b is indeterminate. Y ) = c. This can be accomplished simply by imposing the condition whenever h ( t . 1987. when h is discontinuous a t 7. y. bang-bang). especially pp. the costate variable A can also experience jumps at the junction points where the constraint h(t. this condition can only help in ascertaining the direction of the jump. the solution is predicated upon the continuity of the y and A variables.y) boundary conditions c In order to make possible a comparison of the new maximum-principle conditions with those in (10.

the maximum principle for the present approach also places a restriction on the way the θ multiplier changes over time: at points where θ is differentiable, θ̇ must be nonpositive whenever h(t, y) = c. This restriction will be explained later. In addition, we append the complementary-slackness condition

θ[c − h(t, y)] = 0

Collecting all the results together, we have (assuming an interior solution) the following set of conditions:

(10.84)
∂ℒ'/∂u = F_u + λf_u − θh_y f_u = 0 for all t ∈ [0, T]
λ̇ = −∂ℒ'/∂y
ẏ = f(t, y, u)
θ ≥ 0, θ[c − h(t, y)] = 0
θ̇ ≤ 0 [= 0 when h(t, y) < c]

plus a transversality condition if needed. Note that, under the new approach, we can directly apply the conditions in (10.84) without first transforming problem (10.77) into the form of (10.82). A distinguishing feature of this approach is that the f(t, y, u) function enters into the Lagrangian ℒ' twice, with two different multipliers: λ (a costate variable) and θ (a Lagrange multiplier for a constrained problem). Accordingly, the partial derivative f_u also appears twice in the ∂ℒ'/∂u = 0 condition, the first line in (10.84): once with λ and once with θ.
While the set of conditions on θ seems to serve the purpose of restating the constraint, it does not make clear that this set applies only when h(t, y) = c. In the normal interpretation of complementary slackness, h(t, y) < c (constraint not binding) would mean θ = 0, and we intend θ > 0 to accompany h(t, y) = c (constraint binding). But here, h(t, y) = c is consistent with θ = 0 as well as θ > 0. It is to remedy this that the θ̇ ≤ 0 requirement is added; it constitutes a stronger form of complementary slackness, prescribing how the θ multiplier affects the system and how θ itself must change over time. In the process of working through the example below, it will become clear why the θ̇ ≤ 0 condition is needed. When the state-space constraint is nonbinding, θ takes a zero value, which causes the last term in ℒ' to drop out and thereby nullifies the conditions regarding ∂ℒ'/∂θ; then (10.84) reduces to the regular maximum-principle conditions.

While the approach summarized in (10.84) is easier to use than the approach in (10.79), the two approaches are in fact equivalent. The reader is asked to demonstrate their equivalence by following the set of steps outlined in Exercise 10.3.1. In another exercise problem, the reader is asked to write out the maximum-principle conditions for the special case of the nonnegativity constraint y(t) ≥ 0.

Here, we have assumed that the maximization of the Lagrangian with respect to u yields an interior solution. Boundary solutions can also occur, however. If there is a closed control region for u, then the ∂ℒ'/∂u = 0 condition should be replaced by the Kuhn-Tucker conditions; the same is true if the control variable is itself subject to a nonnegativity restriction u(t) ≥ 0.

An Example

Let us consider the problem

Maximize ∫₀³ (4 − t)u dt
subject to ẏ = u
y − t ≤ 1
y(0) = 0, y(3) = 3
u(t) ∈ [0, 2]

which contains a state-space constraint. If we disregard this constraint and form the Hamiltonian

H = (4 − t)u + λu

then, since λ̇ = −∂H/∂y = 0 implies λ = constant, we see that H is linear in u. From the integrand function we gather that, in order to maximize the objective functional, we should let u be as high as possible, and adhere to that u value for as long as we could, until the boundary constraint forces us to change course. Given the closed control region [0, 2], a reasonable conjecture about the unconstrained solution is therefore u(t) = 2.

The Lagrangian is, by (10.84),

ℒ = (4 − t)u + λu − θ(u − 1) [since ḣ = ẏ − 1 = u − 1]

which is linear in u, so we may again expect a corner solution in the control. The maximum principle requires, first, that ℒ be maximized with respect to u, with

λ̇ = −∂ℒ/∂y = 0
ẏ = u
θ ≥ 0, θ̇ ≤ 0 [= 0 when the constraint is nonbinding]

Since the constraint function y − t contains no u, while its derivative u − 1 is linear in u, the constraint qualification is satisfied.

Taking the clue from the earlier-discussed unconstrained solution, we initially adopt for the first time interval [0, 1) the control u* = 2, chosen to ensure the fastest rise in y. This would imply ẏ = u = 2 and y(t) = 2t + k₁. After using the initial condition y(0) = 0 to definitize the arbitrary constant k₁, we obtain the simple linear path

y(t) = 2t

steadily rising from the point of origin. This path is the one that allows the fastest rise in y, because it corresponds to the highest value of u. But, as Fig. 10.4 shows, this path
can stay below the constraint boundary y = t + 1 only up to the point (1, 2), at which time the constraint becomes binding. Then we have to change course, and we turn to (10.84) for guidance as to what new course to take. As soon as we hit the constraint boundary, θ becomes positive, and the ∂ℒ/∂θ condition tells us to set u = 1 [by complementary slackness]. This means that the equation of motion becomes ẏ = 1, which yields the path y(t) = t + k₂. Reasoning from the continuity of y, this second segment must start from the point (1, 2) where the first segment ends. This fact enables us to definitize the path to

y(t) = t + 1 [same as the constraint boundary]

Thus the y path now begins to crawl along the boundary. The constraint boundary becomes the new best path, because it corresponds to the highest admissible value of u, and hence the highest attainable value of the objective. But this path obviously cannot take us to the destination point (3, 3), so we need at least one more new segment in order to complete the journey. No transversality condition is needed, because the endpoints are fixed.
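The full three-segment construction (u* = 2, then 1, then 0) can be cross-checked by discretizing the problem. The sketch below is illustrative, not part of the text; it confirms that the constructed control is feasible, reaches y(3) = 3, and yields a larger objective value than another feasible control:

```python
# Discretized evaluation of the objective integral of (4 - t)u dt for
# the example: y' = u, u in [0, 2], y - t <= 1, y(0) = 0, y(3) = 3.

def objective(u_of_t, n=3000, T=3.0):
    dt = T / n
    total, y = 0.0, 0.0
    for i in range(n):
        t = i * dt
        u = u_of_t(t)
        assert 0.0 <= u <= 2.0          # control region respected
        assert y - t <= 1.0 + 1e-9      # state-space constraint respected
        total += (4.0 - t) * u * dt
        y += u * dt
    return total, y

def u_star(t):                          # the three-segment construction
    return 2.0 if t < 1 else (1.0 if t < 2 else 0.0)

def u_alt(t):                           # a feasible alternative: steady u = 1
    return 1.0

best, y_best = objective(u_star)
alt, y_alt = objective(u_alt)
print(round(best, 2), round(alt, 2))    # about 9.5 vs 7.5
```

The constructed path gives approximately 9.5 (= ∫₀¹ 2(4 − t) dt + ∫₁² (4 − t) dt), against 7.5 for the steady control u = 1, which also reaches y(3) = 3.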

Since u (= ẏ) cannot be negative, it is not feasible to overshoot the y = 3 level and then back down. So the second segment should aim at y = 3 as its target, terminating at the point (2, 3). It then follows that the third segment is a horizontal straight line with slope u = 0. By piecing together (10.86), (10.87), and (10.88), we finally obtain the complete picture of the optimal control and state paths.

10.4 ECONOMIC EXAMPLES OF STATE-SPACE CONSTRAINTS

In this section, we present two economic examples in which a nonnegativity state-space constraint appears.

Inventory and Production

The first example is concerned with a firm's decision regarding inventory and production.¹⁰ To begin with, let the demand for the firm's product be given generally as D(t) > 0, and assume that the terminal time T is fixed. To meet this demand, the firm can either draw down its inventory, X, or produce the demanded quantity as current output, Q, or use a combination thereof. The two variables are related to each other in that inventory accumulation (decumulation) occurs whenever output exceeds (falls short of) the demanded quantity. This suggests that Q can be taken as a control variable that drives the state variable X. Production cost per unit is c, and storage cost for inventory is s per unit, both assumed to be constant over time. The firm's objective is to minimize the total cost

∫₀ᵀ (cQ + sX) dt

over the given period of time [0, T]. It is obvious that output cannot be negative, and the same is true of inventory.

¹⁰This problem is discussed in Greg Knowles, An Introduction to Applied Optimal Control, Academic Press, New York, 1981, pp. 135-137.
In the present example, the first segment of the optimal path coincides with that of the unconstrained problem, until the boundary constraint forces us to change course. But this is not always the case. In problems where nonlinearity exists, the true optimal path may have no shared segment with the unconstrained path; this type of outcome is illustrated in Fig. 10.3b.

EXERCISE 10.3

1 Establish the equivalence of (10.79) and (10.84) by the following procedure:
(a) Equate ∂ℒ/∂u and ∂ℒ'/∂u (both = 0), and solve for λ.
(b) Differentiate the resulting λ expression totally with respect to t, to get an expression for λ̇.
(c) Expand the expression obtained in part (b) by using the λ̇ expression in (10.79).
(d) Substitute the result from part (a) into the λ̇ expression in (10.84), then simplify.
(e) Equate the two λ̇ expressions in parts (c) and (d), and deduce that θ̇ ≤ 0.
2 (a) Write out the maximum-principle conditions under each of the two approaches when the state-space constraint is y(t) ≥ 0.
(b) In a problem with y(t) ≥ 0, is there need for a transversality condition? If so, what kind of transversality condition is appropriate?

Thus, assuming a given initial inventory X₀, and noting that minimization of total cost has been translated into maximization of the negative of total cost, we can express the problem as

(10.89) Maximize ∫₀ᵀ −(cQ + sX) dt
subject to Ẋ = Q − D(t)
X(0) = X₀, X(T) ≥ 0 free (X₀, T given)
Q(t) ≥ 0, X(t) ≥ 0, t ∈ [0, T]

Note that the nonnegativity state-space constraint automatically necessitates the truncation of the vertical terminal line.

Before applying conditions (10.84), we first verify that the constraint qualification is satisfied. This is indeed so, because the constraint function −X contains no Q, while its total derivative, −Ẋ = −(Q − D), is linear in Q. From the constraint −X ≤ 0, we have

h = −X, c = 0, and ḣ = −Ẋ = −(Q − D)

Thus the Lagrangian is

ℒ = −(cQ + sX) + λ[Q − D(t)] + θ(Q − D)

Acting on the clue from the unconstrained view of the problem, we first choose Q* = 0. Observe, however, that even in the case of λ = c, we can still select Q* = 0, as in the case where λ < c. With Q* = 0, the state equation of motion simplifies to Ẋ = −D(t). This result indicates that the firm should produce nothing and meet the demand exclusively by inventory decumulation. But such a policy can continue only up to the time t = τ, when the given inventory X₀ is exhausted. The value of τ, the junction point, can be found from the equation

∫₀^τ D(t) dt = X₀ [exhaustion of inventory]

If we assume that τ < T, then at t = τ we need to revise the zero-production policy. Having no inventory left, the firm must start producing; it must reorient the control.
The unconstrained view of this problem shows that the Hamiltonian

H = −(cQ + sX) + λ[Q − D(t)]

which is to be maximized with respect to Q, is linear in the control variable Q. The rule for the choice of Q is therefore

Q* = 0 if λ < c
Q* is indeterminate if λ = c [corner solution]
Q* is unbounded if λ > c

The third possibility (unbounded Q) is not feasible, whereas the second possibility (indeterminate Q) is not helpful. The choice Q* = 0 makes a lot of sense, since Q enters into the objective functional negatively, so that choosing the minimum admissible value of Q serves the purpose of the problem best. With Q* = 0, we readily find that the inventory existing at any time will be

X(t) = X₀ − ∫₀ᵗ D(s) ds

and the firm's inventory is bound to be exhausted sooner or later. The rest of the story is quite simple. When the firm runs into the constraint boundary X = 0, the activation of the constraint makes the θ multiplier positive, so that ∂ℒ/∂θ = 0. The conditions in (10.84) then require

Q = D for t ∈ [τ, T]
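For a constant-demand special case, the zero-production rule and its exhaustion time τ = X₀/D can be verified against a naive always-produce policy by direct discretization. This is an illustrative sketch; the parameter values are arbitrary choices, not from the text:

```python
# Discrete check of the zero-production-then-track-demand rule for the
# inventory model: minimize integral of (c*Q + s*X) dt, X' = Q - D, X >= 0.

c, s, D, X0, T = 2.0, 1.0, 1.0, 2.0, 5.0
tau = X0 / D            # exhaustion time: integral of D over [0, tau] = X0

def cost(policy, n=5000):
    dt = T / n
    X, total = X0, 0.0
    for i in range(n):
        t = i * dt
        Q = policy(t, X)
        total += (c * Q + s * X) * dt
        X = max(X + (Q - D) * dt, 0.0)   # X >= 0 enforced on the grid
    return total

optimal = cost(lambda t, X: 0.0 if t < tau else D)   # the rule above
naive = cost(lambda t, X: D)             # produce to demand from t = 0
print(round(optimal, 2), round(naive, 2))            # about 8.0 vs 20.0
```

With c = 2, s = 1, D = 1, X₀ = 2, T = 5, the rule costs sX₀²/(2D) + cD(T − τ) = 2 + 6 = 8, versus cDT + sX₀T = 20 for producing to demand from the start: drawing inventory down first avoids paying both production and storage costs simultaneously.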

Under this new rule, the firm should meet the entire demand from current production, meaning that no change in inventory should be contemplated. Since X(τ) = 0, it follows that inventory should remain at the zero level from the point τ on. The complete optimal control and state paths are therefore

Q*(t) = 0 and X*(t) = X₀ − ∫₀ᵗ D(s) ds for t ∈ [0, τ)
Q*(t) = D(t) and X*(t) = 0 for t ∈ [τ, T]

Capital Accumulation under Financial Constraint

William Schworm has analyzed a firm that has no borrowing facilities, and whose investment must be financed wholly by its own retained earnings.¹¹ This model can serve as another illustration of a state-space constrained problem. The firm is assumed to have gross profit π(t, K) and investment expenditures I(t). It cannot sell used capital, and it cannot borrow funds; thus its cash flow φ(t) is dependent only on the profit proceeds and the investment outlay:

φ(t) = π(t, K) − I(t)

Assuming no depreciation, we have K̇(t) = I(t), with K(0) = K₀ and I(t) ≥ 0. It is the goal of the firm to maximize its present value, the discounted stream of cash flows [π(t, K) − I(t)]e^(−ρt), where the symbol ρ denotes the rate of return the stockholders can earn on alternative investments as well as the rate of return the firm can earn on its retained earnings.

¹¹William E. Schworm, "Financial Constraints and Capital Accumulation," International Economic Review, October 1980, pp. 643-660.
Actually, Schworm first transforms the model into one where R(t) is a constrained control (rather than state) variable. But we shall treat it as one with a state-space constraint and derive some of Schworm's main results by using the maximum-principle conditions in (10.84).

Starting from a given initial level, the firm's retained earnings R(t) can be augmented only by any returns received (at the rate ρ) on R, and by any positive cash flow:

Ṙ(t) = ρR + π(t, K) − I(t), R(0) = R₀

These two differential equations suggest that R and K can serve as state variables of the model, whereas I can play the role of the control variable. Both state variables R and K must, of course, be nonnegative. But whereas the assumption I(t) ≥ 0 automatically takes care of the nonnegativity of K, the nonnegativity of retained earnings, R(t) ≥ 0, needs to be explicitly prescribed in the problem. That is why the problem features a state-space constraint. Collecting all the elements mentioned above, we can state the problem as follows:

Maximize ∫₀^∞ [π(t, K) − I(t)]e^(−ρt) dt
subject to Ṙ = ρR + π(t, K) − I
K̇ = I
R(0) = R₀, K(0) = K₀
R(t) ≥ 0, I(t) ≥ 0

Being given in the general-function form, this model can only be analyzed qualitatively. We can verify that the constraint qualification is satisfied, because the derivative of the constraint function, −Ṙ = −[ρR + π(t, K) − I], is linear in the control variable I. Hence, we have the Lagrangian

ℒ = [π(t, K) − I]e^(−ρt) + λ_R[ρR + π(t, K) − I] + λ_K I + θ[ρR + π(t, K) − I]

where λ_R and λ_K are the costate variables for R and K, respectively.
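The two investment regimes of this model (invest all current profit while retained earnings are zero; hold capital where π_K = ρ once that level is reached) can be illustrated with a toy discretization. The functional form π = A·ln K and all parameter values below are hypothetical; Schworm's own analysis is qualitative and more general:

```python
import math

# Toy discretization of financially constrained capital accumulation.
# pi(K) = A*log(K) is a hypothetical concave profit function with
# pi_K = A/K, so the myopic rule pi_K = rho pins down K* = A/rho.

A, rho, dt, T = 1.0, 0.1, 0.01, 200.0
K_star = A / rho                  # pi_K = A/K = rho  =>  K* = 10

K, R = 1.5, 0.0                   # capital-poor start, no retained earnings
for _ in range(int(T / dt)):
    profit = A * math.log(K)
    # Constrained regime: with R = 0, investment is limited to current
    # profit (I = pi); once K reaches K*, accumulation stops.
    I = max(profit, 0.0) if K < K_star else 0.0
    # After K* is reached, cash flow piles up in R (the model would pay
    # it out as dividends); the toy keeps it only to show R turns positive.
    R += (rho * R + profit - I) * dt
    K += I * dt                   # no depreciation: K' = I

print(round(K, 2))   # K settles at (approximately) the myopic level 10
```

While K < K*, investing the whole profit keeps Ṙ = ρR, so R stays at zero, consistent with the binding financial constraint; once K reaches K*, accumulation stops and the cash flow turns R positive.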

by equating this A. even though such specific functions may not be totally satisfactory from an economic point of view. It is for this reason that simple specific functions are often invoked in economic models to render the solution more tractable. =p [investment rule when R > 01 (10. the firm will be able to get as close as possible to the unconstrained optimal capital path. 7 ) (el X * [ T . then it follows that R = 0 and R = 0 in that interval. plus transversality conditions Note that. Rules (10. we hope-most of the basic topics in the calculus of variations and optimal control theory. the firm .95). the reader is referred to the original paper. t.89). With I positive. therefore. It is of interest that. R = 0 (financial constraint binding) in some time interval (t.90). We see. that when the R constraint is binding. we get ( d l X*[O. or EXERCISE 10. the firm should simply follow the usual myopic rule that the marginal profitability of capital be equal to the market rate of return. 2 In problem (10.312 conditions in (10. complementary slackness mandates that a J 1 / a I = 0. with the firm switching from one rule to the other as the status of the R constraint changes. the demand is generally expressed as Dit).91) will reduce to ) (10. K) [investment rule when R =R = 0 1 This means that in a constrained interval. the ~ ( t equation in (10. when retained earnings are positive (financial constraint nonbinding).92).98) Z(t) = r(t. the solution and analysis procedure may be quite lengthy and tedious. expression with (10. the firm should invest its current profit. From (10. By so doing.96) that whenever R > 0. We can thus conclude from (10. though separate. we find that While by no means exhaustive. If. on the other hand. the two sides of (10.5 LIMITATIONS OF DYNAMIC OPTIMIZATION Moreover.97) T.TI Differentiating this equation with respect to t. Schworm also derives other analytical conclusions. by virtue of the nonnegativity restriction on the control variable I. 
Rules (10.97) and (10.98), though separate, are to be used in combination, with the firm switching from one rule to the other as the status of the R constraint changes. By so doing, the firm will be able to get as close as possible to the unconstrained optimal capital path. Schworm also derives other analytical conclusions; for those, the reader is referred to the original paper.

EXERCISE 10.4

1 Draw appropriate graphs to depict the Q* and X* paths found for the inventory problem.
2 In problem (10.89), the demand is generally expressed as D(t). Would different specifications of D(t) affect the following?
(a) τ
(b) Q*[0, τ)
(c) Q*[τ, T]
(d) X*[0, τ)
(e) X*[τ, T]

10.5 LIMITATIONS OF DYNAMIC OPTIMIZATION

While by no means exhaustive, the present volume has attempted to introduce, in a readable way, we hope, most of the basic topics in the calculus of variations and optimal control theory. The reader has undoubtedly noticed, however, that even in fairly simple problems, the solution and analysis procedure may be quite lengthy and tedious. It is for this reason that simple specific functions are often invoked in economic models to render the solution more tractable, even though such specific functions may not be totally satisfactory from an economic point of view.

y(t). y'(t)l 2 (b)and(d) = 1 EXERCISE 1. ANSWERS TO SELECTED EXERCISE PROBLEMS EXERCISE 1. remain constant throughout the planning period. V * ( F ) = 10. But do please bear in mind what they can and cannot do for you. and phase diagrams to your heart's content. With the advent of powerful computer programs for solving mathematical problems. where the parameters are supposed to remain a t the same levels from here to eternity.2 3 The terminal curve should be upward sloping. Owing to the complexity of multiple differential equations. Of course.a ) 4 dZ/dx = 2e2" 6 y*(t) = A t 3 +1 8 y*(t) = e' + gt + e-' + i t e t . V * ( K ) = 2. V * ( A )= 23. this limitation may be largely overcome in due time.1 2 dZ/dx = 4x3(b . But the constancy assumption may have to be retained in the new formulation. optimal path DZ is DGZZ. we really ought not to end on a negative note. So by all means go ahead and have fun playing with Euler equations. including the discount rate. EXERCISE 1. Thus we have here a real dilemma.4 1 V * ( D ) = 8. Although there do exist in real life some economic parameters that are remarkably stable over time. IZ V * ( J )= 1 JZ . in practical applications involving specific parameter values. though. certainly not all of them are. optimal path EZ is EHJZ. Hamiltonians. optimal path FZ is FHJZ. After the reader has spent so much time and effort to master the various facets of the dynamic-optimization tool. the simple way out of the dilemma is to reformulate the problem whenever there occur significant changes in parameter values.314 PART s OPTIMAL CONTROL THEORY It is for the same reason that writers often assume that the parameters in the problem.3 1 F [ t . The constancy assumption becomes especially problematic in infinitehorizon problems. economic models also tend to limit the number of variables to be considered. V*(E ) = 8. 4 V * ( I ) = 3. ACFHKZ EXERCISE 2. transversality conditions. 
Yet the cost, in terms of analytical complexity, of relaxing this and other simplifying assumptions can be extremely high.

EXERCISE 2.2
1 (a) y*(t) = 4
3 (a) y*(t) = t + 4

EXERCISE 2.3
1 y*(t) = (t² − 3t + 1) (up to the garbled coefficient in the original)
3 (a) No. (b) y*(t) = ½t²

EXERCISE 2.4
1 Problem 6: F_y'y' > 0, satisfies the Legendre condition for a minimum. Problem 9: F_y'y' satisfies the Legendre condition for a minimum.
3 F_y'y' = 4 for all t, satisfies the Legendre condition for a minimum; F_y'y' = 0 for all t, satisfies the Legendre condition for a maximum as well as for a minimum.

EXERCISE 2.5
2 (a) Not different.

EXERCISE 3.2
P = α + 2aαb + βb + … (remainder of the expression garbled in the original)
The lowest point of the price curve occurs at P = P₀ > 0; P = static monopoly price, 2b(1 + ab). Only case (c) can possibly (but not necessarily) involve a price reversal in the time interval [0, T]. The other cases have rising price paths.

EXERCISE 3.3
1 y*(t) = t

EXERCISE 3.4
1 (a) π'(L*) = 2ρ√(mn) > 0, such that the π(L) curve has a positive slope; thus L* is located on the positively sloped segment, where the slope of the π(L) curve is 2ρ√(mn).
(b) π'(L) = 0 when L = m/n. (c) L* is located to the left of m/n.
2 (b) An increase in ρ (or b, or k) pushes the location of L* to the left.

EXERCISE 4.2
1 (a) There is no y term in F, and F is strictly convex in y'. (c) Sufficient for a unique minimum.
3 (a) The determinantal test (4.12) is satisfied for positive semidefiniteness because |D₁| = 8 and |D₂| ≥ 0. (b) The determinantal test (4.9) fails because |D₁| = 0 and |D₂| = 0. (c) Sufficient for a minimum.

EXERCISE 4.3
3 (b) λ₁ = … (garbled); the characteristic roots are r₁ = 10 and r₂ = 0.

EXERCISE 5.2
1 (a) Only one condition is needed, λ = 0 for all t, to determine T*.
3 (a) λ = −(T − t)², λ = 0 for all t (except t < T*).

EXERCISE 5.3
1 (a) c*(t) = B·e^(rt/(b+1)) + (c*)^(−b)/b; c* = c*₀·e^(rt/(b+1))
(b) K*'(t) = c = (c*)^(−(b+1)) − c*·b [since B = c]
6 (b) Maximize ∫₀^∞ U(q, q')e^(−ρt) dt subject to ∫₀^∞ q' dt

EXERCISE (continued, Chap. 5-6)
2 (a) No, (5.40) is valid with or without saturation. (b) Equation (5.41) is unaffected, but (5.42) should be changed to Q'(K) = 0, because now u can never be zero.
3 The new equilibrium is still a saddle point, but it involves a positive (rather than zero) marginal utility in equilibrium.
5 (a) Only the streamlines lying above the stable branch remain relevant; the others cannot take us to the new level of K_T. (c) Yes.
λ*(t) = 3e^(4−t) − 3; u*(t) = 2; y*(t) = 7eᵗ − 2; u*(t) = λ*(t) = A₁e^(√2·t) + A₂e^(−√2·t); y*(t) = (√2 − 1)A₁e^(√2·t) − (√2 + 1)A₂e^(−√2·t), where A₂ = (…)e^(−2√2) (coefficient garbled in the original).
5 The general solution is the same as in Example 2.
1 F = −U(C) + D(L) + λ[−C + Q(K, L) − K']; −U'(C) − λ = 0, so λ = −U'(C) = −ρ (garbled in the original).
2 (a) Three variables and two constraints. (b) F = (T − C)e^(−ρt) + λ₁[−T + αK − βK²] + λ₂[−C + (aK' + bK')]; for T: e^(−ρt) − λ₁ = 0, so λ₁ = e^(−ρt); for C: −e^(−ρt) − λ₂ = 0, so λ₂ = −e^(−ρt); for K: λ₁(α − 2βK) − d/dt[λ₂(2aK' + b)] = 0, i.e., λ₁(α − 2βK) − (dλ₂/dt)(2aK' + b) − λ₂(2aK″) = 0.

EXERCISE (Chap. 7)
1 (a) The cyclical pattern will be eliminated. λ = −(T − t)² − k·b·λ·e^(λ(T−t)) ≥ 0 for all t (garbled in the original).
1 (a) Yes. (b) No; E* is characterized by "marginal utility = marginal disutility."
3 (a) H = U[C(E)P(E)]e^(−ρt) − λE
2 Similarly to (6.10), λ may now be ≥ 0; a complementary-slackness condition will come into play. (b) The λ path is still a constant (zero) path. (c) E*(t) = E* as in (7.81). (d) The condition in part (a) becomes rε = g − Eρ; ε ≥ 1 for positive MR.
3 It is not possible to have an interior solution for the control variable A. [Hint: An interior solution A* would mean that P·λ_A + λ_P = 0, which would imply λ_A = constant, and U_P·e^(−ρt) = δλ_P from the costate equation of motion. The existence of a tolerably low level of P would imply λ_P = 0 for all t, which would lead to a contradiction.]

EXERCISE (Chap. 8-9)
1 (a) φ(k) = Ak^α; U'(c) = c^(−(1+b)). (c) H_c = U − c^(−b) + m[Ak^α − c − (n + δ)k]
1 The Mangasarian conditions are satisfied, based on F being strictly concave in u (F_uu = −2), f being linear in (y, u), and (8.26) being irrelevant. The Arrow condition is satisfied because H⁰ = λy + ¼λ², which is linear in y for given λ.
4 The Mangasarian and the Arrow conditions are both satisfied. (Here, H⁰ is independent of y, and hence linear in y for given λ.)
6 Both the Mangasarian and Arrow conditions are satisfied.
3 (b) E will give way to a new steady state, E', to the right of E on the k̇ = 0 curve, with a higher k. (c) No.
2 For a positive y₀, the y* value at first increases at a decreasing rate, to reach a peak when arc CD crosses the y axis. It then decreases at a decreasing rate until τ is reached, and then decreases at an increasing rate, to reach zero at t = T.
3 (a) The initial point cannot be on arc AO or BO; there must be a switch. (b) The initial point lies on arc AO; no switch. (c) The initial point lies on arc BO; no switch.
5 (a) The switching point, F, has to satisfy simultaneously y = ½z² (z > 0) [equation for arc BO] and the equation for the parabola containing arc EF (k < 0). The solution yields y = ½k and z = √k (the negative root is inadmissible). (b) τ = −z₀. (c) The optimal total time = 2√k − z₀.
1 (b) To write it as Y = K^α[A(t)L]^β will conform to the format of (9.48).
5 The marginal productivity of human capital in research activities could not grow in proportion to A.
3 (a) Maximize ∫₀ᵀ (1 − s)Ke^(−rt) dt subject to K̇ = A(sK)^α (A > 0), K(0) = K₀, K(T) free (K₀, T given), 0 ≤ s ≤ 1.

EXERCISE (Chap. 10)
1 An unbounded solution can occur when λ* > 0.
1 (a) λ̇ + θh_y. (b) λ̇ = … + θ̇h_y + θ(h_ty + h_yy·f). (c) λ̇ = … + θ̇h_y + θh_y + θ(h_ty + h_yy·f) (fragments garbled in the original). (e) (θ̇ + δ)h_y = 0; −θ̇ ≥ 0 [since θ̇ ≤ 0].
2 (a) Higher (lower) demand would lower (raise) τ. But it is also conceivable that a mere rearrangement of the time profile of D(t) may leave τ unchanged. (c) Yes, because Q*[τ, T] is identical with D(t). (e) X*[τ, T] = 0 regardless of D(t), but the length of the period would be affected by any change in τ.

INDEX

Adaptive expectations, 55, 194
Antipollution policy, 234-239
Arc, 4-5
Arc length, 41
Arc value, 5, 13
Arrow, K. J., 217, 244n., 245-247, 268, 278
Arrow sufficient condition, 217-218, 252
 applied, 219-221
 for constrained problems, 290-291
Autonomous problem, 103, 212
 constancy of optimal Hamiltonian in, 190
 examples of, 106, 113, 248-249, 257
Auxiliary variable, 167
Bang-bang solution, 164, 178
Baumol, W. J., 292-293
Bellman, R. E., 20, 22
Benveniste, L. M., 251
Bernoulli, J., 17
Beyer, W. H., 42n.
Bliss, 113, 123, 132, 248
Blocked interval, 299
Boltyanskii, V. G., 19n., 167
Bolza, problem of, 15-16, 76
Boundary solution, 164, 173, 237-239
Burness, H. S., 152-153, 155
Calculus of variations, 17-18
 fundamental (simplest) problem of, 18, 27
 link with optimal control theory, 166-167, 191-193
Canonical system, 170-171



6 ( b ) Maximize

Lm~(q, qf)e-ptdt Lmqfdt

subject to


2 ( a ) No, (5.40) is valid with or without saturation. ( b ) Equation (5.41) is unaffected, but (5.42) should be changed to Q ' ( K ) = 0 because now u can never be zero. 3 The new equilibrium is still a saddle point, but it involves a positive (rather than zero) marginal utility in equilibrium. 5 ( a ) Only the streamlines lying above the stable branch remain relevant; the others cannot take us to the new level of K T . ( c ) Yes.



~ * ( t ) 3e4-' - 3 = u*(t)= 2 y * ( t ) = 7ef - 2 u * ( t ) = A*(t) = ~ , e @ ' A+- fit y*(t) = - l ) ~ , e f i '- (6+ l ) A , e - f i t



-e - 2 f i

where A ,

(I -


( a+ I )



5 The general solution is the same as in Example 2.

B 1 F= - U ( C ) + D ( L ) + A[-C + Q ( K , L ) - K'] - U 1 ( C ) - h =O * h = -U'(C)= - p 2 ( a ) Three variables and two constraints. ( b ) F=T - C)e-Pf A,[-T + ( Y K- p K 2 ] A,[-C ( a ~ " bK'1 = A1 = e - p f For P : e-pf - A1 = 0 A , = -e-P* For C: -e-Pf - A , = 0 =, d For K : A,(a - 2 P K ) - -[A,(2aK1 + b)]& 0 dt dA = , Al(a - 2 p K ) - Z ( 2 a ~ ' b ) - A,(2aK) = 0 dt

1 ( a ) The cyclical pattern will be eliminated. 1 = -(T - t ) 2 - k h b ~ e ~ ( ~ -0 )for all t (except t <'






1 ( a ) Yes. ( b ) No, E*, is characterized by "marginal utility 1 m b disuki&." gd 3 ( a ) H = U [ C ( E ) P ( E ) ] ~ -- AE , ~'



2 Similarly to (6.10), A may now be condition will come into play.


0. A complementary-slackness

( b ) The A path is still a constant (zero) path. ( c ) E * ( t ) = E* as in (7.81). ( d ) The condition in part ( a )becomes

re, = g - Ep

r ~ , ,= res + ,

FE E re* >

[ E > 1 for positive MR]



32 1

3 It is not possible to have an interior solution for the control variable A. [Hint: An interior solution A* would mean that P A , + A, = 0, which would imply A , = constant, and Upe-" = S A P from the costate equation of motion. The existence of a tolerably low level of P would imply A p = 0 for all t , which would lead to a contradiction.]

1 ( a ) 4 ( k ) AKa U1(c) = c - ( ' + ~ ' 1 (c) H, = U - - c - ~ + m[Aka - c - ( n + 6)kl b


1 The Mangasarian conditions are satisfied, based on F being strictly concave in u (F,, = -2), f being linear in ( y , u ) , and (8.26) being irrelevant. The Arrow condition is satisfied because H O = Ay + ;A', which is linear in y for given A. 4 The Mangasarian and the Arrow conditions are both satisfied. (Here, H 0 is independent of y , and hence linear in y for given A.) 6 Both the Mangasarian and Arrow conditions are satisfied.

3 (b) E will give way to a new steady state, E′, on the k̇ = 0 curve to the right of E, with a higher k̄, where k̄ = [Aα/(n + δ + …)]^{1/(α−1)}. (c) No.

2 For a positive y₀, the y* value at first increases at a decreasing rate, to reach a peak when arc CD crosses the y axis. It then decreases at a decreasing rate until τ is reached, and then decreases at an increasing rate, to reach zero at t = T.
3 (a) The initial point cannot be on arc AO or BO; there must be a switch. (b) The initial point lies on arc AO; no switch. (c) The initial point lies on arc BO; no switch.
5 (a) The switching point, F, has to satisfy simultaneously

  y = ½z²  (z < 0)    [equation for arc BO]
  y = …    (… > 0)    [equation for parabola containing arc EF]

The solution yields y = … and z = … (the negative root is inadmissible). (b) τ = −z₀. (c) The optimal total time = 2… − z₀.

1 (b) To write it as Y = K^α[A(t)L]^β will conform to the format of (9.48).
5 The marginal productivity of human capital in research activities could not grow in proportion to A.
3 (a) Maximize  ∫₀ᵀ (1 − s)Ke^{−rt} dt
  subject to  K̇ = A(sK)^…  (A > 0)
  K(0) = K₀,  K(T) free  (K₀, T given)
  0 ≤ s ≤ 1
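The switching-arc answers above concern the classic time-optimal problem: drive y″ = u with |u| ≤ 1 from an initial state (y₀, z₀), z = y′, to the origin. A minimal simulation sketch, assuming the standard switching curve y = −½z|z| (the parabolic arcs through the origin); the initial values and tolerances are made up:

```python
# Bang-bang control for the double integrator y'' = u, |u| <= 1.
def bang_bang_control(y, z):
    s = y + 0.5 * z * abs(z)       # s > 0: above the switching curve -> u = -1
    if s > 0:
        return -1.0
    if s < 0:
        return 1.0
    return -1.0 if z > 0 else 1.0  # on the curve: brake toward the origin

def simulate(y0, z0, dt=1e-4, t_max=20.0):
    # Forward-Euler integration until the state enters a small ball at the origin.
    y, z, t = y0, z0, 0.0
    while (y * y + z * z) > 1e-4 and t < t_max:
        u = bang_bang_control(y, z)
        y, z, t = y + z * dt, z + u * dt, t + dt
    return y, z, t

y_end, z_end, t_used = simulate(2.0, 0.0)
print(abs(y_end) < 0.05 and abs(z_end) < 0.05)  # True: the origin is reached
```

The single sign switch of u along the trajectory is the "bang-bang" behavior the exercise's arcs AO/BO describe; starting at (2, 0), the switch occurs where the u = −1 trajectory meets the arc y = ½z², z < 0.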

1 An unbounded solution can occur when A*

1 (a) λ = … + θh_y
  (b) λ = … + θ̇h_y + θ(h_{ty} + h_{yy}f)
      −F_y − λf_y … (θ + δ)h_y = 0
  (c) λ = … + θh_y + θ̇h_y + θ(h_{ty} + h_{yy}f) = … ; −θ ≤ 0 [since …]
2 (a) Higher (lower) demand would lower (raise) T. But it is also conceivable that a mere rearrangement of the time profile of D(t) may leave T unchanged. (c) Yes, because Q*[t, T] is identical with D(t). (e) X*[t, T] = 0 regardless of D(t), but the length of the period would be affected by any change in T.




Adaptive expectations, 55, 194
Antipollution policy, 234-239
Arc, 4-5
Arc length, 41
Arc value, 5, 13
Arrow, K. J., 217, 244n., 245-247, 268, 278
Arrow sufficient condition, 217-218, 252
  applied, 219-221
  for constrained problems, 290-291
Autonomous problem, 103, 212
  constancy of optimal Hamiltonian in, 190
  examples of, 106, 113, 248-249, 257
Auxiliary variable, 167
Bang-bang solution, 164, 17…
Baumol, W. J., 292-293
Bellman, R. E., 20, 22
Benveniste, L. M., 251
Bernoulli, J., 17
Beyer, W. H., 42n.
Bliss, 113, 123, 132, 248
Blocked interval, 299
Boltyanskii, V. G., 19n., 167
Bolza, problem of, 15-16, 76
Boundary solution, 164, 173, 237-239
Burness, H. S., 152-153, 155
Calculus of variations, 17-18
  fundamental (simplest) problem of, 18, 27
  link with optimal control theory, 166-167, 191-193
Canonical system, 170-171



Cap~talaccumulation^. of a firm (see Jorgenson model; Eisner-Strotz model) under financial constraint, 310-313 golden rule of, 248, 259 modified golden rule of, 259 of society (see Ramsey model; Neoclassical theory of optimal growth) Capital saturation, 121, 125-126 Capital stock, 104 Cass, D., 253, 263-264 Catenary, 43 Catenoid, 43 Chakravarty, S., 99n., 100n. ~haracterisbc-root test, 85-86 Chiang, A. C., 38n., 40n., 50n., 65n., 86n., 88n., 117n., 153n., 174n., 183n., 185n., 189n., 197n., 245n.. 262n., mLn. Cobb-Douglas production function, 105,270 Complementary slackness, 138, 183,236,238, 279,287,302,312 Concavity, checking, 83-86, 88-90 conservation, 151-156 Constrained interval, 299 Constraint qualification, 278 a~olied. 294. 305, 308, 311 Constraints in calculus of vmations: differential-equation, 137 equality, 134-136 inequality, 137 integral, 138-141 isoperimetric problem, 138-141


32-34 expanded form of. 167. 10-11. 33 for several variables. R. 280 as shadow price. 287-289. 218. 301n. 220 phase-diagram analysis of. 76 McShane. G. 45-46 special cases of. 170-171 Harrod neutrality. 244n. 33.87-88. 290-291 Matthews. 130n.. 75 Hamiltonian function.. 19n. 130... 84-85 for sign semidefiniteness.. 143 with variable coefficient and variable term. 32. 220 Embedding of problem. 103. 127-128 Isoperimetric problem.. G. 210. J. 268 Legendre necessary condition. 50. 273 Hestenes. 148-156.257.. 61. 196. 63. H. 29-31 Iorfman. 217n. 117-120 sufficient conditions applied to. Hotelling. 259 modified. 150. 244-251 Inflation-unemployment tradeoff.. 221n. 178 constancy of. 308 Nordhaus. 164. 35-36 Linearization of nonlinear differentid equations.94 Multistage decision making. 149n. 99-101. 291 lynamic monopolist. 200. 183.294 Iantzig. R. 262 Long. 69-70.. Exhaustible resource. 280. hrrent-value Hamiltonian. 298-303 :onsumption saturation. 223-226 Mayer. 20-22 Efficiency labor. 19. V. 155 T. 1U :onvexity. 83-86. problem of. 294 economic interpretation of. 276. 27 First variation. 307n. A. 165-166. 212. 259 Gould. 200 Equation of motion. 207 linear (in u ) . 115. 106. 95-97 Fixed-endpoint problem (see Horizontalterminal-line problem) Fixed-time (-horizon) problem (see Verticalterminal-line problem) Flexible accelerator. H. B.. 70n. 248. 125-126 2ontrol set.. 290n. 169. Mangasarian. 249n. 217n. E. 193. 152. Forster. 167.. 216 Intertemporal resource allocation. 34 for higher derivatives. 4-6 Natural boundary condition. 296-297 Infin~te horizon: convergence of functional. 19-20. Jacobian matrix. B. 280 constancy of. 278 Implicit-function rule. 88-90 :orner solution (see Boundary solution) 2ostate equation. E. 227 Euler equation. 135-138. 192 development of. A. 85 Iifferential equation: with constant coefficient and constant term. S. 300. 205-210 rationale of. 282-284 integral. 80 Leland. 139-143. 170 maximized. Functional. 214-217. 290 Maximum area under a curve. 
167-171 for constrained problems (see Constraints in optimal control theory) for current-value Hamiltonian. 282 isoperimetric problem. 204. 269 Hurwicz. 100n. 190 Hamiltonian system.. C.. 20-22 Endogenous technological progress.. 290 optimal. 139-140 current-value. 276-277 inequality.113 methodological issues of. Kuhn-Tucker conditions. 234 Fulks. checking. 49-53. loon. 31... 19n. 257. M. 306n.294 hrrent-value Lagrange multiplier. 210 Lagrangian function (augmented Hamiltonian). 69-70. D. Knowles. 181 with second-order derivative. I... 135. L. 100n. 150-156 dynamic. W. Minimum surface of revolution. 303 Kurz... H. S. 291.. 309 Kamien.. 199 H (see H a m i l t o ~ a n function) H . 100n. Mangasarian sufficient conditions.291. Kuhn. 265 Newton. 240-243. 7-8 Gamkrelidze. 121. 106-110. 105. J. leterm~nantal test: for sign definiteness. 18. 219-221 for constrained problems.. 49. 162 Human capital. 54-58. 287-291. 194 Jacobian determinant.213. 210. 252.. 19. 265 Labor-demand adjustment. 262-263 Jorgenson model. 0. 152-153. 110 . Kemp.291. 243 Minimum-distance problem (see Shortestdistance problem) Minimum principle. 244-247 Hamermesh. 278-279. loon. 103-105 Jump discontinuity. 32. 279 current-value. 148 as a control problem. 265-267 Neutral technological progress.. 177-181 for several variables. 82. 3-6 lynamic production function. 253-262 with Harrod-neutral technological progress. 244-245 Iifferentiation of definite integral. G. 167 Modified golden rule. 134-135. D. 167 Golden rule. 63 Negative definiteness (see Quadratic form) Negative semidefiniteness (see Quadratic form) Neoclassical production function. F.. 124. Michel. 277-280 inequality integral.. 28. 137n. 244-247 Pitchford. 266 Eisner (see Eisner-Strotz model) lisner-Strotz model. 140 Learning by doing. 271. 15-16. 265. 236. 280-282 Isovote curve. 101-103. 267-274 Energy conservation. 87-88. L'HSp~tal'srule. 218. 245-247 Lew~s. 280-282 state-space. 75-78 Lagrange multipliers. 261-262 counterexamples against. P. 
296 . 168n. 217n. 94 lynamic optimization. 167-168 current-value. 294 Lagrangian integrand function. 93 Mishchenko. R. 105n. E. 200 Expectations-augmented Phillips relation. 17 Nonnegativity restriction. 134. 295.. 181 :estate variable. 130n. 106. R. 210 hrrent-value Lagrangian.. 40-41.. 47-48 integral form of. 247.. 29-30 applied. I. C. 259 Monopoly: consequences of. 151-156 Environmental quality.. 187-190 2 (see Lagrangian function) dc(see Current-value Lagrangian) Labor-augmenting technological progress. 141-143 Maximum principle. 161-162 2onvergence of functional. 69. W. 244n... 192 Leibniz's rule. 287-290. 293 Neoclassical theory of optimal growth. 213. W. 155 Maximized Hamiltonian. 99-101. 243. 48 Evans. G. P. 205. 37-45 EuIer-Lagrange equation. 293. Hicks neutrality. 164 2ontrol variable.. Halkin. 265 Horizontal-terminal-line problem. 65. 185. 197 with separable variables. 300 Junction point. 214 L. 111 Inventory problem.. 298. 189 w~tb constant coefficient and variable term. 194 Extremal. 182.252. 49-53. 247n. 252 applied. (see Current-value Hamiltonian) H O (see Maximized Hamiltonian) Hadley. Linearity of F function. M. H. M. 250-251 Shell. M. 287-289.. 206-208 2ounterexamples against infinite-horizon transversality condition: Halkin. 138. 247-251 :ourant. 70-72 as a constrained problem. 91-94. 253-254. 98-103 transversality conditions. 211-212 economic interpretation of. 149. 19n. 307-310 Investment (see Capital accumulation) Isochrone. V..Zonstraints in optimal control theory: equality. R. 152-153. 102. 148. 141-142 Euler-Poisson equation. 145-147 Integration by parts. 55. N. .. 264-265 Iynamic programming..

200 as a stock. 11. 200-204.308 Turnovsky. J. 217n. E. 273 Stiglitz. 206-209 Shell.181.. 19n.. 66. 54 Taylor series. J . 19. 40 luadratic form. 99n. 260 and Harrod neutrality. 278n. 190.. 122-126 simplified. 28. 227. 184 with truncated vertical terminal line. G. D.. 278 Research success coefficient. 200 as a flow.. 290-291 Switching of control. 306n. 268 Shortest-distance problem: in calculus of variations: between a point and a line. H... 105.65-66. 270 Revenue-maximizing firm. 168n. 193-199. 182.. ~ ~ o b with truncated vertical terminal line. 166-167. 247-250 . 182-183 with vertical terminal line. 81n.. 290n. 83-86. 193 with infinite horizon. 229-231 two-variable. 244n. 292-298 Rival good. Turnpike behavior. 218-219 Signum function. L.247.. D. 192 in optimal control theory: economic interpretation of. m . Szego. 79-81. 73-74 with horizontal terminal line. 116 ' m k condition.. P.. 260-261 Stage variable. 6 'ath value. 171-173 sufficient conditions. R. F.. 130-132 in optimal control theory. Saddlepoint equilibrium. 143 'roduction function: Cobh-Douglas. 293 'ure competition and social optimum. 214-218. K.. 250n. 66 with terminal curve. 11 1-116 as a constrained problem. 181. 167. 303n. 251. G. 'olitical business cycle. 255 Solow. 264 endogenous.. Taylor. 113. 270 dynamic. 126n. 65-66 with vertical terminal line. 265-266. 247n. 269 Scheinkman. 44 between two points on a surface. 86-87. 252 applied.. 286 Social loss function. 240. 247-251. 300n. 18-20 features of. l e m b l . 6-7 optimal. 221n. 22 'rinciple of reciprocity. 144-145 phase-diagram analysis of. 265 Terminal-control problem. 15. 124.. 93 in optimal control theory: between a point and a line.227. 103.261 -- ~ ~ ~ ~ . 244n.. 235-236. sgn (see Signum function) Shadow price. 95-97 Seierstad. Schworm. 63-64.63.60-63. 269-274 .. 182 with truncated horizontal terminal line. 260-262 'helps. types of... 169. A. 91-94 Second variation. 126-129 Uzawa.thagoras9 theorem. 11-12.. J.. P. N.. Sato. 253-254. 248. 
307-313 State variable.. 150-151 S. 188. 187-189. Truncated-horizontal-terminal-line problem. 4 Steady state. 290n.. 251 Schwartz.. 221n. 120-126 sufficient condition applied to.. 4 State equation. S. 64. 226. 63.. 232. N. 120. 300n. 269 Romer. 118. A. 18 Transversality conditions: in calculus of variations. 21 Iptimal value function. 167n. 96 Technological progress. A. N . 81-83 for infinite horizon. 162. 117-129. 38-39. 66-67 between two gwen points. 115 with terminal curve. 265 labor-augmenting. 250. M. 163-164 link with calculus of variations. 230-233 Sydsleter. 255. 250 hmsey model. Takayama.72 Truncated-vertid-terminal-line problem. horizontal terminal line.192. 163 'iecewise differentiability. 248.186-187. 182 Time-optimal problem. 115. 12-16 Iptimal control theory. Weierstrass-Erdmann corner condition. 174-176. 6-7 'erturbing curve. 165 State-space constraints. 191-193 with several state and control variables. 240-243. 180-182 'hase diagram: one-variable. 303n. 234-235 'ontryagin. 52 Tradeoff between inflation and unemployment (see Inflation-unemployment tradeom Trajectory. 267-274 neutral exogenous. 298-303 economic examples of. ~ ~ ~ i i g9. 163 'itchford. S.. K. 118-120. E. M. weitzman. 'ositive definiteness (see Quadratic form) 'ositive semidefiniteness (see Quadratic form) 'rinciple of optimality. 284-286 'ollution.$26 INDEX Ibjective functional. 119.. 9-11 Variations. 290n. 244n. 265 Stable branches. 278 Variable endpoint. J.. i95 Vote function. 310-313 Second-order conditions. 18. 124.. 192 Trirogoff. W. 177. S. 111. 120. 251 Yellow brick road. 244n. 244 Terminal-curve problem. K. 27 Transition equation.~164. L. 226-233 simplest problem of. 164-166 Iptimal policy functional. 120. A. 248n.. 286-287 Tintner. R. 11.182-183. 123-124. 128 h s e y device.... 221-226 example from Pontryagin. 99n. 209-210 with horizontal terminal line. 168n. 262 Samuelson. 131-132 h s e y rule. 9-10. 95-97 171. 101-103. 136-137 second-order condition. 248 Sollow neutrality. 
221n.. 264-265 neoclassical. first and second. K.L. 151-155 Strotz. 20-21 'ath: admissible.. 193 with infinite horizon. 64-65 with trunca. H. P. 219-221 for constrained problems. 241n. 54-55 Social utility function.. 88-90 'amsey. P. (see Eisner-Strotz model) Sufficient conditions: in calculus of variations. 'iecewise continuity. 163n. E. 268. 193-194 Vousden.~ r ~ ..

? sf! "7: & :.. . re'' Dr. 1: 6. .:g$$ :. i $ &.+ i4 " I. p x Q$7 ! & .2 gn C1 CHIA .gS.
