1. Course Description

1.1. Overview. The goal of this course is to familiarize students with dynamic optimization techniques for both discrete and continuous time stochastic problems. In particular, the course will present results in discrete time dynamic programming and continuous time optimal control.

1.2. Texts. I recommend that you purchase Ross (1983) and Stokey, Lucas, and Prescott (1989).

1.3. Grading. The grade will be based on a midterm exam (February 4), a final exam and homework assignments.

1.4. Office Hours. Wednesdays 1 - 2:30.

2. Course Outline

(1) Introduction and warm-up
    (a) Finite horizon dynamic programming. Examples: Ross (1983), ch. 1
    Reading: Ross (1983), ch. 1
(2) Mathematical preliminaries
    (a) Continuity, compactness and convexity of correspondences
    (b) Berge's theorem of the maximum under convexity
    (c) Contraction mapping theorem, Blackwell's sufficient conditions for a contraction
    Reading: Stokey, Lucas, and Prescott (1989), ch. 3
(3) Deterministic and countable state dynamic programming with discounting
    (a) Bellman's functional equation and the principle of optimality
    (b) Value iteration and policy improvement
    (c) Linear programming solution
    (d) Contraction mapping theorem, Blackwell's sufficient conditions for a contraction
    Reading: Ross (1983), ch. 2
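As a concrete preview of items (2)(c) and (3)(a)-(b), value iteration simply applies the Bellman operator repeatedly; since the operator is a contraction of modulus β under Blackwell's sufficient conditions, the iterates converge to the unique fixed point, from which an optimal stationary policy can be read off. The sketch below is a minimal Python illustration on a hypothetical two-state, two-action discounted problem (the states, actions, rewards and discount factor are invented for illustration and are not course material):

```python
# Value iteration for a tiny discounted problem (hypothetical example).
# Bellman equation: V(s) = max_a [ r(s,a) + beta * V(s'(s,a)) ].
# The Bellman operator is a contraction of modulus beta (Blackwell),
# so iterating it from any initial guess converges to the unique fixed point.

beta = 0.9                      # discount factor
states = [0, 1]
actions = ['a', 'b']
# reward[s][x] is the per-period reward; trans[s][x] the (deterministic) next state
reward = {0: {'a': 1.0, 'b': 0.0}, 1: {'a': 0.0, 'b': 2.0}}
trans = {0: {'a': 0, 'b': 1}, 1: {'a': 0, 'b': 1}}

def bellman(v):
    """One application of the Bellman operator T to a value function v."""
    return {s: max(reward[s][x] + beta * v[trans[s][x]] for x in actions)
            for s in states}

def value_iteration(tol=1e-10):
    """Iterate T until successive values differ by less than tol in sup norm."""
    v = {s: 0.0 for s in states}
    while True:
        tv = bellman(v)
        if max(abs(tv[s] - v[s]) for s in states) < tol:
            return tv
        v = tv

v = value_iteration()
# An optimal stationary policy attains the maximum at the fixed point.
policy = {s: max(actions, key=lambda x: reward[s][x] + beta * v[trans[s][x]])
          for s in states}
```

Policy improvement and the linear programming solution of item (3)(c) rest on the same fixed-point characterization; this is only a sketch of the technique, not an implementation from the readings.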

(4) General analysis of the discounted dynamic programming problem
    (a) Existence of stationary optimal policies and the principle of optimality; measure theoretic issues. Shapley (1953), Blackwell (1965), Strauch (1966), Maitra (1968), Furukawa (1972)
    (b) Dynamic programming under convexity. Atakan (2003)
    (c) Envelope Theorems, Differentiability and Monotonicity. Benveniste and Scheinkman (1979), Santos (1991) and Milgrom and Segal (2002)
    (d) Some examples:
        - The classical one sector growth model
        - A consumption-savings problem
        - A seller with unknown demand
    Reading: Stokey, Lucas, and Prescott (1989), ch. 4 and 5
(5) Minimizing Cost - Negative Dynamic Programming
    (a) The Bellman equation, value iteration and policy improvement. Kreps and Porteus (1977)
    (b) Example:
        - Optimal Stopping
    Reading: Ross (1983), ch. 3
(6) Maximizing Reward - Positive Dynamic Programming
    (a) The Bellman equation, value iteration, policy improvement, linear programming. Kreps and Porteus (1977)
    (b) Examples:
        - Gambling
        - Multi-armed bandit problems. Whittle (1980)
        - Weitzman (1979) or Pandora's problem
        - Search from an unknown distribution. Rothschild (1974)
        - Bayesian Sequential Analysis
    Reading: Ross (1983), ch. 4
(7) Average Reward Criteria
    (a) Existence of stationary optimal policies
    (b) Relationship between the limit of discounted optima and the long-run average problem. Dutta (1991)
    Reading: Ross (1983), ch. 5
    Other Reading: Bertsekas (2005)
(8) Deterministic Continuous Time Problems, Calculus of Variations and Optimal Control
    (a) Statement of the Continuous Time Optimization Problem

    (b) Calculus of Variation Problems and the Euler Equation
    (c) Pontryagin's Maximum Principle; derivation via dynamic programming and via variational ideas
    (d) The Maximum Principle in Discrete Time
    (e) Fixed end points, free end points, transversality conditions
    Reading: Rochet (2007) (on the class website)
    Other Reading: Bertsekas (2005), Volume 1. Kamien and Schwartz (1991), Part I. Fleming and Rishel (1975), ch. 1
(9) Stochastic Differential Equations and Markov Diffusion Processes
    (a) Brownian Motion
    (b) Stochastic Integration and Ito's rule
    (c) Stochastic Differential Equations
    Other Reading: Oksendal (2000), ch. 1 - 5
(10) Optimal Control of Markov Diffusion Processes
    (a) Statement of the Problem
    (b) Hamilton-Jacobi-Bellman Equation
    (c) Filtering Problems and the Kalman-Bucy Filter
    (d) Optimal Stopping Problems
    (e) Stochastic Control with Terminal Conditions
    Other Reading: Fleming and Rishel (1975), ch. 6. Oksendal (2000), ch. 10 and 11

References

Atakan, A. (2003): "Stochastic Convexity in Dynamic Programming," Economic Theory, 22(2), 457–466.

Benveniste, L., and J. Scheinkman (1979): "On the Differentiability of the Value Function in Dynamic Models of Economics," Econometrica, 47(3), 727–732.

Bertsekas, D. (2005): Dynamic Programming and Optimal Control, Volume 1. Athena Scientific.

Blackwell, D. (1965): "Discounted Dynamic Programming," The Annals of Mathematical Statistics, 36(1), 226–235.

Dutta, P. (1991): "What Do Discounted Optima Converge to?: A Theory of Discount Rate Asymptotics in Economic Models," Journal of Economic Theory, 55, 64–94.

Fleming, W., and R. Rishel (1975): Deterministic and Stochastic Optimal Control. Springer-Verlag.

Furukawa, N. (1972): "Markovian Decision Processes with Compact Action Spaces," The Annals of Mathematical Statistics, 43(5), 1612–1622.

Kamien, M., and N. Schwartz (1991): Dynamic Optimization. Elsevier.

Kreps, D., and E. Porteus (1977): "On the Optimality of Structured Policies in Countable Stage Decision Processes," SIAM Journal on Applied Mathematics, 32(2).

Maitra, A. (1968): "Discounted Dynamic Programming on Compact Metric Spaces," Sankhya: Series A, 30, 211–216.

Milgrom, P., and I. Segal (2002): "Envelope Theorems for Arbitrary Choice Sets," Econometrica, 70(2), 583–601.

Oksendal, B. (2000): Stochastic Differential Equations. Springer.

Rochet, J.-C. (2007): Dynamic Optimization in Continuous Time.

Ross, S. (1983): Introduction to Stochastic Dynamic Programming. Academic Press.

Rothschild, M. (1974): "Searching for the Lowest Price When the Distribution of Prices Is Unknown," The Journal of Political Economy, 82(4), 689–711.

Santos, M. (1991): "Smoothness of the Policy Function in Discrete Time Economic Models," Econometrica, 59(5).

Shapley, L. (1953): "Stochastic Games," Proceedings of the National Academy of Sciences, 39, 1095–1100.

Stokey, N., R. Lucas, and E. Prescott (1989): Recursive Methods in Economic Dynamics. Harvard University Press.

Strauch, R. (1966): "Negative Dynamic Programming," The Annals of Mathematical Statistics, 37, 871–890.

Weitzman, M. (1979): "Optimal Search for the Best Alternative," Econometrica, 47(3), 641–654.

Whittle, P. (1980): "Multi-Armed Bandits and the Gittins Index," Journal of the Royal Statistical Society, Series B (Methodological), 42(2), 143–149.

Alp E. Atakan, Kellogg School of Management, MEDS, Northwestern University, 2001 Sheridan Road, Evanston, IL 60208
E-mail address: a-atakan@kellogg.northwestern.edu