# ECON 4741H - Quantitative Analysis of the Macroeconomy

Amanda M Michaud

Numerical Methods: Value Function Iteration

We have spent many weeks working with local approximation methods to compute statistics from data generated by real business cycle models. Now we will conclude the course by learning numerical methods suitable for a larger range of problems. In particular, Value Function Iteration will allow us to characterize properties of the policy function in problems for which the policy function does not have a closed-form solution. In order to do this, we will need to apply our knowledge of recursive dynamic programming.

## 1 The Value Function

As you recall, the key feature of dynamic programming problems is that we are solving the same type of maximization problem every period, subject to a different state variable. In our cake-eating problem, we solve how much cake to eat (ω − ω′) in the current period given how much to save for future periods (ω′). We don't care why we have cake of size ω; we only care about how much we have today and how much we will save for the future. We also take as given that, when faced with the similar problem tomorrow, we will behave optimally. Therefore, we don't need to solve tomorrow's decision problem today. In order to know how much to leave for the future, we only need to know the present discounted value of all cake sizes we could save. This is given by the value function V(ω):¹

V(ω) = max_{ω′ ∈ [0, ω]} u(ω − ω′) + βV(ω′)

We have discussed how the above problem may be a little tricky in general because V(·) is an unknown function. In general we must prove this function exists and that the problem is well defined; see previous notes for some sufficient conditions. We will now consider the functional equation as defining an operator T in function space: V_n = T V_{n−1}. Explicitly:²

V_n(ω) = max_{ω′ ∈ [0, ω]} u(ω − ω′) + βV_{n−1}(ω′)

Just as a function is defined as an operator that maps real numbers into real numbers, this operator maps functions into functions. We will consider problems that satisfy the following:

- The constraint set is compact.
- The current-period return function u(·) is strictly concave, strictly monotone, and continuous.
¹ The following are synonymous: Value Function, Bellman Equation, Recursive Dynamic Program, Functional Equation.

² Don't be confused by the n subscript: it denotes the iteration number, not the time subscript t we used for finite-horizon models. We will only consider infinite-horizon models when dealing with dynamic programming. Alternatively, you could put time in the state space and specify V(T) = 0 for your finite horizon.


- Discounting: β ∈ [0, 1).

This means that if we choose some function V_{n−1} that is monotone, strictly concave, and continuous, and then apply the operator T, we will be left with a strictly monotone, strictly concave, continuous function V_n. These assumptions loosely serve as sufficient conditions to apply the Contraction Mapping Theorem, which states that the operator T maps monotone, strictly concave, continuous functions into strictly monotone, strictly concave, continuous functions. Furthermore, the operator defines a contraction towards a unique fixed point. In other words, after a large number of iterations we will find a unique function such that, when put into our operator, we get the same function back: V = TV.

**Contraction Mapping Theorem.** Under suitable assumptions, the Bellman equation defines an operator V_n = T^n V_0 that:

- will converge to a fixed point V* = TV*; this means d(V_n, V_{n−1}) → 0 as n → ∞;
- will converge to the same fixed point V* given any initial value V_0: the starting guess V_0 can be any monotone, strictly concave, continuous function and we will converge to the same point.

Here is a simple illustration of what is going on. Let's take the functional equation for the cake-eating problem with utility u(ω) = ln(ω), and as an input function we will use V_0(ω′) = ln(ω′). Substituting this into the functional equation leaves us with V_1(ω) as the value of a well-defined maximization problem:

V_1(ω) = max_{ω′ ∈ [0, ω]} ln(ω − ω′) + βV_0(ω′)   (1)

V_1(ω) = max_{ω′ ∈ [0, ω]} ln(ω − ω′) + β ln(ω′)   (2)

Now we solve the maximization problem to figure out the value of V_1(ω). The objective is a concave function, so we know the maximum is where the first derivative with respect to the choice variable equals zero. Our first-order conditions give a policy function that holds for all ω: ω′ = (β/(1 + β)) ω. We can now substitute the policy function into the functional equation to get the value for all ω:

V_1(ω) = ln(ω − (β/(1 + β)) ω) + β ln((β/(1 + β)) ω)   (3)

V_1(ω) = (1 + β) ln(ω) + A, where A ≡ ln(1/(1 + β)) + β ln(β/(1 + β))   (4)
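Because each iterate stays in the log family, this convergence can be checked numerically before we introduce any grids. Writing V_n(ω) = c_n ln(ω) + A_n, one application of the operator updates the slope to c_n = 1 + β c_{n−1}, and the implied savings share is ω′/ω = β c_{n−1}/(1 + β c_{n−1}). A minimal Python sketch (Python standing in for Matlab throughout these examples; β = 0.95 is an illustrative choice) that tracks the savings share across iterations:

```python
# Each iterate of the cake-eating Bellman operator with u(w) = ln(w)
# has the form V_n(w) = c_n * ln(w) + A_n.  Applying T updates the slope:
#     c_n = 1 + beta * c_{n-1},
# and the optimal savings share at iteration n is
#     w'/w = beta * c_{n-1} / (1 + beta * c_{n-1}).

beta = 0.95            # illustrative discount factor
c = 1.0                # c_0 = 1, i.e. the guess V_0(w) = ln(w)
shares = []
for n in range(200):
    shares.append(beta * c / (1 + beta * c))
    c = 1 + beta * c   # slope coefficient of the next iterate

print(shares[0])       # beta/(1+beta): the first-iteration policy from the notes
print(shares[-1])      # approaches beta, the fixed-point savings rule
```

The shares rise monotonically toward β, illustrating both parts of the theorem: the sequence converges, and (as you can check by starting `c` at any other positive slope) it converges to the same limit.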

Our operator has indeed transformed a monotone, strictly concave, continuous function into a strictly monotone, strictly concave, continuous function. We can continue to repeat this process. Using A ≡ ln(1/(1 + β)) + β ln(β/(1 + β)):

V_2(ω) = max_{ω′ ∈ [0, ω]} ln(ω − ω′) + β[(1 + β) ln(ω′) + A]   (5)

which gives the new policy:

ω′ = ((β + β²)/(1 + β + β²)) ω   (6)

We can see that this is quickly converging to the analytic solution, ω′ = βω (equivalently, consumption ω − ω′ = (1 − β)ω), but that it would take infinite time to get there exactly.

## 2 Value Function Iteration

General Procedure for Value Function Iteration:

1. Start with an arbitrary strictly increasing, strictly concave, continuous function V_0.
2. Compute V_1 = T V_0.
3. Iterate V_n = T V_{n−1}.
4. Stop when d(V_n, V_{n−1}) < ε, where d(·) is some notion of distance between two functions and ε is a small positive number.

### 2.1 Numeric Considerations

Matlab is not a symbolic language: it is not going to understand how to define and store new functions. Asking Matlab to keep track of a new function G(x) = H(F(x)) isn't going to go over so well. To see this, consider what it takes to graph any function in Matlab. You must first define a vector x of n discrete points and then compute another set of n points defined by y_i = F(x_i). Much like when we plot lines, if we use enough points, we will have a pretty good idea of the properties of the functions we are interested in. Therefore, instead of keeping track of functions, we will keep track of sets of points and the corresponding values of the functions we are interested in at those points.

#### 2.1.1 1) Defining Grids

Our value functions are functions of state variables. Therefore, we will need to define a grid of points representing these state variables to be able to characterize the value function. A grid is a set of n points {x_i}_{i=1}^{n} ⊂ [x_1, x_n] such that x_i < x_j ∀ j > i. For the state variables we will call this collection of points S ≡ {s_i}_{i=1}^{ns}. Which points should we select? We must make the following considerations.

**Which interval?** This is rather obvious: we should define the grid for the state variable(s) over the range of values we are interested in! For example:

- Cake-Eating Problem: [0, ω̄], where ω̄ is the largest size of cake we think somebody facing this problem would start out with.
- Neoclassical Growth Model: [K̲, K̄] such that K^ss ∈ [K̲, K̄] and the interval captures a reasonable range of business cycle fluctuations.

**Where to put the points?** The easiest choice of placement for grid points is to distribute them evenly across the interval you would like to study. The Matlab command linspace does just that:

- xgrid = linspace(xlow, xhigh, n)

where xlow and xhigh are the lower and upper bounds on the interval of choice and n is the number of grid points. A better idea is to put more points where the function has more curvature, i.e., where the absolute value of the second derivative is very high. How do we know where the value function is likely to have curvature if we do not know what the value function is? A good rule of thumb is to use your objective function to get a sense. This can be accomplished by defining a new grid where points are defined by a concave function over linearly spaced grid points.

**How many points?** The accuracy of our characterization of the value function is obviously increasing in the number of points we choose; however, you will see that our computation time is increasing in the number of points as well. Selecting higher n has diminishing returns in accuracy and increasing marginal computational burden. In fact, for problems with multiple state variables, computation time increases exponentially with grid size. Economists, computer scientists, and others in the know refer to this as the Curse of Dimensionality. Choosing the right n seems like a nice optimization problem in itself. What is the optimal choice? That requires a metric weighting accuracy and patience. You be the judge. At the minimum you should run your program for a given sized grid, then increase the grid size by 5% or so and make sure that your results do not change. In practice we will want to compute a few iterations, graph our function, and then possibly change grid points around.
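To make the placement choices concrete, here is a small sketch of both schemes (Python in place of Matlab; the endpoints, grid sizes, and quadratic transform are illustrative, not from the notes). One common variant pushes evenly spaced points through a power transform: with exponent above 1 the points cluster near the low end of the interval, which is where a log-shaped objective has the most curvature.

```python
def linspace(lo, hi, n):
    """Evenly spaced grid: a stand-in for Matlab's linspace(lo, hi, n)."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

def power_grid(lo, hi, n, p=2.0):
    """Unevenly spaced grid: p > 1 clusters points near lo, p < 1 near hi."""
    return [lo + (hi - lo) * u ** p for u in linspace(0.0, 1.0, n)]

even = linspace(0.0, 2.0, 5)      # [0.0, 0.5, 1.0, 1.5, 2.0]
curved = power_grid(0.0, 2.0, 5)  # [0.0, 0.125, 0.5, 1.125, 2.0]
print(even)
print(curved)
```

Both grids share their endpoints, but the curved grid spends three of its five points on the lower half of the interval; the direction of clustering should match where you expect the value function to curve.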

#### 2.1.2 2) Initial Guess V_0

Now that we have a grid for our state variables S ≡ {s_i}_{i=1}^{ns}, we can define another set of points V_0 ≡ {v_{0i}}_{i=1}^{ns} that correspond to the function we have in mind evaluated at each s_i: v_{0i} = V_0(s_i). How should we choose these points? We know by the contraction mapping theorem that the operator defined by the Bellman equation converges to a unique function regardless of the initial guess. However, because of approximation error, we are no longer working with functions; we are working with sets of points that approximate functions. We cannot prove a theorem that says these sets of points will converge to a unique set of points. Therefore we will want to be careful with our initial guess.

Our initial guess should be guided by the problem itself. What do we think are reasonable values? One good start is the present discounted value of the objective function given the current state. For the cake-eating problem we could define:

v_{0i} = u(s_i / 2) / (1 − β)

where u(·) is a concave utility function and s_i ∈ [0, ω̄] is a point on the state grid. By dividing by 2, I hope to be in a reasonable place on the utility function in terms of curvature, but also to still have enough difference between the guesses for different points on the grid. If you find your initial guess is bad, take a look after a few iterations and update your beliefs on what your guess should look like.

#### 2.1.3 3) Given {v_{0i}}, solve for the policy

The next step is to compute the updated guess for the value function as defined by the operator. To do this we need to solve for the policy given our current "guess" {v_{0i}}_{i=1}^{ns}. However, we only know values of v_0 for the points we have chosen, so our policy must yield a state variable for tomorrow that lies on those same grid points.³ Therefore we have to tell Matlab: given some current state s_i, find another point on the state grid s_j that gives the highest value of the above problem. This reduces to:

g_i = argmax_j u(s_i, s_j) + β v_{0j}

The most basic way to do this is the following:

³ We will do better later.

1. Start with a given point on our state grid, s_i.
2. Calculate R(i, 1) = u(s_i, s_1) + β v_{01} for the first possible policy choice s_j = s_1.
3. Calculate R(i, 2) = u(s_i, s_2) + β v_{02} for the second possible policy choice s_j = s_2.
4. Repeat for all j = 1, ..., ns.
5. Matlab's find command can then search over R(i, j) and locate j*, the policy that yields the highest value.

However, you should note that this requires making ns × ns calculations and then searching one-by-one over the grid. For a grid of size 20, that is already 400 calculations and points to sift through. How can we do better? It can be shown that the operator defined by the Bellman equation maps increasing and strictly concave functions into strictly increasing and strictly concave functions. Let's use these two properties.

**Exploiting Concavity**

1. Start with a given point on our state grid, s_i.
2. Calculate R(i, 1) = u(s_i, s_1) + β v_{01} for the first possible policy choice s_j = s_1.
3. Calculate R(i, 2) = u(s_i, s_2) + β v_{02} for the second possible policy choice s_j = s_2.
   - If R(i, 2) > R(i, 1), choosing s′ = s_2 yields higher value than choosing s′ = s_1, so store g_i = 2 and keep going (calculate R(i, 3), compare it with R(i, 2), and so on).
   - If R(i, 2) < R(i, 1), choosing s′ = s_2 yields lower value than choosing s′ = s_1. Stop here and store g_i = 1.
4. Keep going: for each subsequent j, compare R(i, j) with R(i, g_i), where g_i is the policy that has yielded the highest value thus far.
   - If R(i, j) > R(i, g_i), choosing s′ = s_j yields higher value than choosing s′ = s_{g_i}, so store g_i = j.
   - If R(i, j) < R(i, g_i), choosing s′ = s_j yields lower value than choosing s′ = s_{g_i}, so we should stick with our current contender for the max, s′ = s_{g_i}, and STOP! Because of concavity, we know the marginal return to increasing the state variable tomorrow gets smaller and smaller, so every point beyond s_{g_i} will yield lower value.

**Exploiting Monotonicity**

1. Start with the first point on our state grid, s_1.
2. Do the above ("Exploiting Concavity") to find the policy g_1 for this point.
3. Given s_2 > s_1, start the above procedure ("Exploiting Concavity") for s_2 with g_1 as the first point tested. Continue testing subsequent points until one yields a lower value; stop there and store g_2.
   - Because the value function and objective function are strictly increasing in the state variable, we know that the policy function should also be strictly increasing; i.e., in the cake-eating problem, when we have a larger cake, we eat more today and have more left over for tomorrow.
4. Repeat for s_3 by starting with g_2.
5. Repeat for s_i, i = 4, ..., ns.
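Both shortcuts combine into a single pass over the state grid. The sketch below (Python in place of Matlab; the grid size, β = 0.95, the cake-eating return u(s_i, s_j) = ln(s_i − s_j), and the initial guess from step 2 are illustrative choices) starts each state's search at the previous state's policy and stops climbing as soon as R(i, j) turns down:

```python
import math

beta, ns, wbar = 0.95, 200, 2.0
grid = [wbar * (i + 1) / ns for i in range(ns)]       # strictly positive state grid
v0 = [math.log(s / 2) / (1 - beta) for s in grid]     # initial guess from step 2)

def R(i, j):
    """Return of moving from state grid[i] to grid[j]: u(s_i - s_j) + beta*v0[j]."""
    c = grid[i] - grid[j]                              # cake eaten today
    return math.log(c) + beta * v0[j] if c > 0 else -math.inf

g, start = [], 0
for i in range(ns):
    # Exploiting monotonicity: begin at the previous state's policy.
    best = start
    j = start + 1
    # Exploiting concavity: climb while the return keeps rising, then stop.
    while j < ns and R(i, j) > R(i, best):
        best, j = j, j + 1
    g.append(best)
    start = best

print(g[:5], g[-5:])
```

Each row now touches only a handful of grid points instead of all ns, the policy comes out weakly increasing, and a brute-force argmax over every j gives the same answer here.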

#### 2.1.4 4) Calculate updated guess V_1

Here we update given our policy: we define a new set of points using our policy rule g_i for each i:

v_{1i} = u(s_i, s_{g_i}) + β v_{0,g_i},  ∀ i = 1, ..., ns

#### 2.1.5 5) Check for convergence

Now we see if we have come "close enough" to a fixed point. Matlab loves matrices, so let's define our notion of distance as the following:

err = abs(V1 − V0) · I

where I is a vector of size ns filled completely with 1's. This gives us matrix notation for the sum of the distances between v_{0i} and v_{1i} for each point on our grid. If err < ε, we can go home. If not, we start the process over with V1 as our new guess for the set of points that characterize the true value function evaluated at the grid points we have chosen: define V0 = V1 and repeat this process until we get to an iteration where err < ε.

## 3 Closing Remarks

That is all I have for you for now. Value function iteration is a powerful tool for problems with no analytic solution. It is widely used and there are many refinements on it. In general these operate by reducing computation time or by better approximating the actual function. One problem you may have noticed in the above is that our choice of policy was restricted to lie on the state grid. If we have time, we will discuss how to do better here.
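For reference, the five steps of the procedure can be assembled into one short program (Python in place of Matlab; the grid size, β, and ε are illustrative choices). To keep the grid version well defined at the boundary, this sketch swaps ln(c) for u(c) = √c, so that u(0) = 0 is finite; with √c utility the cake-eating problem has the known closed-form policy ω′ = β²ω, which gives us something to check the converged grid policy against.

```python
import math

beta, eps = 0.9, 1e-6
ns, wbar = 121, 1.0
grid = [wbar * i / (ns - 1) for i in range(ns)]       # step 1): evenly spaced grid
u = lambda c: math.sqrt(c)                            # sqrt utility so u(0) is finite
v = [u(s / 2) / (1 - beta) for s in grid]             # step 2): PDV of u at half the cake

def bellman(vals):
    """Steps 3)-4): apply T on the grid; return the updated values and policy."""
    v1, g = [], []
    for i, s in enumerate(grid):
        best_j, best_val = 0, -math.inf
        for j in range(i + 1):                        # feasible saves: grid[j] <= s
            val = u(s - grid[j]) + beta * vals[j]
            if val > best_val:
                best_j, best_val = j, val
        v1.append(best_val)
        g.append(best_j)
    return v1, g

# Step 5): iterate until the summed distance between guesses is below eps.
it, err = 0, 1.0
while err > eps and it < 2000:
    v1, g = bellman(v)
    err = sum(abs(a - b) for a, b in zip(v1, v))
    v, it = v1, it + 1

print(it, err)                                        # iterations used, final distance
print(grid[g[-1]], beta ** 2 * grid[-1])              # grid policy vs analytic policy
```

The loop converges in a few hundred iterations at these settings, the computed policy is weakly increasing, and the top-of-grid savings choice lands within a couple of grid steps of β²ω̄.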