ECON 706 HW 4

GARTH BAUGHMAN

Exercise (1). Is the process stationary? Invertible?

Response to 1. Yes, the process is stationary because |φ| < 1. Yes, the system is invertible. Write $(y_t^i - \varepsilon_t^i)/\lambda_i = f_t$, so that
$$(y_t^i - \varepsilon_t^i)/\lambda_i = \phi (y_{t-1}^i - \varepsilon_{t-1}^i)/\lambda_i + \eta_t,$$
or
$$y_t^i = \phi y_{t-1}^i + \varepsilon_t^i - \phi \varepsilon_{t-1}^i + \lambda_i \eta_t,$$
so each $y^i$ is an ARMA(1,1) with parameters less than 1 in absolute value and is, thus, invertible.

Exercise (2). Simulate a sample of length 1000 using iid N(0, 1) shocks and λ = (1, .7, 1, 1) and φ = .9. Plot and discuss.

Response to 2. We plot the values for $y^1$, $y^2$, $y^3$ and $f$. This is an obvious AR system; the y's move closely with each other.
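For concreteness, here is a minimal MATLAB sketch of the simulation. It assumes the single-factor form implied by the derivation above, $f_t = \phi f_{t-1} + \eta_t$ and $y_t^i = \lambda_i f_t + \varepsilon_t^i$, with three observables, loadings (1, .7, 1), and unit-variance shocks; the role of the fourth entry of λ in the problem statement is not recoverable here, so it is ignored.

    % Hypothetical simulation sketch; the state-space form is inferred from
    % the ARMA(1,1) derivation above, with all shock variances set to 1.
    T = 1000; phi = 0.9; lambda = [1; 0.7; 1];
    f = zeros(T, 1);
    f(1) = randn / sqrt(1 - phi^2);   % initialize from the stationary law
    for t = 2:T
        f(t) = phi * f(t-1) + randn;  % AR(1) factor transition
    end
    y = f * lambda' + randn(T, 3);    % T x 3 panel of observables
    plot([y f]); legend('y^1', 'y^2', 'y^3', 'f');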

Figure 1. Y-1

Exercise (3). Estimate the model by Gaussian ML using the Kalman filter. For the optimization, use and compare (a) a quasi-Newton algorithm like DFP and (b) the EM algorithm. Compare standard errors based on the Hessian and “sandwich” estimators. Conditional upon the estimated parameters, calculate and plot the smoothed (not filtered) extraction of the latent state sequence, together with appropriate confidence bands, and compare it to the true state sequence.

Figure 2. Y-2

Response to 3. With regard to part (a), code can be found in q2a.m. When trying to maximize with respect to a fully unconstrained model – i.e. one where covariance matrices were unconstrained – maximization failed. With some starting values I would obtain convergence but with poor parameter values. With others I would not obtain convergence in any reasonable number of iterations. It's not clear why this occurs. If one starts with the true parameters then values much closer to the true values are obtained, but that is a somewhat dishonest test, as in reality one would not know the true values. So, we restricted to the case of uncorrelated errors and maximized with respect to λ2, λ3, and φ. For these we obtained estimates of .61, .81, and .99. These values are sensitive to starting values. The negative inverse of the hessian gives very small variances, which is reasonable with such a large sample. The values are
$$\begin{pmatrix} .44 & .14 & .005 \\ .14 & .48 & -.19 \\ .005 & -.19 & .50 \end{pmatrix} \times 10^{-3}.$$
These are all low.
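To make part (a) concrete, below is a minimal sketch of the Gaussian negative log likelihood computed by the Kalman filter for the restricted, uncorrelated-errors case. The parameter packing (λ2, λ3, φ), the unit shock variances, and the function name are assumptions, not necessarily what q2a.m does.

    % kf_nll.m -- hypothetical sketch: Gaussian -log L (up to a constant) via
    % the Kalman filter for the restricted model (lambda1 = 1, uncorrelated
    % unit-variance errors); theta = [lambda2; lambda3; phi].
    function nll = kf_nll(theta, y)
        lam = [1; theta(1); theta(2)]; phi = theta(3);
        a = 0; P = 1 / (1 - phi^2);          % stationary prior for the factor
        nll = 0;
        for t = 1:size(y, 1)
            a = phi * a;                     % predict the state
            P = phi^2 * P + 1;
            v = y(t, :)' - lam * a;          % innovation
            F = lam * P * lam' + eye(3);     % innovation covariance
            nll = nll + 0.5 * (log(det(F)) + v' * (F \ v));
            K = P * lam' / F;                % Kalman gain (1 x 3 here)
            a = a + K * v;                   % update the state and variance
            P = P - K * lam * P;
        end
    end

The quasi-Newton step is then a call like thetahat = fminunc(@(th) kf_nll(th, y), [.5; .5; .5]), with fminunc's BFGS update standing in for DFP.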

Figure 3. Y-3

From the “sandwich” estimator, which is $H^{-1}(GG')H^{-1}$ where H is the hessian and G is the gradient, we get
$$\begin{pmatrix} .27 & -.32 & -.33 \\ -.32 & .63 & -.86 \\ -.33 & -.86 & -.54 \end{pmatrix} \times 10^{-13},$$
which is many orders of magnitude less than the hessian estimate and is probably calculated erroneously.

As for the EM estimation, code can be found in q2b.m. This procedure was more stable in the sense that it was able to feasibly estimate a fully unconstrained error covariance matrix for y. This algorithm estimates λ2 = .6, λ3 = 1.02, the state error variance at 8.11, φ = .71, and the measurement covariance matrix at
$$\begin{pmatrix} .67 & -.02 & .02 \\ -.02 & .45 & .39 \\ .02 & .39 & .45 \end{pmatrix}.$$
Obviously, the algorithm did a poor job of calculating the covariances and also did poorly in calculating φ, but was better at λ than the quasi-Newton algorithm. Calculating the hessian, however, is more difficult, as now we have a system with 12 parameters and so the hessian has 144 elements.
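The sandwich estimate above can be assembled from finite-difference scores; a sketch follows, where kf_nll_t is a hypothetical variant of the likelihood code that returns the T×1 vector of per-period contributions rather than their sum, and H is the numerical hessian of the summed negative log likelihood.

    % Hypothetical sandwich computation H^{-1} (G'G) H^{-1}: G stacks the
    % per-period scores, obtained here by central differences.
    k = numel(thetahat); h = 1e-5;
    G = zeros(size(y, 1), k);
    for j = 1:k
        e = zeros(k, 1); e(j) = h;
        G(:, j) = (kf_nll_t(thetahat + e, y) - kf_nll_t(thetahat - e, y)) / (2*h);
    end
    V_sandwich = H \ (G' * G) / H;    % compare with inv(H) alone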

Figure 4.

Indeed, the estimated hessian is near singular, with Rcond on the order of $10^{-27}$. Thus, calculating the full covariance of the estimated parameters is not feasible. Turning to plotting the smoothed values of the latent variable f, we present that below in figure 5. While it's somewhat difficult to see in the figure, the smoothed extraction is plotted together with its confidence bands and the true state sequence.

Exercise (4). Perform a Bayesian analysis of the model, using MCMC methods (specifically, Gibbs sampling) to explore the posterior, treating the state vector as an additional parameter to be estimated. Use this population to estimate various parameters of interest. Provide a thorough characterization of the non-state parameter posterior distributions. Plot the estimated (posterior mean) state sequence together with appropriate posterior coverage interval, and compare to your results in 3. Provide detailed yet concise discussion.

Response to 4. The estimation proceeds as follows: First, we filter. Next, using the filtered values and the recursive equations on pages 16, 17 of the bayesian notes, we draw the $\alpha_t^j$. Then, given $\alpha^j$, we use OLS of $\alpha_t^j$ on $\alpha_{t-1}^j$ to get $s^2$ and use that to draw Q, which follows $(T-1)s^2/\chi^2(T-1)$. Then, given Q, draw φ from a normal with mean the φ from OLS and variance Q. Draw H and λ similarly from distributions whose parameters are given by OLS of y on α. Iterate (we go 500 times) and then accumulate (we accumulate 200 points). While in principle the system seems sound, my implementation utterly failed to adequately reproduce the data.
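The sampler just described can be condensed into the following skeleton for a scalar state: forward filtering, backward sampling of α, then OLS-based draws of Q, φ, λ and H. Priors are left flat to match the text, and one detail is changed: φ is drawn with the usual OLS posterior variance Q/(x′x), reading "variance Q" above as shorthand for that. None of this is necessarily how q4.m is organized.

    % Hypothetical Gibbs skeleton; 500 burn-in iterations, 200 accumulated.
    [T, n] = size(y);
    phi = 0.5; Q = 1; lam = ones(n, 1); Hd = ones(n, 1);  % Hd = diag of H
    for it = 1:700
        % 1. forward Kalman filter for the state given current parameters
        a = zeros(T, 1); P = zeros(T, 1); at = 0; Pt = Q / (1 - phi^2);
        for t = 1:T
            at = phi * at; Pt = phi^2 * Pt + Q;           % predict
            F = lam * Pt * lam' + diag(Hd);
            K = Pt * lam' / F;
            at = at + K * (y(t, :)' - lam * at);          % update
            Pt = Pt - K * lam * Pt;
            a(t) = at; P(t) = Pt;
        end
        % 2. backward sampling of alpha, as in the recursions cited above
        alpha = zeros(T, 1);
        alpha(T) = a(T) + sqrt(P(T)) * randn;
        for t = T-1:-1:1
            g = P(t) * phi / (phi^2 * P(t) + Q);          % backward gain
            m = a(t) + g * (alpha(t+1) - phi * a(t));
            alpha(t) = m + sqrt(P(t) - g * phi * P(t)) * randn;
        end
        % 3. Q | alpha: OLS of alpha_t on alpha_{t-1}, then (T-1)s^2/chi2(T-1)
        x = alpha(1:end-1); z = alpha(2:end);
        phihat = (x' * z) / (x' * x);
        s2 = mean((z - phihat * x).^2);
        Q = (T-1) * s2 / sum(randn(T-1, 1).^2);           % chi2 as summed squares
        % 4. phi | Q, alpha
        phi = phihat + sqrt(Q / (x' * x)) * randn;
        % 5. H and lambda from OLS of each y^i on alpha (lambda_1 normalized)
        Hd(1) = sum((y(:, 1) - alpha).^2) / sum(randn(T, 1).^2);
        for i = 2:n
            li = (alpha' * y(:, i)) / (alpha' * alpha);
            Hd(i) = sum((y(:, i) - li * alpha).^2) / sum(randn(T, 1).^2);
            lam(i) = li + sqrt(Hd(i) / (alpha' * alpha)) * randn;
        end
        % ...accumulate draws of (alpha, lam, phi, Q, Hd) once it > 500
    end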

Figure 5. Smoothed f

The code can be found in q4.m. The mean of the posterior sample for λ2 was .02, for λ3 it was .16, and for φ it was .01. What's worse, the maximum sampled values for these parameters were .09, .30 and .03. Obviously there was a serious limitation in my implementation, so I don't discuss the distributions of the obviously atrocious estimated parameters. The distribution for the latent variable, however, was excellent. A plot can be seen in figure 6, with red being the lower bound, turquoise the upper, blue the true value and green the estimated value. They do not track exactly, but that would not be expected. It is the error bounds which are impressive: the true value was within the estimated 95% confidence bands 95.5% of the time.
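The 95.5% coverage figure can be computed directly from the accumulated draws. A minimal sketch, assuming the kept state draws are stacked in a (draws × T) matrix alpha_keep (a hypothetical name) and f is the true simulated factor:

    % Pointwise 95% bands from sorted posterior draws, then empirical coverage.
    s = sort(alpha_keep, 1);              % sort draws within each period
    nd = size(s, 1);
    lo = s(ceil(0.025 * nd), :);          % lower band, per period
    hi = s(ceil(0.975 * nd), :);          % upper band, per period
    coverage = mean(f' >= lo & f' <= hi)  % fraction of periods inside the bands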

Figure 6. Gibbs estimation of latent variable
