
Econometrica, Vol. 59, No. 6 (November, 1991), 1551-1580

ESTIMATION AND HYPOTHESIS TESTING OF COINTEGRATION VECTORS IN GAUSSIAN VECTOR AUTOREGRESSIVE MODELS

BY SØREN JOHANSEN

The purpose of this paper is to present the likelihood methods for the analysis of cointegration in VAR models with Gaussian errors, seasonal dummies, and constant terms. We discuss likelihood ratio tests of cointegration rank and find the asymptotic distribution of the test statistics. We characterize the maximum likelihood estimator of the cointegrating relations and formulate tests of structural hypotheses about these relations. We show that the asymptotic distribution of the maximum likelihood estimator is mixed Gaussian. Once a certain eigenvalue problem is solved and the eigenvectors and eigenvalues calculated, one can conduct inference on the cointegrating rank using some nonstandard distributions, and test hypotheses about the cointegrating relations using the χ² distribution.

KEYWORDS: Cointegration, error correction models, maximum likelihood estimation, likelihood ratio test, Gaussian VAR models.

1. INTRODUCTION AND SUMMARY

A LARGE NUMBER OF PAPERS are devoted to the analysis of the concept of cointegration, defined first by Granger (1981, 1983) and Granger and Weiss (1983), and studied further by Engle and Granger (1987). Under this heading the topic has been studied by Stock (1987), Phillips and Ouliaris (1988), Phillips (1988, 1990), Johansen (1988b), and Johansen and Juselius (1990, 1991). The main statistical technique that has been applied is regression with integrated regressors, which has been studied by Phillips (1988), Phillips and Park (1988), Park and Phillips (1988, 1989), Phillips and Hansen (1990), Park (1988), and Sims, Stock, and Watson (1990). Similar problems have been studied under the name of common trends (see Stock and Watson (1988)).

The purpose of this paper is to present some new results on maximum likelihood estimators and likelihood ratio tests for cointegration in Gaussian vector autoregressive models which allow for a constant term and seasonal dummies. This brings in the technique of reduced rank regression (see Anderson (1951), Velu, Reinsel, and Wichern (1986), Ahn and Reinsel (1990), and Reinsel and Ahn (1990)), as well as the notion of canonical analysis (Box and Tiao (1981), Velu, Wichern, and Reinsel (1987), Peña and Box (1987), and the very elegant paper by Tso (1981)). In Johansen (1988b) the likelihood based theory was presented for such a model without constant term and seasonal dummies, but it turns out that the constant plays a crucial role for the interpretation of the model, as well as for the statistical and the probabilistic analysis. A detailed statistical analysis illustrating the techniques by data on money demand from Denmark and Finland is given in Johansen and Juselius (1990), and the present paper deals mainly with the underlying probability theory that allows one to make asymptotic inference.


The structure of the paper is the following: Section 2 describes the cointegration model and the tests for cointegration rank. The asymptotic distribution of the likelihood ratio test statistic for the hypothesis of r cointegration vectors is given. In Section 3 it is shown that the cointegration model with linear restrictions on the cointegrating relations and the adjustment coefficients allows explicit estimation. The likelihood ratio test statistic of this hypothesis is given. Section 4 gives a simple proof of Granger's representation theorem, which clarifies the role of the constant term and gives a condition for the process to be integrated of order 1. In Section 5 the asymptotic distribution of the maximum likelihood estimator for the cointegrating relations is given, together with an estimate of its "variance" to be used in constructing Wald tests. The presence of the trend gives rise to some new limit distributions. Section 6 contains a brief discussion of the relation of the present work to the results of Phillips, Stock, and Watson and others, and the appendices contain technical details as well as results for inference concerning smooth hypotheses on the cointegrating relations.

2. THE STATISTICAL ANALYSIS OF THE VAR MODEL FOR COINTEGRATION AND THE TEST FOR COINTEGRATION RANK

Consider a general VAR model with Gaussian errors written in the error correction form

(2.1)   ΔX_t = Σ_{i=1}^{k−1} Γ_i ΔX_{t−i} + ΠX_{t−k} + ΦD_t + μ + ε_t   (t = 1, ..., T),

where D_t are seasonal dummies orthogonal to the constant term. Further, ε_t (t = 1, ..., T) are independent p-dimensional Gaussian variables with mean zero and variance matrix Λ. The first k data points X_{1−k}, ..., X_0 are considered fixed, and the likelihood function is calculated for given values of these. The parameters Γ_1, ..., Γ_{k−1}, Π, Φ, μ, and Λ are assumed to vary without restrictions, and we formulate the hypotheses of interest as restrictions on Π.

In this section we analyze the likelihood function conditional on the initial values. There are two reasons for this. Firstly, we shall discuss nonstationary processes, for which only the conditional likelihood can be defined, and secondly, the conditional likelihood function gives the usual least squares regression estimators in the unrestricted model, and hence gives tractable estimators. When it comes to discussing the properties of the process X_t, as will be necessary for the asymptotic analysis, it is convenient (see Theorem 4.1) to consider some linear combinations of X_t, as well as ΔX_t, as stationary processes under the conditions stated there. Thus the likelihood function described in this section is the conditional likelihood function for the observations X_1, ..., X_T from the process, described in detail in Theorem 4.1, conditional on the initial values, that is, conditional on the first k observations.

Model (2.1) is denoted by H1, and we formulate the hypothesis of (at most) r cointegration vectors as

(2.2)   H2: Π = αβ',

where α and β are p × r matrices. The matrices α and β are not identified in model H2, since for any choice of an r × r matrix ξ of full rank, the matrices αξ and β(ξ')⁻¹ imply the same distribution. What can be determined by the model is the space spanned by β, the cointegration space sp(β), which is the row space of Π, and the space spanned by α, the adjustment space sp(α), which is the column space of Π. Here α_⊥ denotes a p × (p − r) matrix of full rank consisting of vectors orthogonal to the vectors in α. When we compare models with different numbers of cointegration vectors we use the notation H2(r).

It is proved in Theorem 4.1 that under certain conditions on the parameters the process given by (2.1) is integrated of order 1. It turns out that the role of the constant term is crucial for the statistical analysis as well as for the probabilistic analysis. The constant term μ can be decomposed into two parts,

μ = α(α'α)⁻¹α'μ + α_⊥(α_⊥'α_⊥)⁻¹α_⊥'μ,

of which the first contributes to the intercept in the cointegrating relations (see (4.9)), and the second determines a linear trend. The presence of the linear trend changes the analysis, and it is therefore convenient to define a model H2*, where the * indicates that apart from the restriction imposed under H2 we also impose the restriction μ = αβ_0, where the (r × 1) vector −β_0 has the interpretation of an intercept in the cointegration relations; in this model the linear trend is absent.

The purpose of the analysis of this paper is to conduct inference on the number of cointegrating relations as well as on the structure of these relations, without imposing a priori structural relations. This is accomplished by fitting the general VAR model (2.1), which is used to describe the variation of the data, and then formulating questions concerning structural economic relations as hypotheses on the parameters of the VAR model. These hypotheses are tested using likelihood ratio statistics, and allow the researcher to check interesting economic hypotheses against the data.

In order to facilitate the presentation of the main result of this section we first introduce some notation. Let Z_{0t} = ΔX_t, Z_{1t} = (ΔX'_{t−1}, ..., ΔX'_{t−k+1}, D_t', 1)', and Z_{kt} = X_{t−k}, and let Γ consist of the parameters (Γ_1, ..., Γ_{k−1}, Φ, μ). Then the model becomes

(2.3)   Z_{0t} = ΓZ_{1t} + αβ'Z_{kt} + ε_t.

With this notation define the product moment matrices

(2.4)   M_{ij} = T⁻¹ Σ_{t=1}^{T} Z_{it}Z_{jt}'   (i, j = 0, 1, k).

The residuals are defined as

R_{it} = Z_{it} − M_{i1}M_{11}⁻¹Z_{1t}   (i = 0, k),

together with the corrected product moment matrices

(2.5)   S_{ij} = M_{ij} − M_{i1}M_{11}⁻¹M_{1j}   (i, j = 0, k).

The estimate of Γ for fixed values of α and β is

(2.6)   Γ̂(α, β) = (M_{01} − αβ'M_{k1})M_{11}⁻¹.

Thus the residuals are found by regressing ΔX_t and X_{t−k} on the lagged differences, the dummies, and the constant. This gives the likelihood function concentrated with respect to the parameters Γ_1, ..., Γ_{k−1}, Φ, and μ:

(2.7)   L_T^{−2/T}(α, β, Λ) ∝ |Λ| exp{T⁻¹ Σ_{t=1}^{T} (R_{0t} − αβ'R_{kt})'Λ⁻¹(R_{0t} − αβ'R_{kt})}.

This function is easily minimized for fixed β to give

(2.8)   α̂(β) = S_{0k}β(β'S_{kk}β)⁻¹,

and Λ is estimated by

(2.9)   Λ̂(β) = S_{00} − S_{0k}β(β'S_{kk}β)⁻¹β'S_{k0},

with residual sum of squares

(2.10)   L^{−2/T}(β) = |S_{00}| |β'(S_{kk} − S_{k0}S_{00}⁻¹S_{0k})β| / |β'S_{kk}β|.

This again is minimized by the choice β̂ = (v̂_1, ..., v̂_r), where V = (v̂_1, ..., v̂_p) are the eigenvectors of the equation

(2.11)   |λS_{kk} − S_{k0}S_{00}⁻¹S_{0k}| = 0,

normed by V'S_{kk}V = I and ordered by λ̂_1 > ... > λ̂_p > 0. The maximized likelihood function is found from

(2.12)   L_max^{−2/T}(r) = |S_{00}| Π_{i=1}^{r} (1 − λ̂_i).

This procedure is given in Johansen (1988b) for the model without constant term and seasonal dummies, and consists of well known multivariate techniques from the theory of partial canonical correlations and reduced rank regression (see Anderson (1951) and Tso (1981)).

To give an intuition for the above analysis, consider the estimate of Π in the unrestricted VAR model given by Π̂ = S_{0k}S_{kk}⁻¹. Since the hypothesis of cointegration is the hypothesis of reduced rank of Π, it is intuitively reasonable to calculate the eigenvalues of Π̂ and check whether they are close to zero. Another possibility is to calculate singular values, i.e., eigenvalues of Π̂'Π̂, since they are real and positive. This is the approach of Fountis and Dickey (1989). It is interesting to see that the maximum likelihood estimation involves solving (2.11), which amounts to calculating the singular values of S_{00}^{−1/2}S_{0k}S_{kk}^{−1/2}, a normalized version of the intuitively reasonable solution.

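The two regression steps and the eigenvalue problem (2.11) are straightforward to implement. Below is a minimal NumPy sketch (our own illustration, not from the paper; the lag length k, the function name, and the simulated data in the test are assumptions), returning the ordered eigenvalues λ̂_1 ≥ ... ≥ λ̂_p and eigenvectors normed by V'S_{kk}V = I:

```python
import numpy as np

def johansen_eigen(X, k=2):
    """Reduced rank regression step: partial out lagged differences and a
    constant, form S00, S0k, Skk, and solve |lam*Skk - Sk0 S00^{-1} S0k| = 0.
    Returns eigenvalues (descending) and V normed so V' Skk V = I."""
    X = np.asarray(X, dtype=float)
    dX = np.diff(X, axis=0)                      # Delta X_t
    Z0 = dX[k-1:]                                # Delta X_t, aligned
    lags = [dX[k-1-i:-i] for i in range(1, k)]   # Delta X_{t-i}, i = 1..k-1
    T = Z0.shape[0]
    Z1 = np.hstack(lags + [np.ones((T, 1))])     # lagged differences + constant
    Zk = X[:-k]                                  # levels X_{t-k}

    def resid(Y, W):
        # residuals of Y after OLS regression on W
        coef, *_ = np.linalg.lstsq(W, Y, rcond=None)
        return Y - W @ coef

    R0, Rk = resid(Z0, Z1), resid(Zk, Z1)
    S00, S0k = R0.T @ R0 / T, R0.T @ Rk / T
    Skk = Rk.T @ Rk / T
    # generalized symmetric eigenproblem via Cholesky: Skk = L L'
    L = np.linalg.cholesky(Skk)
    Li = np.linalg.inv(L)
    lam, W_ = np.linalg.eigh(Li @ S0k.T @ np.linalg.solve(S00, S0k) @ Li.T)
    order = np.argsort(lam)[::-1]
    return lam[order], (Li.T @ W_)[:, order]
```

For rank r, the estimate β̂ consists of the first r columns of the returned V.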
For r = p we have H2(p) = H1, so that the ratio L_max(r)/L_max(p) is the likelihood ratio test statistic of the hypothesis H2(r) in H1, and L_max(r)/L_max(r + 1) is the likelihood ratio statistic for H2(r) in H2(r + 1). The relation (2.12) thus gives the maximized likelihood function for all values of r. The normalization of the problem given by the likelihood method guarantees a simple asymptotic distribution of the likelihood ratio tests.

THEOREM 2.1: The likelihood ratio test statistic for the hypothesis H2: Π = αβ' versus H1 is given by

(2.13)   −2 ln(Q; H2 | H1) = −T Σ_{i=r+1}^{p} ln(1 − λ̂_i),

whereas the likelihood ratio test statistic of H2(r) versus H2(r + 1) is given by

(2.14)   −2 ln(Q; r | r + 1) = −T ln(1 − λ̂_{r+1}).

If in fact α_⊥'μ ≠ 0, then the asymptotic distributions of −2 ln(Q; H2 | H1) and −2 ln(Q; r | r + 1) are given as the trace and the maximal eigenvalue, respectively, of the matrix

(2.15)   ∫(dB)F' [∫FF' du]⁻¹ ∫F(dB)',

which can be expressed in terms of a (p − r)-dimensional Brownian motion B with i.i.d. components, with F' = (F_1', F_2'),

(2.16)   F_{1i}(t) = B_i(t) − ∫B_i(u) du   (i = 1, ..., p − r − 1),

and

(2.17)   F_2(t) = t − 1/2.

If α_⊥'μ = 0, then the statistics have limit distributions given by (2.15) with F(t) = B(t) − ∫B(u) du.

The main result that is given here and proved in Appendix B is that the test statistic has a limit distribution which only depends on the dimension of the problem, p − r, and on whether α_⊥'μ = 0 or not, and not on nuisance parameters. The distributions of the test statistics are nonstandard and have to be tabulated by simulation. They are natural generalizations of the Dickey-Fuller or unit root distributions. The relation of this result to that of Stock and Watson (1988) is discussed in Section 6.

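The nonstandard limit distribution in Theorem 2.1 is tabulated by simulation, approximating B by a scaled Gaussian random walk and the integrals by sums. A rough sketch of such a tabulation for the case α_⊥'μ = 0, i.e. F(t) = B(t) − ∫B(u) du (our own illustration; the step count T, the number of replications, and the function name are assumptions):

```python
import numpy as np

def sim_trace_dist(dim, T=400, reps=2000, seed=0):
    """Draws from tr{ int(dB)F' [int FF'du]^{-1} int F(dB)' } with
    F(t) = B(t) - int B(u)du (the case alpha_perp' mu = 0), approximating
    the Brownian motion B by a scaled Gaussian random walk of T steps."""
    rng = np.random.default_rng(seed)
    out = np.empty(reps)
    for j in range(reps):
        dB = rng.standard_normal((T, dim)) / np.sqrt(T)
        B = np.cumsum(dB, axis=0)
        F = (B - B.mean(axis=0))[:-1]     # demeaned path, aligned before dB
        SdBF = dB[1:].T @ F               # approximates int (dB) F'
        SFF = F.T @ F / T                 # approximates int F F' du
        out[j] = np.trace(SdBF @ np.linalg.solve(SFF, SdBF.T))
    return out
```

Empirical quantiles of the output approximate the critical values tabulated by simulation in Johansen and Juselius (1990) and Osterwald-Lenum (1992).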
Here, and in the following, the integrals are all taken on the unit interval. Note that integrals of the form ∫FF' du are ordinary Riemann integrals of continuous functions, and the result is a matrix of stochastic variables. The integral ∫F(dB)', however, is a matrix of stochastic integrals, defined as L2 limits of the corresponding Riemann sums.

The distributions derived in this section have been tabulated by simulation in Johansen and Juselius (1990) (p − r = 1, ..., 5) in connection with an application of the methods to money demand in Denmark and Finland. The tables have been extended (p − r = 1, ..., 10) by Osterwald-Lenum (1992).

This section is concluded by pointing out how one can analyze the model H2*, i.e., test that the trend is absent under the assumption that there are r cointegrating relations. First note that if μ = αβ_0, then

αβ'X_{t−k} + μ = αβ'X_{t−k} + αβ_0 = αβ*'X*_{t−k},

for β* = (β', β_0)' and X*_{t−k} = (X'_{t−k}, 1)'. In terms of these new variables the model is

ΔX_t = Σ_{i=1}^{k−1} Γ_i ΔX_{t−i} + αβ*'X*_{t−k} + ΦD_t + ε_t   (t = 1, ..., T).

The analysis is now performed as above by defining Z*_{0t} = ΔX_t, Z*_{1t} = (ΔX'_{t−1}, ..., ΔX'_{t−k+1}, D_t')', and Z*_{kt} = X*_{t−k}, as well as the corresponding moment matrices M*_{ij} and S*_{ij} (see (2.4) and (2.5)).

THEOREM 2.2: Under the hypothesis H2*: Π = αβ' and α_⊥'μ = 0, the likelihood ratio statistics −2 ln(Q; H2*(r) | H1) and −2 ln(Q; H2*(r) | H2*(r + 1)) are distributed as the trace and maximal eigenvalue, respectively, of the matrix in (2.15), with F = (B', 1)', where the Brownian motion is defined as in Theorem 2.1.

Finally we test the hypothesis H2* in H2 by a likelihood ratio test.

THEOREM 2.3: The asymptotic distribution of the likelihood ratio test statistic −2 ln(Q; H2* | H2) for the hypothesis H2* given the hypothesis H2, when there are r cointegration vectors, is χ² with p − r degrees of freedom.

3. THE TEST OF HYPOTHESES ON THE COINTEGRATING RELATIONS AND THE ADJUSTMENT COEFFICIENTS

The purpose of fitting the VAR model and determining the cointegrating rank is that one gets the opportunity to formulate and test interesting hypotheses about the cointegrating relations and their adjustment coefficients.

Since the parameters α and β are not identified, we can only test restrictions that cannot be satisfied by normalization. We consider here in detail a simple but important model for linear restrictions on the cointegrating space and the adjustment space that allows explicit maximum likelihood estimation:

(3.1)   H3: β = Hφ and α = Aψ,

where H is p × s and A is p × m, so that φ is s × r and ψ is m × r. Note that H3 is a submodel of H2. There are of course many other possible restrictions on the cointegrating relations, but the ones chosen here are simple to analyze and have a wide variety of applications (see Johansen and Juselius (1990), Hoffman and Rasche (1989), Kunst and Neusser (1990), and Mosconi and Giannini (1992)). Another class of hypotheses of the form β = (Hφ, ψ) can be solved with similar methods (see Johansen and Juselius (1991)). For more general hypotheses on β of the form h(β) = 0 one can of course not prove the existence and uniqueness of the maximum likelihood estimator, but such hypotheses can be tested by likelihood ratio or Wald tests using the asymptotic distribution of β̂ derived in Appendix C. The likelihood ratio test of the restrictions H3 in the model H2 will be discussed below.

Together with A (p × m) we consider a p × (p − m) matrix B = A_⊥ of full rank, such that B'A = 0. Under hypothesis H3 we transform the matrices S_{ij} some more, and introduce the notation

S_{hh.b} = H'S_{kk}H − H'S_{k0}B(B'S_{00}B)⁻¹B'S_{0k}H,

and similarly S_{ha.b}, S_{ah.b}, and

S_{aa.b} = A'S_{00}A − A'S_{00}B(B'S_{00}B)⁻¹B'S_{00}A.

THEOREM 3.1: Under hypothesis H3: β = Hφ and α = Aψ, the maximum likelihood estimators are found as follows: First solve

(3.2)   |λS_{hh.b} − S_{ha.b}S_{aa.b}⁻¹S_{ah.b}| = 0,

for eigenvalues λ̂_1 > ... > λ̂_s > 0 and eigenvectors v̂ = (v̂_1, ..., v̂_s), normed by v̂'S_{hh.b}v̂ = I. Then

(3.4)   β̂ = H(v̂_1, ..., v̂_r),

and the estimates of ψ (and hence α̂ = Aψ̂) and of Λ are found by regression of Z_{0t} on β̂'Z_{kt} and Z_{1t}.

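In the special case A = I (α unrestricted, so m = p and B = A_⊥ is empty), the .b corrections vanish and the restricted problem reduces to |λH'S_{kk}H − H'S_{k0}S_{00}⁻¹S_{0k}H| = 0, with β̂ = Hφ̂. A small sketch of this special case (our own illustration; the general case with B nonempty replaces each product moment by its .b-corrected version):

```python
import numpy as np

def restricted_eigen(S00, S0k, Skk, H):
    """beta = H phi with alpha unrestricted (A = I, B empty):
    solve |lam * H'Skk H - H'Sk0 S00^{-1} S0k H| = 0.
    Returns descending eigenvalues and phi; beta_hat = H @ phi[:, :r]."""
    A = H.T @ Skk @ H
    M = H.T @ S0k.T @ np.linalg.solve(S00, S0k) @ H
    L = np.linalg.cholesky(A)                 # A = L L'
    Li = np.linalg.inv(L)
    lam, W = np.linalg.eigh(Li @ M @ Li.T)    # symmetric reduced problem
    order = np.argsort(lam)[::-1]
    return lam[order], (Li.T @ W)[:, order]
```

With H equal to the identity, the problem coincides with the unrestricted eigenvalue problem (2.11), which gives a simple consistency check.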
The estimate of Γ is found as in (2.6), and the maximized likelihood function is

(3.8)   L_max^{−2/T}(H3) = |S_{00}| Π_{i=1}^{r} (1 − λ̂_i(H3)).

THEOREM 3.2: The likelihood ratio test statistic of the restriction H3: β = Hφ and α = Aψ versus H2 is given by

(3.9)   −2 ln(Q; H3 | H2) = T Σ_{i=1}^{r} ln{(1 − λ̂_i(H3))/(1 − λ̂_i(H2))},

which is asymptotically χ² distributed with (p − s)r + (p − m)r degrees of freedom.

The proofs of these results are given in Appendices B and C.

4. GRANGER'S REPRESENTATION THEOREM

When we want to investigate the distributional properties of the estimators and the test statistics, we have to make more assumptions about the process in order to rule out various types of nonstationarity. The basic assumption is that the characteristic polynomial derived from model (2.1),

(4.1)   Π(z) = (1 − z)I − Πz^k − Σ_{i=1}^{k−1} Γ_i(1 − z)z^i,

satisfies the condition that |Π(z)| = 0 implies either |z| > 1 or z = 1. This guarantees that the nonstationarity of X_t can be removed by differencing. If Π has full rank it is well known that under the above condition the equations (2.1) determine X_t as a stationary process, provided the initial values are given their invariant distribution. If we start with a doubly infinite sequence {ε_t} we can represent the initial values, and hence the whole process, as an infinite linear combination of the ε_t's (see Anderson (1971, p. 170)). If Π has reduced rank we want to prove that some linear combinations of X_t are nonstationary, whereas others have stationary distributions for a suitable choice of initial distribution. In addition we provide an explicit condition for the process to be integrated of order 1 (see (4.4) below), and we clarify the role of the constant term.

The model defined by (2.1) is rewritten as

(4.2)   Π(L)X_t = −ΠX_{t−k} + Ψ(L)ΔX_t = ε_t + μ + ΦD_t   (t = 1, ..., T),

where we have introduced Ψ(L) = (Π(L) − Π(1)L^k)/(1 − L). Note that −Π = Π(1) is the value of Π(z) for z = 1, and that Ψ(z) = I − Σ_{i=1}^{k−1} Γ_i z^i, so that Ψ = Ψ(1) = I − Σ_{i=1}^{k−1} Γ_i.

The result that we want to prove is the fundamental result about error correction models and their structure. The basic result is due to Granger (1983) (see also Engle and Granger (1987) or Johansen (1988a)). We give a very simple proof here.

THEOREM 4.1 (Granger's Representation Theorem): Let the process X_t satisfy the equation (4.2) for t = 1, 2, ..., let

(4.3)   Π = αβ'

for α and β of dimension p × r and rank r, where β_⊥ is a p × (p − r) matrix of full rank such that β'β_⊥ = 0, and let

(4.4)   α_⊥'Ψβ_⊥ have full rank p − r.

We define

(4.5)   C = β_⊥(α_⊥'Ψβ_⊥)⁻¹α_⊥'.

Then ΔX_t and β'X_t can be given initial distributions such that

(4.6)   ΔX_t is stationary,
(4.7)   β'X_t is stationary, and
(4.8)   X_t is nonstationary, with linear trend τt, where τ = Cμ.

If the initial distributions are expressed in terms of the doubly infinite sequence {ε_t}, then ΔX_t has the representation ΔX_t = C(L)(ε_t + μ + ΦD_t) with C(1) = C, and the process X_t has the representation

(4.11)   X_t = X_0 + C Σ_{i=1}^{t} ε_i + τt + C_1(L)Φ Σ_{i=1}^{t} D_i + S_t − S_0,

where S_t = C_1(L)ε_t, C_1(L) = (C(L) − C(1))/(1 − L), so that C(L) = C(1) + (1 − L)C_1(L), and β'X_0 = β'S_0. If α_⊥'μ = 0, then τ = 0 and the linear trend disappears.

PROOF: Throughout we use the notation ā = a(a'a)⁻¹ for any matrix a of full rank; note that ᾱ'α = I and that αᾱ' = α(α'α)⁻¹α' is just the projection onto the space spanned by the columns of α. If we multiply the equation (4.2) by ᾱ' and α_⊥', respectively, we get the equations

−β'X_{t−k} + ᾱ'Ψ(L)ΔX_t = ᾱ'(ε_t + μ + ΦD_t),
α_⊥'Ψ(L)ΔX_t = α_⊥'(ε_t + μ + ΦD_t).

The problem is of course that since Π is singular the system is not invertible, and we therefore introduce the new variables Z_t = β'X_t and Y_t = β_⊥'ΔX_t. The process ΔX_t can be recovered from Z_t and Y_t:

ΔX_t = β̄ΔZ_t + β̄_⊥Y_t.

This gives the equations for Z_t and Y_t:

(4.12)   −Z_{t−k} + ᾱ'Ψ(L)(β̄ΔZ_t + β̄_⊥Y_t) = ᾱ'(ε_t + μ + ΦD_t),
(4.13)   α_⊥'Ψ(L)(β̄ΔZ_t + β̄_⊥Y_t) = α_⊥'(ε_t + μ + ΦD_t).

The idea of the proof is now to show that the equations (4.12) and (4.13) for the processes Z_t and Y_t constitute an invertible autoregressive system, and hence that Z_t and Y_t can be given such initial distributions that they become stationary. Write (4.12) and (4.13) compactly as A(L)(Z_t', Y_t')' = (ᾱ, α_⊥)'(ε_t + μ + ΦD_t). For z ≠ 1 one finds

|A(z)| = c|Π(z)|(1 − z)^{−(p−r)},

for a nonzero constant c depending on (ᾱ, α_⊥) and (β̄, β_⊥), so that, by the assumption about Π(z), no root with |z| ≤ 1, z ≠ 1, exists. For z = 1 the determinant |A(1)| is a nonzero multiple of |β'β| |α_⊥'Ψβ_⊥|, which is nonzero by assumptions (4.3) and (4.4); hence z = 1 is not a root either. This shows that all roots of |A(z)| = 0 are outside the unit disk, and it follows that the system defined by (4.12) and (4.13) is invertible. The expectations of Z_t and Y_t can be found from A(1)⁻¹(ᾱ, α_⊥)'μ. If the initial distributions are expressed in terms of a doubly infinite sequence {ε_t}, then the process (Z_t', Y_t')' has the representation

(Z_t', Y_t')' = A(L)⁻¹(ᾱ, α_⊥)'(ε_t + μ + ΦD_t).

Hence ΔX_t = β̄ΔZ_t + β̄_⊥Y_t is stationary, apart from the contribution from the centered dummies, and we get a representation of ΔX_t:

ΔX_t = C(L)(ε_t + μ + ΦD_t),   C(L) = (β̄(1 − L), β̄_⊥)A(L)⁻¹(ᾱ, α_⊥)'.

For L = 1 we get C(1) = β_⊥(α_⊥'Ψβ_⊥)⁻¹α_⊥' = C. This proves (4.6) and (4.7). By summation of ΔX_t we find that X_t has the representation (4.11), which, apart from terms involving the seasonal dummies, contains the nonstationary component CΣ_{i=1}^{t}ε_i together with the linear trend τt = Cμt. This proves (4.8) and completes the proof of Theorem 4.1.

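Theorem 4.1 is easy to illustrate numerically: in a bivariate system with Π = αβ' and α_⊥'Ψβ_⊥ ≠ 0 (here k = 1, so Ψ = I), the levels are driven by the accumulated common shocks while β'X_t stays stationary. A minimal simulation sketch (our own illustration; all parameter values are assumptions):

```python
import numpy as np

def simulate_ecm(T=2000, seed=0):
    """Simulate Delta X_t = alpha (beta' X_{t-1}) + eps_t, p = 2, k = 1.
    Here beta'alpha = -0.4, so z_t = beta'X_t is AR(1) with coefficient 0.6;
    alpha_perp'beta_perp = 0.4 != 0, so condition (4.4) holds and X_t is I(1)."""
    rng = np.random.default_rng(seed)
    alpha = np.array([-0.3, 0.1])
    beta = np.array([1.0, -1.0])
    X = np.zeros((T, 2))
    for t in range(1, T):
        X[t] = X[t-1] + alpha * (beta @ X[t-1]) + rng.standard_normal(2)
    return X, beta
```

The moving average impact matrix here is C = β_⊥(α_⊥'β_⊥)⁻¹α_⊥': the shocks that cancel in β'X_t accumulate in the direction β_⊥, so the variance of the levels dwarfs that of the cointegrating combination.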
One can also make the seasonal dummies stationary by initial random assignment of a season, but we shall simply call such a process stationary. Strictly speaking, the processes ΔX_t and β'X_{t−k} equal a stationary process plus a term involving the seasonal dummies. The seasonal dummies are so constructed that they remain bounded even after summation over t, and hence do not contribute to the linear trend.

Note that μ enters the linear trend only through α_⊥'μ, and that the linear trend τ is contained in the span of β_⊥; hence it cancels if we consider the components β'X_t. Note also that the condition (4.4) is what is needed for the process to be integrated of order 1. If this matrix has reduced rank, the process X_t will be integrated of higher order than 1. Thus a theory of I(2) processes in the context of a VAR model can be based on the reduced rank of the first two matrices in the expansion of Π(z) at the point z = 1. The asymptotic analysis of such a model becomes somewhat more complicated, because there are more directions that require special normalization. The mathematical and statistical theory for such processes has been worked out in Johansen (1988a, 1990, 1991).

Relations (4.3) and (4.4) display an interesting symmetry between the singularity of the "impact" matrix Π of the autoregressive representation and the singularity of the "impact" matrix C of the moving average representation: the null space of C' is the range space of Π, and the range space of C' is the null space of Π, and vice versa. It is this symmetry that allows the results for I(1) processes to be relatively simple. It is of course easy at this point to give a representation of the process for the model with a linear term added to (2.1); such a term gives rise to a quadratic trend in general.

5. ASYMPTOTIC PROPERTIES OF THE ESTIMATORS UNDER THE ASSUMPTION OF COINTEGRATION AND LINEAR RESTRICTIONS ON α AND β

The asymptotic properties of the estimators are here given under the hypothesis H3, where restrictions are imposed on both α and β. The results are corollaries of Theorem C.1, which gives the asymptotic distribution of the estimator under a smooth hypothesis on the parameters. An important result is that inference concerning β can be conducted as if the other parameters were fixed. Thus inference concerning α and the short term dynamics represented by Γ_1, ..., Γ_{k−1} follows the usual results for stationary processes. We therefore concentrate on the results for β. The results will be expressed in the natural coordinate system in sp(H). The first result concerns β̂ normalized by a p × r matrix c chosen such that c'β has full rank: we define β_c = β(c'β)⁻¹ and β̂_c = β̂(c'β̂)⁻¹, and we let τ_H = P_H τ denote the projection of τ onto sp(H).

THEOREM 5.1: Suppose hypothesis H3: α = Aψ and β = Hφ is satisfied. Choose γ_H = (γ_{H1}, ..., γ_{H,s−r−1}) in sp(H) such that (β, γ_H, τ_H) consists of s mutually orthogonal vectors spanning sp(H); note that τ_H = P_H τ is automatically orthogonal to β. Similarly, let γ (p × (p − r − 1)) be chosen orthogonal to (β, τ) such that (β, γ, τ) span R^p. Define

(5.1)   V_α = (α'Λ⁻¹α)⁻¹α'Λ⁻¹W,

which is a Brownian motion independent of G' = (G_1', G_2'), where

(5.3)   G_1(t) = γ'C(W(t) − ∫W(u) du)   (t ∈ [0, 1]),

(5.4)   G_2(t) = t − 1/2   (t ∈ [0, 1]),

and let G_{1.2} = G_1 − (∫G_1G_2 du)(∫G_2G_2 du)⁻¹G_2, that is, the Brownian motion corrected for level and trend. If τ_H ≠ 0, the limit distribution of T(β̂_c − β_c) is mixed Gaussian: conditionally on G it is Gaussian with mean zero, with conditional variance built from (∫G_{1.2}G_{1.2}' du)⁻¹ in the directions γ_H, from (∫G_2G_2 du)⁻¹ in the direction τ_H, and from the variance (α'Λ⁻¹α)⁻¹ of V_α, all premultiplied by (I − β_c c'). If τ_H = 0, the same statement holds with γ_H of dimension p × (s − r), chosen such that (β, γ_H) span sp(H), and with G_{1.2} replaced by G_1. The asymptotic conditional variance is consistently estimated from the remaining eigenvectors V̂ = (v̂_{r+1}, ..., v̂_s) of (3.2) together with (α̂'Λ̂⁻¹α̂)⁻¹; see Corollary 5.3 for the explicit expression in the case r = 1.

The proof is given in Appendix C. Since the constant term is included in the regressors, the process X_t is corrected for its mean in the preliminary regressions; this is reflected in the asymptotics by subtracting ∫W(u) du. Since the process contains a linear trend, only p − r − 1 components, γ'CW, of the process W enter the result, while the trend direction contributes the component G_2. The stochastic integrals are all taken on the unit interval. The result shows that if the β's are normalized by c, one can find the limiting distribution of any of the coefficients, and hence of any smooth function of the coefficients.

The quantity appearing in the conditional distribution is what we call the limiting conditional variance. Note that the limiting distribution for fixed G is Gaussian with mean zero; thus the limiting distribution of T(β̂_c − β_c) is a mixture of Gaussian distributions. See Jeganathan (1988) for a general theory of locally asymptotically mixed normal models. Note, however, that if we were interested in the linear combination Tτ'(β̂_c − β_c), then the limiting distribution degenerates to zero, and a different normalization, by T^{3/2}, is needed in this case.

Without proof we give the corresponding result for model H3*, i.e., when β = Hφ, α = Aψ, and μ = αβ_0. The normalization by the p × r matrix c is now done as follows: β* = (β', β_0)' is normalized by c* = (c', 0)', so that β*_{c*} = β*(c'β)⁻¹.

THEOREM 5.2: Suppose hypothesis H3*: β = Hφ, α = Aψ, and μ = αβ_0 is satisfied. Then the limit distribution of T(β̂*_{c*} − β*_{c*}) is of the same mixed Gaussian form as in Theorem 5.1, with G replaced by G*' = ((γ'CW)', 1); since the trend is absent, the trend component G_2 = t − 1/2 is replaced by the constant function 1. A consistent estimator of the asymptotic conditional variance is obtained as before, with S_{ij} replaced by S*_{ij}.

The estimator of the constant term β_0 behaves differently from the estimators of the remaining coefficients of β*.

As an example of an application of the results given above, consider the following simple situation, where r = 1 and where we want to test a linear constraint K'β = 0 on the cointegration relation β' = (β_1, ..., β_p). We formulate the result as a corollary.

COROLLARY 5.3: If only 1 cointegration vector β is present (r = 1), and if we want to test the hypothesis K'β = 0, then the test statistic

(5.11)   T(K'β̂)² {(λ̂_1⁻¹ − 1)K'V̂V̂'K}⁻¹

is asymptotically χ² with 1 degree of freedom. Here λ̂_1 is the maximal eigenvalue and β̂ the corresponding eigenvector of the equation

|λS_{kk} − S_{k0}S_{00}⁻¹S_{0k}| = 0,

and the remaining eigenvectors form V̂ = (v̂_2, ..., v̂_p). A similar result holds for the model with no trend.

Thus, if there is only one cointegration vector β, one can think of the matrix (λ̂_1⁻¹ − 1)V̂V̂'/T as giving an estimate of the asymptotic "variance" of β̂. After having picked out the eigenvector that estimates the cointegrating relation, one can apply the remaining eigenvectors to estimate the "variance" of the coefficients of the cointegrating relation. Note that the test statistic (5.11) is very easy to calculate once the basic eigenvalue problem (2.11) has been solved.

If we want to derive a confidence interval for the parameter ρ = β_2/β_1, define K' = (ρ_0, −1, 0, ..., 0), such that K'β = ρ_0β_1 − β_2, which is zero if ρ = ρ_0. Corollary 5.3 yields the result that

w(ρ_0) = T(ρ_0β̂_1 − β̂_2)² {(λ̂_1⁻¹ − 1) Σ_{i=2}^{p} (ρ_0v̂_{1i} − v̂_{2i})²}⁻¹

is asymptotically χ²(1) if ρ = ρ_0, where v̂_{ji} denotes the jth component of v̂_i. Then the set {ρ | w(ρ) ≤ χ²_{1−ε}(1)} will be an asymptotic 1 − ε confidence set for the parameter ρ, where χ²_{1−ε}(1) is the 1 − ε quantile of the χ²(1) distribution. A simpler interval can be obtained by inserting the estimate ρ̂ = β̂_2/β̂_1 for ρ_0 in the denominator, which gives the interval

(5.12)   ρ̂ ± {χ²_{1−ε}(1)(λ̂_1⁻¹ − 1) Σ_{i=2}^{p} (ρ̂v̂_{1i} − v̂_{2i})² / (Tβ̂_1²)}^{1/2}.

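Corollary 5.3 makes Wald inference on β nearly free once the eigenvalue problem is solved. A compact sketch for a bivariate system with k = 1 (our own illustration; the function names and the data generating process with true β = (1, −1)', hence ρ = β_2/β_1 = −1, are assumptions):

```python
import numpy as np

def eig_problem(X):
    """Eigenvalue problem (2.11) for k = 1: the constant is partialled
    out by demeaning Delta X_t and X_{t-1}. Returns T, eigenvalues
    (descending), and V normed so V' Skk V = I."""
    dX = np.diff(X, axis=0)
    Z0 = dX - dX.mean(axis=0)
    Zk = X[:-1] - X[:-1].mean(axis=0)
    T = Z0.shape[0]
    S00, S0k, Skk = Z0.T @ Z0 / T, Z0.T @ Zk / T, Zk.T @ Zk / T
    L = np.linalg.cholesky(Skk)
    Li = np.linalg.inv(L)
    lam, W = np.linalg.eigh(Li @ S0k.T @ np.linalg.solve(S00, S0k) @ Li.T)
    o = np.argsort(lam)[::-1]
    return T, lam[o], (Li.T @ W)[:, o]

def wald_Kbeta(T, lam, V, K):
    """Statistic (5.11): T (K'bhat)^2 / {(1/lam1 - 1) K' Vrest Vrest' K},
    with bhat = V[:, 0] and Vrest the remaining eigenvectors."""
    bhat, Vrest = V[:, 0], V[:, 1:]
    return T * (K @ bhat) ** 2 / ((1.0 / lam[0] - 1.0) * (K @ Vrest @ Vrest.T @ K))
```

With K' = (ρ_0, −1), the statistic tests ρ = ρ_0, and the set of ρ_0 with w(ρ_0) below the χ²(1) quantile is an asymptotic confidence set for ρ, as in the simpler interval (5.12).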
6. DISCUSSION

This paper addresses three issues: first the problem of finding the number of cointegrating relations in nonstationary data, next the problem of estimating the cointegrating relations, and finally that of testing interesting economic hypotheses about their structure. The approach is model based in the sense that we assume that a VAR model describes the data adequately, but no economic structure is imposed on the model in the initial analysis. The VAR model is analyzed using likelihood methods in order to answer the above problems. In view of this summary, let us now consider some of the methods that have been proposed before.

We first consider the estimation of β. The original method of Engle and Granger for estimating the long-run parameters consisted of regressing some of the variables on the others. This gives consistent estimators, as shown by Stock (1987), but the asymptotic distribution theory is complicated. Very briefly one can say that the simple regression estimator has a limiting distribution that is composed of a mixed Gaussian distribution, a unit root distribution, and a constant, which makes inference on structural hypotheses difficult (see Phillips (1990) for a discussion of these problems). One can get rid of the constant by including the lags, and one can get rid of the unit root distribution component by analyzing the full system rather than single equations.
like the one we have analyzed here.One can get rid of the constantby includingthe lags.Phillips (1988) has suggested a nonparametric spectral regressionmethod which permits the estimation of long-runequilibriumrelationshipsin the frequencydomain.which makes inference on structuralhypotheses difficult(see Phillips(1990)for a discussionof these problems).The approachis model based in the sense that we assume that a VAR model describes the data adequately. -XY as estimatorof the cointegrating relations.a unit root distribution.and the orthogonal complement as estimator of the common trends. the likelihood methods.In view of this summarylet us now consider some of the methods that have been proposedbefore. A numberof other methods have been proposedfor estimatingcointegration relations:Stock and Watson (1988) have suggestedthe smallest principalcomponents of E. This gives consistentestimatorsas shown by Stock (1987).11).11) is solved.and a constant. The method has the advantagethat. have a better performance. We first considerthe estimationof /.Very brieflyone can say that the simple regressionestimatorhas a limiting distributionthat is composed of a mixed Gaussian distribution. The successivetests for the rank are all based on the eigenvaluesfrom (2. Bossaerts (1988) has suggestedcanonicalvariatesbetween X.A simulationof a numberof these methods has been performedby Gonzalo (1989)..Finallythe remainingeigenvectorscorresponding p .X)(X.-.

in that the present approachis model dependent throughthe use of the VAR model and the likelihoodanalysis.see Gonzalo (1989). in this Throughout paperwe have assumedthe Gaussiandistribution order to be able to analyzethe likelihoodfunction. we present in an informalmannertheir calculationsin our notation.and major departuresfrom these assumptions would requirenew models. A systematicapproachto findingthe cointegrationrank is proposedby Stock and Watson (1988). Thus the final matrixthat they analyzeis analogousto II SOkIL(IL S kkt)-8 . that is (JBB'du)-1fB(dB)' for a (p . however. An investigationof eigenvaluesin this matrixis analoof gous to an investigation eigenvaluesof 1 = SokSk. by fittingan autoregressive process. It is the advantageof the model based inferencepresentedhere that one can checkwhetherthe model fits the data. The principlecomponents are almost the same as the smallest eigenvectorsin Skk. is similar to the autoregressions performedin this paper. and one can give a precise formulation of the economic hypotheses to be tested. Thus many of the calculations are similarto those based on the likelihoodmethodsfor the errorcorrectionmodel. and the subsequent fitting of an autoregressivemodel to Af31X. Next consider the problem of determining the cointegration rank r. and then performsone step in a Newton-Raphsonalgorithmin order to approachthe maximumof the likelihoodfunction. Park(1988) and Phillipsand Hansen (1990)suggest a regression estimator where the regressors are corrected using a spectral estimate of the long-runvariance matrix. It would. which starts with the originalregressionestimator. 
The assumptionof a Gaussian distributionis not so serious.but simulationsindicate that for moderate departures (which would not be detected in the initial statistical analysis)the inference does not seem to change too much.be interesting to see how robust the methods derived are to minor departures from the assumptions.The estimated coefficientmatrixis then investigatedfor unit roots. then filtering the commontrends ilS AX. includingthe VAR.with the purposeof developingnew methods that are presumablyoptimal for this distribution.Finally Engle and Yoo (1989) have suggested a three stage estimatorfor the error correctionmodel. They determine the rank of the cointegratingspace by first estimating /3 by a given number of principal components.r)-dimensionalBrownianmotion B.This estimatoris asymptotically equivalentto the maximumlikelihoodestimatorin the VAR context.calculates the remainingparametersby OLS. The procedure is repeated until the correct relationsis found.1566 S0REN JOHANSEN models. The limitingdistribution of the matrixis of the form encounteredhere.These methods clearly depend on the VAR model assumptions. The choice of lag length is more important. but the setup is different. but the methods are clearly model . numberof cointegrating In order to facilitate the comparisonbetween their method and the method derived from the likelihood function in the VAR model. and finallyregressing the residualson the summed residuals. since it is not difficult can to see that the asymptoticanalysisgives the same results. as long as the by process EY2Ei be approximated a Brownianmotion.
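All of the inference described above rests on solving the eigenvalue problem (2.11). A minimal numerical sketch (not the paper's code; the matrices `S00`, `S0k`, `Skk` stand for the residual product moment matrices of Section 2, supplied by the user) reduces the generalized problem |λSkk − Sk0S00⁻¹S0k| = 0 to a standard symmetric one via a Cholesky factor of Skk:

```python
import numpy as np

def johansen_eig(S00, S0k, Skk):
    """Solve |lam*Skk - Sk0*S00^{-1}*S0k| = 0.

    Returns the eigenvalues in decreasing order and the eigenvectors V
    normalized so that V' Skk V = I (the normalization used in the text)."""
    Sk0 = S0k.T
    M = Sk0 @ np.linalg.solve(S00, S0k)        # Sk0 S00^{-1} S0k
    L = np.linalg.cholesky(Skk)                 # Skk = L L'
    Linv = np.linalg.inv(L)
    lam, W = np.linalg.eigh(Linv @ M @ Linv.T)  # symmetric standard problem
    order = np.argsort(lam)[::-1]
    V = Linv.T @ W[:, order]                    # back-transform: V' Skk V = I
    return lam[order], V

def trace_stats(lam, T):
    """Likelihood ratio ("trace") statistics -T * sum_{i>r} ln(1 - lam_i),
    one value for each candidate rank r = 0, 1, ..."""
    logs = np.log(1.0 - lam)
    return np.array([-T * logs[r:].sum() for r in range(len(lam))])
```

When the S matrices are computed from a common sample, the eigenvalues are squared sample canonical correlations and therefore lie in [0, 1], so the logarithms in the trace statistics are well defined.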

If major departures from the model assumptions underlying the present analysis are relevant, a new model should be formulated and analyzed. It is important to note that for VAR models that allow integration of higher order, the likelihood analysis is more complicated. It turns out, however, that the present methods can be applied to processes that are I(2) with only minor modifications; see Johansen (1991).

Institute of Mathematical Statistics, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark

Manuscript received January, 1989; final revision received March, 1991.

APPENDIX A: SOME TECHNICAL RESULTS

Since we have proved in Theorem 4.1 that under certain conditions ΔXt and β′Xt are stationary, it follows that the stochastic components of Z1t = (ΔX′t−1, …, ΔX′t−k+1)′ and β′Xt−k can be considered stationary; hence, in the model H2,

ΔXt = ΓZ1t + ΠZkt + εt.

Let Σkk = Var(Xt−k | Z1t). Since the process Xt−k is nonstationary, this variance clearly depends on t, but since β′Xt−k is stationary, β′Σkkβ does not depend on t. We shall indicate this by leaving out the dependence on t and defining

Σkk = Var(Xt−k | Z1t),  Σ0k = Σ′k0 = Cov(ΔXt, Xt−k | Z1t),  Σ00 = Var(ΔXt | Z1t).

The first result concerns the relations between these variance–covariance matrices and the parameters α and β.

LEMMA A.1: The following relations hold:

(A.1) Σ00 = αβ′Σk0 + Λ,
(A.2) Σ0kβ = α(β′Σkkβ),
(A.3) Σ00 = α(β′Σkkβ)α′ + Λ,
(A.4) Σ00⁻¹ − Σ00⁻¹α(α′Σ00⁻¹α)⁻¹α′Σ00⁻¹ = α⊥(α′⊥Λα⊥)⁻¹α′⊥,
(A.5) (α′Σ00⁻¹α)⁻¹α′Σ00⁻¹ = (α′Λ⁻¹α)⁻¹α′Λ⁻¹,
(A.6) β′Σkkβ − β′Σk0Σ00⁻¹Σ0kβ = {(β′Σkkβ)⁻¹ + α′Λ⁻¹α}⁻¹.

PROOF: From the equation ΔXt = ΓZ1t + αβ′Xt−k + εt one finds immediately the results (A.1) and (A.2), and hence (A.3). To prove (A.4), multiply first by α from the right; then both sides reduce to zero. Then multiply by Σ00α⊥ from the right; by (A.3), Σ00α⊥ = Λα⊥, and both sides become α⊥. Since the p × p matrix (α, Σ00α⊥) has full rank, the relation (A.4) has been proved. The relation (A.5) is proved in the same way, multiplying by α and by Λα⊥ = Σ00α⊥ (see (A.3)); multiplying by α both sides become the identity, and multiplying by Λα⊥ both sides reduce to zero. Finally (A.6) is proved by inserting α = Σ0kβ(β′Σkkβ)⁻¹ from (A.2), such that the left hand side becomes β′Σkkβ − (β′Σkkβ)α′Σ00⁻¹α(β′Σkkβ), which by (A.3) and the matrix inversion identity equals {(β′Σkkβ)⁻¹ + α′Λ⁻¹α}⁻¹. This completes the proof of Lemma A.1.

The asymptotic properties of the nonstationary process Xt are described by a Brownian motion W in p dimensions on the unit interval. This Brownian motion is the limit of the random walk Σ εi which appears in the representation (4.11), and can be found by rescaling the time axis and the variables as follows:

T^(−1/2) Σ_{i=0}^{[Tu]} εi ⇒ W(u)  (u ∈ [0, 1]).

We do not give the asymptotic results in detail here, since the proofs are similar to those in Johansen (1988b), which are based on the results in Phillips and Durlauf (1986), and are simple consequences of the representation (4.11), but we summarize the results in two lemmas. Using these results one can describe the asymptotic properties of the product moment matrices Sij defined in Section 2, which are basic for the properties of the estimators and tests.

From the representation (4.11) it follows that Xt is composed of a random walk, a linear trend, and a stationary process. The asymptotic properties of the process therefore depend on which linear combination of the process we consider. Note that the limiting behavior of the nonstationary part of the process is completely described by the matrix C (see (4.11)), the direction τ = Cμ, and the variance matrix of the errors Λ. If we consider τ′Xt, it is clear that the process is dominated by the linear trend, whereas if we take vectors γ which are orthogonal to τ and linearly independent of β, then the dominating term is the random walk. Finally, if we take the linear combinations β′Xt, then both the trend and the random walk are multiplied by β′C = 0, and the process becomes stationary. Thus let γ (p × (p − r − 1)) be chosen orthogonal to τ and β, such that (β, γ, τ) span R^p. The properties of the process are then summarized in the following lemma.

LEMMA A.2: Let T → ∞ and u ∈ [0, 1]; then

(A.7) T^(−1/2)γ′X_[Tu] = T^(−1/2)γ′C Σ_{i=0}^{[Tu]} εi + op(1) ⇒ γ′CW(u),

(A.8) T^(−1)τ′X_[Tu] = T^(−1)τ′C Σ_{i=0}^{[Tu]} εi + τ′τ[Tu]T^(−1) + op(1) ⇒ τ′τu.

We also need the product moment matrices where only the intercept is corrected for:

Mkk = T^(−1) Σ_{t=1}^T (Xt−k − X̄−k)(Xt−k − X̄−k)′,  Mk0 = T^(−1) Σ_{t=1}^T (Xt−k − X̄−k)(εt − ε̄)′.
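The two normalizations in Lemma A.2 can be illustrated by simulation. The sketch below is purely illustrative and simplifies the representation: it takes C = I, omits the stationary component, and picks an arbitrary trend direction τ (all assumptions of the example), so that the trend direction scales like T and the orthogonal direction like T^(1/2).

```python
import numpy as np

def simulate_directions(T, tau, seed=0):
    """Simulate X_t = sum_i eps_i + tau * t (C = I for simplicity) and
    return the two normalized combinations of Lemma A.2 at u = 1:
    T^{-1/2} gamma'X_T (random-walk scale) and T^{-1} tau'X_T (trend scale).
    gamma is chosen orthogonal to tau (p = 2 here)."""
    rng = np.random.default_rng(seed)
    p = len(tau)
    eps = rng.standard_normal((T, p))
    walk = np.cumsum(eps, axis=0)
    t = np.arange(1, T + 1)[:, None]
    X = walk + t * tau                    # random walk plus linear trend
    gamma = np.array([tau[1], -tau[0]])   # orthogonal direction
    return (X[-1] @ gamma) / np.sqrt(T), (X[-1] @ tau) / T
```

For large T the trend-direction statistic settles near τ′τ (the limit τ′τu at u = 1), while the orthogonal direction stays random but of order one, matching (A.7) and (A.8).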

It is seen that the constant term plays an important role for the formulation of the limiting results, either because it implies a linear trend for the nonstationary part of the process, or because it enters the cointegration vector. The two cases require a different normalization. The seasonal dummies do not play an equally important role once they have been orthogonalized to the constant term. The crucial property, which is applied to see this, is that the partial sums of the Dt remain bounded, such that quantities like T^(−1/2) Σ DtΔX′t and T^(−1) Σ DtX′t−k remain bounded in probability as T → ∞.

LEMMA A.3: For τ = Cμ ≠ 0 define BT = (γ, T^(−1/2)τ), and define G′ = (G′1, G′2), where G1(t) = γ′C(W(t) − ∫W(u)du) and G2(t) = t − 1/2. Then

(A.9) T^(−1)B′T Skk BT ⇒ ∫GG′du,

(A.10) B′T(Sk0 − Skkβα′) ⇒ ∫G(dW)′.

The same results hold if Skk is replaced by Mkk and Sk0 is replaced by Mk0. If τ = 0, delete the terms involving τ, choose γ (p × (p − r)) such that (β, γ) span R^p, and then (A.9) and (A.10) hold with G replaced by G1.

Under the hypothesis H2*, which also restricts the process to have no trend component, we get different asymptotic results. By moving the constant term to the vector Xt−k, estimates are calculated using the matrices S*ij (see Section 2); we no longer correct for the mean, and the added 1 gives an extra dimension to the matrix S*kk. Choose δ′ = (0, …, 0, 1), γ*′ = (γ′, 0), and β*′ = (β′, β0); then (β*, γ*, δ) are r + (p − r) + 1 = p + 1 linearly independent vectors spanning R^(p+1). We then have the following lemma.

LEMMA A.4: Let B*T = (γ*, δ), and define G*′ = (G*1′, 1), where G*1(t) = γ′CW(t); then

(A.11) T^(−1)B*T′S*kk B*T ⇒ ∫G*G*′du,

(A.12) B*T′(S*k0 − S*kkβ*α′) ⇒ ∫G*(dW)′.

APPENDIX B: PROOF OF THE RESULTS IN SECTION 2 AND SECTION 3

Proof of Theorem 2.1: The likelihood ratio test statistic of H2 in H1 is given in the form

(B.1) −2 ln(Q; H2 | H1) = −T Σ_{i=r+1}^p ln(1 − λ̂i),

where the eigenvalues λ̂r+1, …, λ̂p are the smallest solutions to the equation

(B.2) |λSkk − Sk0S00⁻¹S0k| = 0.

Let S(λ) = λSkk − Sk0S00⁻¹S0k. Estimates are calculated using the matrices Sij (see Section 2). We apply Lemma A.3 to investigate the asymptotic properties of S(λ), and apply the fact that the ordered solutions of (B.2) are continuous in the coefficient matrices.

As in Lemma A.3 we let γ be orthogonal to β and τ, let β̄ = β(β′β)⁻¹, BT = (γ, T^(−1/2)τ), and AT = (β̄, BT). For fixed λ we find, applying Lemma A.3, that |A′T S(λ)AT| converges to

(B.3) |λβ′Σkkβ − β′Σk0Σ00⁻¹Σ0kβ| · |λ∫GG′du| = 0,

which has r positive roots and (p − r) zero roots. This shows that the r largest solutions of (B.2) converge to the roots of (B.3), and that the rest converge to zero. Next consider the decomposition

|A′T S(λ)AT| = |β̄′S(λ)β̄| · |B′T{S(λ) − S(λ)β̄[β̄′S(λ)β̄]⁻¹β̄′S(λ)}BT|,

and let T → ∞ and λ → 0 such that ρ = Tλ is fixed. From Lemma A.3 it follows that

β̄′S(λ)β̄ = ρT⁻¹β̄′Skkβ̄ − β̄′Sk0S00⁻¹S0kβ̄ = −β̄′Σk0Σ00⁻¹Σ0kβ̄ + op(1),

which shows that in the limit the first factor has no roots. In order to investigate the next factor we note the following consequences of Lemma A.3:

B′T S(λ)β̄ = ρT⁻¹B′TSkkβ̄ − B′TSk0S00⁻¹S0kβ̄ = −B′TSk0Σ00⁻¹Σ0kβ̄ + op(1),

B′T{S(λ) − S(λ)β̄[β̄′S(λ)β̄]⁻¹β̄′S(λ)}BT = ρT⁻¹B′TSkkBT − B′TSk0NS0kBT + op(1),

where N is a notation for the matrix Σ00⁻¹ − Σ00⁻¹Σ0kβ̄[β̄′Σk0Σ00⁻¹Σ0kβ̄]⁻¹β̄′Σk0Σ00⁻¹. By Lemma A.1 this matrix equals α⊥(α′⊥Λα⊥)⁻¹α′⊥, which shows that the limit distribution of B′T(Sk0 − Skkβα′)α⊥ can be found from Lemma A.3. Hence the p − r smallest solutions of (B.2) normalized by T converge to those of the equation

(B.4) |ρ∫GG′du − ∫G(dW)α⊥(α′⊥Λα⊥)⁻¹α′⊥∫(dW)G′| = 0.

In order to simplify this expression, introduce the (p − r)-dimensional Brownian motion U = (α′⊥Λα⊥)^(−1/2)α′⊥W, which has variance matrix I, and the (p − r + 1)-dimensional process F̃(t) = (U(t)′ − ∫U(u)′du, t − 1/2)′. The process G enters the integrals in the form G = LF̃, where the (p − r) × (p − r + 1) matrix L has the form

L = ( L11  0 ; 0  1 ),

and the rows of L11 are p − r − 1 linearly independent combinations of the components of U. We can then write (B.4) as

(B.5) |L{ρ∫F̃F̃′du − ∫F̃(dU)′∫(dU)F̃′}L′| = 0.

By multiplying by (L11L′11)^(−1/2) and its transpose, and by transforming the process U by an orthonormal matrix O into the process B = OU, which is also a Brownian motion with variance matrix I, we find that the p − r smallest solutions of (B.2), normalized by T, converge in distribution to the roots of

(B.6) |ρ∫FF′du − ∫F(dB)′∫(dB)F′| = 0,

where F is given by (2.16) and (2.17). This equation has p − r roots, i.e. in the limit the p − r smallest roots of (B.2) decrease to zero at the rate T, and normalized by T they converge in distribution to the roots of (B.6). From the expression for the likelihood ratio test statistic we find that

−2 ln(Q; H2 | H1) = T Σ_{i=r+1}^p λ̂i + op(1) ⇒ Σ_{i=r+1}^p ρi = tr{∫(dB)F′[∫FF′du]⁻¹∫F(dB)′},

since the sum of the roots of (B.6) is the trace of [∫FF′du]⁻¹∫F(dB)′∫(dB)F′. The result for the maximal eigenvalue follows similarly. Note that if r = 0 and τ = 0, then the linear trend is missing, and the results have to be modified by leaving out the terms containing τ; by Lemma A.3 we can then choose γ = β⊥, and the results hold with G replaced by G1. This completes the proof of Theorem 2.1.

Proof of Theorem 2.2: The estimation under H2* involved the solution of the equation

(B.8) |λS*kk − S*k0S*00⁻¹S*0k| = 0.

Let A*T = (β̄*, B*T), where B*T is given in Lemma A.4, and multiply the matrix in (B.8) by A*T and its transpose. For fixed λ we obtain, by an argument similar to that given in the proof of Theorem 2.1, that the r largest solutions of (B.8) converge to the roots of the same limiting equation as before (see (B.3)), and that the rest decrease to zero at the rate T. Now multiply instead by B*T and its transpose (see Lemma A.4), and let ρ = Tλ be fixed with λ → 0. Then the p − r + 1 smallest roots of (B.8) normalized by T converge in distribution to the roots of the equation

|ρ∫G*G*′du − ∫G*(dW)α⊥(α′⊥Λα⊥)⁻¹α′⊥∫(dW)G*′| = 0.

Again we can introduce the (p − r)-dimensional Brownian motion U = (α′⊥Λα⊥)^(−1/2)α′⊥W and cancel the corresponding matrix factors, now with the (p − r + 1)-dimensional process F*(t) = (U(t)′, 1)′: under H2* we no longer correct for the mean, and the added 1 gives the extra dimension. As in the proof of Theorem 2.1 the test statistic then has a limit distribution which is given by

(B.9) T2* = tr{∫(dU)F*′[∫F*F*′du]⁻¹∫F*(dU)′}.

This completes the proof of Theorem 2.2.

Proof of Theorem 2.3: The likelihood ratio test statistic of H2* in H2 is the difference of the two test statistics considered in Theorem 2.1 and Theorem 2.2, so that the test of H2 in H1 is distributed as

(B.7) T2 = tr{∫(dU)F′[∫FF′du]⁻¹∫F(dU)′},

and the test of H2* in H1 as (B.9). Furthermore the test statistics have the same variables entering the asymptotic expansions, and hence the distribution can be found by subtracting the random variables T2* and T2. From the relation

∫(dU)F*′[∫F*F*′du]⁻¹∫F*(dU)′ = U(1)U(1)′ + ∫(dU)(U − Ū)′[∫(U − Ū)(U − Ū)′du]⁻¹∫(U − Ū)(dU)′,

where Ū = ∫U(u)du, it follows that T2* = U(1)′U(1) + T2 (see (B.7) and (B.9)), so that the test of H2* in H2 is asymptotically distributed as U(1)′U(1), which is χ²(p − r), since U(1) is Gaussian with mean zero and variance matrix I. This completes the proof of Theorem 2.3.
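The nonstandard limit distributions above are routinely approximated by Monte Carlo. The sketch below simulates the functional tr{∫(dB)F′[∫FF′du]⁻¹∫F(dB)′} for the simplest choice F = B (the case without deterministic terms, an assumption of this illustration), discretizing the (p − r)-dimensional Brownian motion as a scaled random walk; quantiles of the output approximate tabulated critical values.

```python
import numpy as np

def sim_trace_limit(p_minus_r, n=1000, reps=2000, seed=0):
    """Monte Carlo draws of tr{ (int dB F')(int FF'du)^{-1}(int F dB') }
    with F = B, using an n-step discretization of Brownian motion."""
    rng = np.random.default_rng(seed)
    out = np.empty(reps)
    m = p_minus_r
    for k in range(reps):
        dB = rng.standard_normal((n, m)) / np.sqrt(n)  # increments of B
        B = np.cumsum(dB, axis=0)
        F = np.vstack([np.zeros((1, m)), B[:-1]])       # F just before each dB
        FdB = F.T @ dB                                   # int F (dB)'
        FF = F.T @ F / n                                 # int F F' du
        out[k] = np.trace(FdB.T @ np.linalg.solve(FF, FdB))
    return out
```

For p − r = 1 the statistic reduces to (∫B dB)²/∫B²du, and the simulated 95% quantile lands near the published critical value of about 4 for this case; other deterministic specifications are handled by building the corresponding F from B.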

Proof of Theorem 3.1: We multiply (2.3) by A′ and B′ respectively and insert α = Aψ and β = Hφ, and obtain

(B.10) A′Z0t = A′ΓZ1t + A′Aψφ′H′Zkt + A′εt,

(B.11) B′Z0t = B′ΓZ1t + B′εt,

since B′Aψ = 0. These equations are analyzed by considering the contribution to the likelihood function from (B.10) given (B.11), and then the contribution from (B.11). The contribution from (B.11) is

L2^(−2/T) = |Λbb| exp{T⁻¹ Σ_{t=1}^T R′bt Λbb⁻¹ Rbt}/|B′B|,

where Rbt = B′R0t. Maximizing, we find Λ̂bb = Sbb = B′S00B, so that the relevant part of the maximized likelihood function is

L2,max^(−2/T) = |Sbb|/|B′B|.

The contribution from (B.10) given (B.11) is

L1^(−2/T) = |Λaa.b| exp{T⁻¹ Σ_{t=1}^T R̃′t Λaa.b⁻¹ R̃t}/|A′A|,

where R̃t = Rat − ΛabΛbb⁻¹Rbt − A′Aψφ′Rht, with Rat = A′R0t and Rht = H′Rkt, and where Λaa.b = Λaa − ΛabΛbb⁻¹Λba. Maximizing over the conditioning coefficient amounts to yet another regression, of Rat and Rht on Rbt, which gives the new residuals Ra.bt = Rat − SabSbb⁻¹Rbt and Rh.bt = Rht − ShbSbb⁻¹Rbt. Minimizing with respect to the remaining parameters (ψ, φ, Λaa.b) is then a reduced rank regression of Ra.bt on Rh.bt; after the initial maximization the likelihood function is reduced to the same form as in Section 2, and hence the solution can be found by solving the corresponding eigenvalue problem, while the estimate Λ̂abΛ̂bb⁻¹ gives rise to the remaining parameter estimates.

What remains is to calculate the degrees of freedom. The matrix αβ′ = Aψφ′H′ is identified, as is the matrix ψφ′. Now normalize φ to be of the form φ′ = (I, φ′0), with φ0 of dimension r × (s − r). Then there are rm + r(s − r) free parameters under the assumptions of H3. For m = s = p we get the result for H2, and the difference is the degrees of freedom for the test, which completes the proof of Theorem 3.1.

Proof of Theorem 3.2: The limit result follows from Theorem C.1, proved in Appendix C, together with the relations between the parameters given in Section 2.

APPENDIX C: ASYMPTOTIC INFERENCE

There is a qualitative difference between inference for β and inference for the other parameters. It was proved by Stock (1987) that the regression estimate for β is superconsistent, as later exploited by Phillips (1990). This has consequences for the usual proof of asymptotic normality.
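The concentration argument in the proof mirrors how the estimator is computed in practice: partial the short-run dynamics out of ΔXt and Xt−k by OLS, and carry the residuals R0t and Rkt into the eigenvalue step. A minimal sketch on simulated data (the bivariate system, single lagged difference, and constant are illustrative assumptions, not the paper's empirical setup):

```python
import numpy as np

def concentrate(dX, Xlag, Z1):
    """Partial out the regressors Z1 by OLS, returning the residuals
    R0 (from the differences) and Rk (from the lagged levels) that feed
    the eigenvalue calculation.  Rows are observations."""
    G0, *_ = np.linalg.lstsq(Z1, dX, rcond=None)
    Gk, *_ = np.linalg.lstsq(Z1, Xlag, rcond=None)
    return dX - Z1 @ G0, Xlag - Z1 @ Gk

# usage on simulated bivariate random-walk data, one lagged difference
rng = np.random.default_rng(1)
T = 300
X = np.cumsum(rng.standard_normal((T, 2)), axis=0)
dX = np.diff(X, axis=0)
Z1 = np.column_stack([dX[:-1], np.ones(T - 2)])  # lagged difference + constant
R0, Rk = concentrate(dX[1:], X[1:-1], Z1)
S00 = R0.T @ R0 / len(R0)                        # residual product moment
```

Because the residuals are exactly orthogonal to Z1, the subsequent reduced rank step depends on the data only through the residual product moment matrices, which is what makes the two-stage computation equivalent to joint maximization.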

The idea can be illustrated as follows. Let θ = (θ1, θ2) denote all the parameters, and let q denote the log likelihood function normalized by T, with first derivatives (q1, q2) and second derivatives (q11, q12, q21, q22) with respect to θ1 and θ2. The usual asymptotic representation of the maximum likelihood estimator is

(θ̂1 − θ1, θ̂2 − θ2)′ = −[ q11 q12 ; q21 q22 ]⁻¹ (q1, q2)′ + smaller order terms.

If one can now prove that q11, q12, and q21 are Op(1), whereas q22 is Op(T^v) for some v > 0, then the equation can be normalized as

(C.1) [ T^(1/2)(θ̂1 − θ1) ; T^((v+1)/2)(θ̂2 − θ2) ] = −[ q11, T^(−v/2)q12 ; T^(−v/2)q21, T^(−v)q22 ]⁻¹ [ T^(1/2)q1 ; T^(−(v−1)/2)q2 ],

and since the off-diagonal blocks tend to zero, (C.1) splits into

T^(1/2)(θ̂1 − θ1) = −q11⁻¹T^(1/2)q1 + op(1),  T^((v+1)/2)(θ̂2 − θ2) = −[T^(−v)q22]⁻¹T^(−(v−1)/2)q2 + op(1).

These equations are the ones we would get when conducting inference about θ1 for fixed θ2, and about θ2 for fixed θ1. The same expansion will show that the likelihood ratio test statistic of a simple hypothesis about θ is

−2 ln Q = −T(θ̂1 − θ1)′q11(θ̂1 − θ1) − T^(v+1)(θ̂2 − θ2)′[T^(−v)q22](θ̂2 − θ2) + op(1).

This shows that the test statistic decomposes into a test for θ1 and an independent test for θ2. The above argument indicates that inference about θ2 can be conducted as if θ1 were known, and vice versa; it will not be treated in detail here.

The asymptotic properties of estimators and test statistics are discussed here for a general smooth hypothesis on the cointegrating relations:

β = β(θ),  θ ∈ Θ ⊂ R^k.

See Dunsmuir and Hannan (1976) for a general treatment of smooth hypotheses for stationary processes. Let Dβ(u) denote the derivative of β(θ) with respect to θ in the direction u ∈ R^k, and let Dβ denote the pr × k matrix with element ((i, j), s) equal to ∂βij(θ)/∂θs (i = 1, …, p, j = 1, …, r, s = 1, …, k), so that vec(Dβ(u)) = Dβu. We assume throughout that Dβ(u) ≠ 0 for all u ≠ 0, i.e., that Dβ has full rank; this last condition can always be achieved by a suitable normalization of β(θ) around the value θ0 of the parameter for which the results are derived. Since the likelihood function can be concentrated with respect to Λ, we deduce that inference about (Γ1, …, Γk−1, α) can be conducted for fixed β, leaving the remaining parameters unrestricted; for these parameters one can apply the well known results for asymptotic inference for the stationary processes ΔXt and β′Xt. The asymptotic distribution of θ̂ is somewhat more complicated.

The natural coordinate system for these calculations is given by the directions (β, γ, τ) in R^p. The behavior of the estimate of θ depends on which linear combination u′(θ̂ − θ) is considered. Thus choose u1, …, uk orthogonal in R^k such that

(C.2) τ′Dβ(ui) ≠ 0  (i = 1, …, k1),

(C.3) τ′Dβ(ui) = 0  (i = k1 + 1, …, k),

and define NT as the k × k matrix with ith column given by T^(3/2)ui(u′iui)⁻¹ if i = 1, …, k1, and by Tui(u′iui)⁻¹ if i = k1 + 1, …, k. Furthermore we need the (p − r)r × k matrix D̃β with ith column given by

(C.4) vec{(0, τ)′Dβ(ui)}  (i = 1, …, k1),

(C.5) vec{(γ, 0)′Dβ(ui)}  (i = k1 + 1, …, k).

We also define the variable Yt = (ΔX′t−1, …, ΔX′t−k+1, X′t−kβ)′, as well as the parameters ξY = E(Yt) and

Ω = lim_{T→∞} T⁻¹ Σ_{t=1}^T E{(Yt − ξY)(Yt − ξY)′}.

The results below are derived under the assumption that the maximum likelihood estimator for θ exists and is consistent.

THEOREM C.1: Under the assumption β = β(θ) the asymptotic distribution of

(C.6) NT(θ̂ − θ)

is given by

(C.7) {D̃β′(∫GG′du ⊗ (α′Λ⁻¹α))D̃β}⁻¹ D̃β′ vec{∫G(dV)′};

that is, the limit distribution is Gaussian for fixed G with variance {D̃β′(∫GG′du ⊗ (α′Λ⁻¹α))D̃β}⁻¹, which we call the asymptotic conditional variance. Here V = (α′Λ⁻¹α)⁻¹α′Λ⁻¹W. If k1 = k the same statement holds; if τ = 0, then the results should be modified by replacing G by G1, as in Theorem 5.1, and in the model H2* only G should be replaced by G* (see Theorem 5.2). The likelihood ratio test statistic of a smooth hypothesis θ = θ(ξ), ξ ∈ R^s, s < k, is asymptotically distributed as χ² with k − s degrees of freedom. Under the same assumptions the asymptotic distribution of T^(1/2)(Γ̂1 − Γ1, …, Γ̂k−1 − Γk−1, μ̂ − μ) is Gaussian with mean zero and a variance matrix expressed in terms of Λ, ξY, and Ω.

PROOF: Divided by T, the log likelihood function is given by

q(θ, Γ, α, Λ) = −(1/2) ln|Λ| − (1/2) tr{Λ⁻¹T⁻¹ Σ_{t=1}^T εtε′t},

for

εt = εt(θ, Γ, α) = ΔXt − ΔX̄ − Σ_{i=1}^{k−1} Γi(ΔXt−i − ΔX̄−i) − αβ(θ)′(Xt−k − X̄−k).

Here the bar denotes average. The derivatives are most easily found by a Taylor expansion; thus if qθ(u) denotes the derivative of q with respect to θ in the direction u, such that the other parameters are kept fixed, we can find the derivative from the expansion

q(θ + u, Γ, α, Λ) = q(θ, Γ, α, Λ) + qθ(u) + (1/2)qθθ(u, u) + O(|u|³).

We then get first derivatives such as

qθ(u) = tr{Λ⁻¹T⁻¹ Σ εt(Xt−k − X̄−k)′Dβ(u)α′},

and second derivatives such as

qθθ(u, v) = −tr{Λ⁻¹αDβ(u)′MkkDβ(v)α′} + op(1),

together with similar expressions for the mixed second derivatives. It is not difficult to see, by the central limit theorem for stationary ergodic processes (see White (1984)), that the derivatives T^(1/2)(qΓ, qα, qΛ) are asymptotically Gaussian with mean zero and variance matrix which is also the limit of the matrix of the second derivatives with opposite sign with respect to these parameters; see also Lütkepohl and Reimers (1989). The limiting behavior of the various matrices is given by Lemma A.3, from which it follows that all terms in the above derivatives are Op(1). Thus we apply the above general argument, even though we have two different normalizations, and have shown that inference for β can be conducted for fixed values of the other parameters, and vice versa.

To find the asymptotic distribution of the estimate of θ we expand the likelihood function around the point θ. Since the columns of Dβ(u) are orthogonal to β, the identity BT = (γ, T^(−1/2)τ) gives

Dβ(u)′MkkDβ(v) = Dβ(u)′(γ, T^(−1/2)τ)B′TMkkBT(γ, T^(−1/2)τ)′Dβ(v) = M(u, v),

say. The estimating equation for θ̂, given the other parameters, is

tr{Dβ(u)′MkkDβ(θ̂ − θ)α′Λ⁻¹α} = tr{Dβ(u)′Mk0Λ⁻¹α}  for all u ∈ R^k,

which in vectorized form becomes

(C.8) [Dβ′(Mkk ⊗ α′Λ⁻¹α)Dβ](θ̂ − θ) = Dβ′ vec(Mk0Λ⁻¹α).

The matrix Mkk should be normalized by T, and multiplied by BT, to get convergence (see Lemma A.3). Now introduce the coordinate system given by the directions u1, …, uk with the normalizations of NT. Then M(wi, wj) is weakly convergent, and the right hand side of (C.8) converges weakly, direction by direction, towards the corresponding component of D̃β′ vec{∫G(dV)′}. Replacing the terms of (C.8) by these limits shows that NT(θ̂ − θ) converges weakly to (C.7), such that the first conclusion of the Theorem holds and the variance is given by the asymptotic conditional variance.

The asymptotic distribution of the remaining parameter estimates is Gaussian, but the component which goes into the cointegrating relation has a more complicated limit distribution, and the normalizing factors have to be taken care of. Let us just see how these results can be applied to find the asymptotic distribution of μ̂. The estimating equation

ΔX̄ = Σ_{i=1}^{k−1} Γ̂iΔX̄−i + α̂β̂′X̄−k + μ̂ + op(T^(−1/2)),

together with the corresponding identity for the true parameters, shows that T^(1/2)(μ̂ − μ) decomposes into two terms; the first term converges towards a Gaussian distribution with mean zero and variance matrix Λ(ξ′YΩ⁻¹ξY + 1), and the second is normalized just right.

By expanding the likelihood function around a fixed value of θ one finds that the test statistic for a simple hypothesis on θ is

−2 ln Q = T(θ̂ − θ)′Dβ′(Mkk ⊗ α′Λ⁻¹α)Dβ(θ̂ − θ) + op(1)
  ⇒ Q̃′D̃β{D̃β′(∫GG′du ⊗ α′Λ⁻¹α)D̃β}⁻¹D̃β′Q̃,

which for given G is χ² distributed with k degrees of freedom. Now if θ = θ(ξ) one finds the same result except that Dβ is replaced by (Dβ)(Dξ), i.e. the pr × k matrix Dβ is multiplied by the k × s matrix of derivatives Dξ. The likelihood ratio test statistic for the hypothesis θ = θ(ξ) is then asymptotically distributed as the difference of the two quadratic forms,

Q̃′{D̃β(D̃β′V(Q̃|G)D̃β)⁻¹D̃β′ − D̃βDξ(D′ξD̃β′V(Q̃|G)D̃βDξ)⁻¹D′ξD̃β′}Q̃,

where Q̃ = vec{∫G(dV)′} and V(Q̃|G) = ∫GG′du ⊗ (α′Λ⁻¹α)⁻¹. Hence this statistic is χ² distributed with k − s degrees of freedom, and as this distribution does not depend on G the result holds unconditionally. This completes the proof of Theorem C.1.

Proof of Theorem 5.1: The proof consists of simplifying the expressions given in Theorem C.1. Under the hypothesis β = Hφ, a point in sp(H) can be represented as β + (γH, τH)θ, where (β, γH, τH) are orthogonal and span sp(H). Thus in this case k = (s − r)r, and k1 is the number of directions ui for which τ′Dβ(ui) ≠ 0. The normalized estimator β^c(θ) = β(θ)(c′β(θ))⁻¹ has the expansion around θ = θ0:

β^c(θ) − β^c = (I − β^cc′)(γH, τH)θ + O(|θ|²),

so that Dβ(u) = (I − β^cc′)(γH, τH)u. Applying Theorem C.1 with the coordinates (C.2)–(C.5) and collecting these results we obtain

(C.9) T(β̂^c − β^c) = (I − β^cc′)(γH, τH)P⁻¹(γH, τH)′∫G(dV)′(α′Λ⁻¹α)⁻¹ + op(1),

where P is a notation for the matrix

P = ( γ′HγH∫G1G′1du   γ′HτH∫G1G′2du ; τ′HγH∫G2G′1du   τ′HτH∫G2G′2du ).

We can simplify this expression by partialling the trend functional out of G1: the γH-component reduces to an expression in

G1:2 = G1 − (∫G1G′2du)(∫G2G′2du)⁻¹G2,

which inserted into (C.9) gives the result of Theorem 5.1. From the normalization it even follows that the coordinates of β̂^c − β^c are of the stated orders in probability. This completes the proof of Theorem 5.1.

Consistent Estimates of the Asymptotic Conditional Variance

The next results are needed for the consistent estimation of the limiting conditional variance in the limiting distribution of β̂^c. The consistent estimator (5.6) for the asymptotic conditional variance is found from Lemma C.2 below. We take the coordinates (β, γH, τH) spanning sp(H) as above, and we let û = (v̂r+1, …, v̂s), the eigenvectors corresponding to the s − r smallest eigenvalues, with the normalization chosen such that

û′Shh.bû = I  and  û′Sha.bSaa.b⁻¹Sah.bû = diag(λ̂r+1, …, λ̂s) ∈ Op(T⁻¹).

Note that the normalization V̂′Shh.bV̂ = I implies that HV̂V̂′H′ = β̂β̂′ + Hûû′H′; hence one can apply either of these in the consistent estimation of the asymptotic conditional variance.

LEMMA C.2: If K′β = 0, then

TK′Hûû′H′K ⇒ K′γH{γ′H(∫G1:2G′1:2du)γH? }⁻¹γ′HK,

with G1:2 as above. If τH = 0 then this result holds with γH (p × (s − r)) chosen such that (β, γH) span sp(H), and with G1:2 replaced by G1.

PROOF: We first expand

(C.10) Hû = βê + γHĝ + τHf̂,

and then note that, from the two normalizations above, ê → 0 while (ĝ, f̂) remain bounded in probability. Since ê → 0, we have

(C.11) I = (γHĝ + τHf̂)′Skk(γHĝ + τHf̂) + op(1),

and, since K′β = 0,

(C.12) K′Hû = K′(γHĝ + τHf̂).

Now insert (C.12) into TK′Hûû′H′K and find

(C.13) TK′(γHĝ + τHf̂){(γHĝ + τHf̂)′Skk(γHĝ + τHf̂)}⁻¹(γHĝ + τHf̂)′K = TK′BT(T⁻¹B′TSkkBT)⁻¹B′TK + op(1),

for BT = (γH, T^(−1/2)τH).

The terms involving T^(−1/2)τH are of smaller order of magnitude than the terms involving γH, and hence (C.13) converges to

K′γH{γ′H(∫G1G′1du − ∫G1G′2du(∫G2G′2du)⁻¹∫G2G′1du)γH}⁻¹γ′HK.

If τH = 0 we can drop the terms involving τH, apply Lemma A.3 again, and choose γH orthogonal to β such that (β, γH) span sp(H). Q.E.D.

REFERENCES

AHN, S. K., AND G. C. REINSEL (1990): "Estimation for Partially Nonstationary Multivariate Autoregressive Models," Journal of the American Statistical Association, 85, 813–823.
ANDERSON, T. W. (1951): "Estimating Linear Restrictions on Regression Coefficients for Multivariate Normal Distributions," Annals of Mathematical Statistics, 22, 327–351.
ANDERSON, T. W. (1971): The Statistical Analysis of Time Series. New York: Wiley.
BOSSAERTS, P. (1988): "Common Nonstationary Components of Asset Prices," Journal of Economic Dynamics and Control, 12, 347–364.
BOX, G. E. P., AND G. C. TIAO (1977): "A Canonical Analysis of Multiple Time Series," Biometrika, 64, 355–365.
DUNSMUIR, W., AND E. J. HANNAN (1976): "Vector Linear Time Series Models," Advances in Applied Probability, 8, 339–364.
ENGLE, R. F., AND C. W. J. GRANGER (1987): "Co-integration and Error Correction: Representation, Estimation, and Testing," Econometrica, 55, 251–276.
ENGLE, R. F., AND B. S. YOO (1989): "Cointegrated Economic Time Series: A Survey with New Results," Discussion Paper, University of California, San Diego.
FOUNTIS, N. G., AND D. A. DICKEY (1989): "Testing for Unit Root Nonstationarity in Multivariate Autoregressive Time Series," Annals of Statistics, 17, 419–428.
GONZALO, J. (1989): "Comparison of Five Alternative Methods of Estimating Long-Run Equilibrium Relationships," Discussion Paper 89-55, University of California, San Diego.
GRANGER, C. W. J. (1981): "Some Properties of Time Series Data and Their Use in Econometric Model Specification," Journal of Econometrics, 16, 121–130.
GRANGER, C. W. J. (1983): "Cointegrated Variables and Error Correction Models," Discussion Paper 83-13a, University of California, San Diego.
GRANGER, C. W. J., AND A. A. WEISS (1983): "Time Series Analysis of Error Correcting Models," in Studies in Econometrics, Time Series and Multivariate Statistics, ed. by S. Karlin, T. Amemiya, and L. A. Goodman. New York: Academic Press, 255–278.
HOFFMAN, D., AND R. H. RASCHE (1989): "Long-run Income and Interest Elasticities of Money Demand in the United States," National Bureau of Economic Research Discussion Paper No. 2949.
JEGANATHAN, P. (1988): "Some Aspects of Asymptotic Theory with Applications to Time Series Models," Discussion Paper, The University of Michigan.
JOHANSEN, S. (1988a): "The Mathematical Structure of Error Correction Models," Contemporary Mathematics, 80, 259–386.
JOHANSEN, S. (1988b): "Statistical Analysis of Cointegration Vectors," Journal of Economic Dynamics and Control, 12, 231–254.
JOHANSEN, S. (1990): "A Representation of Vector Autoregressive Processes Integrated of Order 2," to appear in Econometric Theory.
JOHANSEN, S. (1991): "The Statistical Analysis of I(2) Variables," Discussion Paper, University of Copenhagen.
JOHANSEN, S., AND K. JUSELIUS (1990): "Maximum Likelihood Estimation and Inference on Cointegration, with Applications to the Demand for Money," Oxford Bulletin of Economics and Statistics, 52, 169–210.
JOHANSEN, S., AND K. JUSELIUS (1991): "Some Structural Hypotheses in a Multivariate Cointegration Analysis of the Purchasing Power Parity and the Uncovered Interest Parity for UK," to appear in Journal of Econometrics.
KUNST, R., AND K. NEUSSER (1990): "Cointegration in a Macro-economic System," Journal of Applied Econometrics, 5, 351–365.
LÜTKEPOHL, H., AND H.-E. REIMERS (1989): "Impulse Response Analysis of Cointegrated Systems with an Investigation of German Money Demand," Discussion Paper, Christian-Albrechts-Universität Kiel.
MOSCONI, R., AND C. GIANNINI (1992): "Non-Causality in Cointegrated Systems: Representation, Estimation and Testing," to appear in Oxford Bulletin of Economics and Statistics.
PARK, J. Y. (1988): "Canonical Cointegrating Regressions," Discussion Paper, Cornell University.
PARK, J. Y., AND P. C. B. PHILLIPS (1988): "Statistical Inference in Regressions with Integrated Processes: Part 1," Econometric Theory, 4, 468–497.
PARK, J. Y., AND P. C. B. PHILLIPS (1989): "Statistical Inference in Regressions with Integrated Processes: Part 2," Econometric Theory, 5, 95–131.
PEÑA, D., AND G. E. P. BOX (1987): "Identifying a Simplifying Structure in Time Series," Journal of the American Statistical Association, 82, 836–843.
PHILLIPS, P. C. B. (1988): "Spectral Regression for Cointegrated Time Series," Cowles Foundation Discussion Paper No. 872, Yale University.
PHILLIPS, P. C. B. (1990): "Optimal Inference in Cointegrated Systems," Econometrica, 59, 283–306.
PHILLIPS, P. C. B., AND S. N. DURLAUF (1986): "Multiple Time Series Regression with Integrated Processes," Review of Economic Studies, 53, 473–495.
PHILLIPS, P. C. B., AND B. E. HANSEN (1990): "Statistical Inference in Instrumental Variables Regression with I(1) Processes," Review of Economic Studies, 57, 99–125.
PHILLIPS, P. C. B., AND S. OULIARIS (1988): "Testing for Cointegration Using Principal Components Methods," Journal of Economic Dynamics and Control, 12, 205–230.
REINSEL, G. C., AND S. K. AHN (1990): "Vector AR Models with Unit Roots and Reduced Rank Structure: Estimation, Likelihood Ratio Test, and Forecasting," Technical Report, University of Wisconsin, Madison.
SIMS, C. A., J. H. STOCK, AND M. W. WATSON (1990): "Inference in Linear Time Series Models with Some Unit Roots," Econometrica, 58, 113–144.
STOCK, J. H. (1987): "Asymptotic Properties of Least Squares Estimators of Cointegrating Vectors," Econometrica, 55, 1035–1056.
STOCK, J. H., AND M. W. WATSON (1988): "Testing for Common Trends," Journal of the American Statistical Association, 83, 1097–1107.
TSO, M. K.-S. (1981): "Reduced-Rank Regression and Canonical Analysis," Journal of the Royal Statistical Society, Series B, 43, 183–189.
VELU, R. P., G. C. REINSEL, AND D. W. WICHERN (1986): "Reduced Rank Models for Multiple Time Series," Biometrika, 73, 105–118.
WHITE, H. (1984): Asymptotic Theory for Econometricians. New York: Academic Press.
HANSEN (1990): "Statistical Economic Studies. (1988):"Canonical Inference in Regressionswith Integrated PARK. W. K. AND S. OSTERWALD-LENUM. REINSEL (1987): "A Note on Non-stationary and Canonical Analysis of Multiple Time Series Models.. WHITE. (1992): "A Note with Fractiles of the Asymptotic Distribution of the LikelihoodCointegrationRank Test Statistics:Four Cases. 1-26. Y. PARK (1988): "Asymptotic Equivalence of OLS and GLS in Regression with Integrated Regressors. 59." Econometric Inferencein Regressions (1989):"Statistical Theory. C." Journal of PENA.. Systems:Representation." to appear in OxfordBulletinof Economics and Statistics. 183-189. W. AND G.K. LikelihoodRatio Test. 83. 473-495.-S. C. 55. 113-144. PHILLIPS. 479-487. E. 58. Cointegrating Regressions.C. H. 5.
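The inference machinery described in the paper rests on solving one eigenvalue problem: the cointegration statistics are functions of the ordered eigenvalues λ̂_1 > … > λ̂_p of S_{10}S_{00}⁻¹S_{01} with respect to S_{11}, with eigenvectors normalized so that V′S_{11}V = I. A minimal numerical sketch of that computation follows; the function names are my own, and lagged differences, seasonal dummies, and the constant term are not partialled out here, so this covers only the simplest case.

```python
import numpy as np

def johansen_eigen(dx, xlag):
    """Eigenvalues/eigenvectors of S10 S00^{-1} S01 with respect to S11,
    with the eigenvectors normalized so that V' S11 V = I."""
    T = dx.shape[0]
    S00 = dx.T @ dx / T        # product moments of the differences
    S11 = xlag.T @ xlag / T    # product moments of the lagged levels
    S01 = dx.T @ xlag / T
    A = S01.T @ np.linalg.solve(S00, S01)   # S10 S00^{-1} S01
    # Reduce the generalized problem lam*S11*v = A*v to a symmetric
    # ordinary eigenproblem via the Cholesky factor S11 = L L'.
    L = np.linalg.cholesky(S11)
    Linv = np.linalg.inv(L)
    lam, W = np.linalg.eigh(Linv @ A @ Linv.T)
    order = np.argsort(lam)[::-1]           # largest eigenvalue first
    return lam[order], Linv.T @ W[:, order]

def trace_stat(lam, r, T):
    """Likelihood ratio trace statistic of H(r): -T * sum_{i>r} log(1 - lam_i)."""
    return -T * np.sum(np.log(1.0 - lam[r:]))
```

The trace statistic is then compared with the nonstandard limiting distributions tabulated for the relevant specification of the deterministic terms, while hypotheses on the cointegrating relations themselves are tested with the χ² distribution.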
