Winter/Spring 2023
Overview
▪ Multivariate strong vs. weak stationarity
Lecture 4: Multivariate Linear Time Series (VARs) – Prof. Guidolin & Dr. Rotondi 4
Multivariate Weak vs. Strong Stationarity
▪ The object Γ(h) that appears in the definition is new but it collects familiar objects: given a sample {y_t}_{t=1}^T, the cross-covariance matrix can be estimated by
  Γ̂(h) = (1/T) ∑_{t=h+1}^{T} (y_t − ȳ)(y_{t−h} − ȳ)′,  h ≥ 0
o Here ȳ = (1/T) ∑_{t=1}^{T} y_t is the vector of sample means
o When h = 0 we have the sample covariance matrix
o The cross-correlation matrix is ρ̂(h) = D̂^{−1} Γ̂(h) D̂^{−1}, where D̂ is the diagonal matrix that collects the sample standard deviations on its main diagonal
[Figure: cross-sample correlogram, i.e., the off-diagonal element of ρ̂(h) for h = 0, 1, …, 24]
▪ The multivariate portmanteau (Ljung–Box) statistic is
  Q(m) = T² ∑_{h=1}^{m} (1/(T−h)) tr( Γ̂(h)′ Γ̂(0)^{−1} Γ̂(h) Γ̂(0)^{−1} )
o tr(A) is the trace of a matrix, simply the sum of the elements on its main diagonal
o Q(m) has an asymptotic (large-sample) χ²(N²m) distribution (which may be a poor approximation in small samples)
o Note that the null hypothesis corresponds to ρ(1) = ρ(2) = … = ρ(m) = 0, i.e., no serial auto- or cross-correlation up to lag m
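As a sketch of how these objects can be computed, the following snippet (using numpy, on hypothetical simulated white-noise data) estimates the cross-covariance matrices Γ̂(h) and the portmanteau statistic Q(m); all variable names and the test data are illustrative, not from the lecture:

```python
import numpy as np

def cross_cov(y, h):
    """Sample cross-covariance matrix Gamma_hat(h) of a (T x N) array."""
    T = y.shape[0]
    yc = y - y.mean(axis=0)            # demean with the vector of sample means
    return yc[h:].T @ yc[:T - h] / T   # (1/T) sum_{t=h+1}^T (y_t - ybar)(y_{t-h} - ybar)'

def portmanteau_Q(y, m):
    """Multivariate Ljung-Box statistic; asymptotically chi^2(N^2 m) under H0."""
    T, N = y.shape
    G0inv = np.linalg.inv(cross_cov(y, 0))
    Q = 0.0
    for h in range(1, m + 1):
        Gh = cross_cov(y, h)
        Q += np.trace(Gh.T @ G0inv @ Gh @ G0inv) / (T - h)
    return T**2 * Q

rng = np.random.default_rng(0)
y = rng.standard_normal((500, 2))      # bivariate white noise: H0 is true
print(portmanteau_Q(y, 10))            # should be moderate relative to chi^2(40)
```

Under the null the statistic should be an unremarkable draw from (approximately) a χ²(N²m) distribution; large values lead to rejection of the no-correlation null.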
Vector Autoregressions: Reduced-Form vs. Structural
▪ A VAR is a system regression model that treats all the N variables as endogenous and allows each of them to depend on p lagged values of itself and of all the other variables, plus a serially uncorrelated error term
▪ For instance, when N = 2, yt = [xt zt]′ or [R1,t R2,t]′, one example concerning two asset returns may be:
  R1,t = a10 + a11 R1,t−1 + a12 R2,t−1 + u1,t
  R2,t = a20 + a21 R1,t−1 + a22 R2,t−1 + u2,t
o Var[ut] is the covariance matrix of the shocks
o When it is a full matrix, it implies that contemporaneous shocks to
one variable may produce effects on others that are not captured by
the VAR structure
▪ If the variables included on the RHS of each equation in the VAR
are the same then the VAR is called unrestricted and OLS can be
used equation-by-equation to estimate the parameters
o This means that estimation is very simple
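A minimal sketch of this equation-by-equation OLS estimation for an unrestricted VAR (the coefficient values and simulated data below are hypothetical, chosen only to illustrate the mechanics):

```python
import numpy as np

def estimate_var_ols(y, p):
    """OLS estimation of an unrestricted VAR(p), equation by equation.
    Returns the intercept a0 (N,) and lag matrices A (p, N, N)."""
    T, N = y.shape
    # Regressor matrix: constant plus p lags, common to every equation
    X = np.hstack([np.ones((T - p, 1))] + [y[p - k:T - k] for k in range(1, p + 1)])
    Y = y[p:]
    # Because every equation has the same regressors, one least-squares
    # solve estimates all N equations at once (identical to per-equation OLS)
    B = np.linalg.lstsq(X, Y, rcond=None)[0]       # ((1 + N p) x N)
    a0 = B[0]
    A = np.stack([B[1 + (k - 1) * N: 1 + k * N].T for k in range(1, p + 1)])
    return a0, A

# Simulate a bivariate VAR(1) with zero intercept and recover its parameters
rng = np.random.default_rng(1)
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
y = np.zeros((2000, 2))
for t in range(1, 2000):
    y[t] = A1 @ y[t - 1] + rng.standard_normal(2)
a0_hat, A_hat = estimate_var_ols(y, 1)
print(np.round(A_hat[0], 2))           # close to the true A1
```

With a long simulated sample the estimated lag matrix sits close to the true A1, illustrating why unrestricted VAR estimation is considered very simple.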
▪ When the VAR includes restrictions, equation-by-equation OLS is no longer efficient and one should use system estimators, which in this case often take the form of Generalized Least Squares (GLS), Seemingly Unrelated Regressions (SUR), or full Maximum Likelihood (MLE)
▪ Because the VAR(p) model, yt = a0 + A1yt−1 + A2yt−2 + ... + Apyt−p + ut, does not include contemporaneous effects, it is said to be in standard or reduced form, as opposed to a structural VAR
▪ In a structural VAR(p), the contemporaneous effects do not need to
go through the covariance matrix of the residuals, ut
▪ What is the difference? Consider the simple N = 2, p = 1 case
  xt = b10 − b12 zt + γ11 xt−1 + γ12 zt−1 + εx,t
  zt = b20 − b21 xt + γ21 xt−1 + γ22 zt−1 + εz,t
  where both xt and zt are stationary and εx,t and εz,t are uncorrelated white noise processes, also called structural errors
▪ Using matrices, this structural VAR(1) may be re-written as:
  B yt = Γ0 + Γ1 yt−1 + εt,  with  B = [1 b12; b21 1]
▪ Premultiplying by B⁻¹ yields the reduced form yt = a0 + A1yt−1 + ut, with a0 = B⁻¹Γ0, A1 = B⁻¹Γ1, and ut = B⁻¹εt
▪ What are the properties of the reduced-form errors? Recall that εx,t and εz,t were uncorrelated, white noise processes; then:
▪ The reduced-form shocks ux,t and uz,t will be correlated even though the structural shocks are not:
  ux,t = (εx,t − b12 εz,t)/(1 − b12b21),  uz,t = (εz,t − b21 εx,t)/(1 − b12b21)
  Cov(ux,t, uz,t) = −(b21σx² + b12σz²)/(1 − b12b21)² ≠ 0
o The reduced form contains 9 parameters (6 mean parameters in a0 and A1 plus 3 distinct elements of the covariance matrix of ut), while the primitive system contains 10 (b10, b20, b12, b21, the four γ coefficients, σx², and σz²)
o 9 vs. 10: unless one is willing to restrict one of the parameters, it is not possible to identify the primitive system and the structural VAR is under-identified
▪ One way to identify the model is to use the type of recursive
system proposed by Sims (1980): we speak of triangularizations
▪ In our example, it consists of imposing a restriction on the
primitive system such as, for example, b21 = 0
▪ As a result, while zt has a contemporaneous impact on xt, the
opposite is not true
Identifying Structural from Reduced-Form VARs
▪ In a sense, shocks to zt are more primitive, enjoy a higher rank, and
move the system also through a contemporaneous impact on xt
▪ The VAR(1) now acquires a triangular structure:
  xt = b10 − b12 zt + γ11 xt−1 + γ12 zt−1 + εx,t
  zt = b20 + γ21 xt−1 + γ22 zt−1 + εz,t
▪ This corresponds to imposing a Choleski decomposition on the
covariance matrix of the residuals of the VAR in its reduced form
▪ Indeed, now we can re-write the relationship between the pure shocks (from the structural VAR) and the regression residuals as
  ux,t = εx,t − b12 εz,t,  uz,t = εz,t
Recursive Choleski Identification
▪ Working out the full algebra, the three distinct elements of the residual covariance matrix become
  Var(ux,t) = σx² + b12²σz²,  Var(uz,t) = σz²,  Cov(ux,t, uz,t) = −b12σz²
  three equations that exactly identify the three remaining structural unknowns b12, σx², and σz²
▪ In fact, this method is quite general and extends well beyond this VAR(1), N = 2 example: in an N-variable VAR(p), B is an N×N matrix because there are N residuals and N structural shocks
▪ Exact identification requires (N² − N)/2 restrictions placed on the relationship btw. regression residuals and structural innovations
▪ Because a Choleski decomposition is triangular, it forces exactly (N² − N)/2 values of the B matrix to equal zero
o Because with N = 2, (2² − 2)/2 = 1, you see that b21 = 0 was sufficient in our example
▪ There are as many Choleski decompositions as there are possible orderings of the variables, a combinatorial factor of N!
o Because a Choleski identification scheme rests on a specific ordering, we are introducing a number of (potentially arbitrary) assumptions on the contemporaneous relationships among variables
o Choleski decompositions are deliberate in the restrictions they place but tend not to be based on theoretical assumptions
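A minimal numerical sketch of the Choleski identification step (the residual covariance matrix below is hypothetical): the lower-triangular factor P of Σu plays the role of the inverse contemporaneous-impact matrix, so structural shocks can be recovered from reduced-form residuals as ε_t = P⁻¹u_t.

```python
import numpy as np

# Hypothetical reduced-form residual covariance for a 2-variable VAR
Sigma_u = np.array([[1.0, 0.4],
                    [0.4, 0.8]])

# Choleski factor: Sigma_u = P P' with P lower triangular, so u_t = P eps_t
# with mutually uncorrelated unit-variance structural shocks eps_t.
# The ordering matters: the first variable's shock moves both variables
# contemporaneously, while the second variable's shock moves only itself.
P = np.linalg.cholesky(Sigma_u)
print(np.round(P, 3))

# Recover orthogonalized structural shocks from residuals: eps_t = P^{-1} u_t
rng = np.random.default_rng(2)
u = rng.multivariate_normal(np.zeros(2), Sigma_u, size=100_000)
eps = u @ np.linalg.inv(P).T
print(np.round(np.cov(eps.T), 2))   # approximately the identity matrix
```

Reordering the variables before factorizing produces a different P, which is exactly the (potentially arbitrary) ordering assumption discussed above.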
Stationarity of VAR Processes
▪ Alternative identification schemes are possible (but they are more
popular in macroeconomics than in finance)
▪ For a general VAR(p), algebra shows that
  yt = μ + ut + Ψ1ut−1 + Ψ2ut−2 + Ψ3ut−3 + …
which is the vector moving average (VMA) infinite representation
o The matrices of coefficients Ψ1, Ψ2, Ψ3, … are complex functions of the original (reduced-form) coefficients
o μ = E[yt] is the unconditional mean of the VAR process
▪ In a VAR(1), we have Ψh = A1^h, μ = (IN − A1)⁻¹a0, and the unconditional covariance matrix Γ0 solves
  vec(Γ0) = (IN² − A1 ⊗ A1)⁻¹ vec(Σu)
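These VAR(1) moment formulas can be verified numerically; the parameter values below are hypothetical, chosen only so that the process is stationary:

```python
import numpy as np

# Illustrative stationary VAR(1): y_t = a0 + A1 y_{t-1} + u_t
a0 = np.array([0.1, 0.2])
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.5]])
N = 2

# Weak stationarity: all eigenvalues of A1 strictly inside the unit circle
assert np.all(np.abs(np.linalg.eigvals(A1)) < 1)

# Unconditional mean: mu = (I_N - A1)^{-1} a0
mu = np.linalg.solve(np.eye(N) - A1, a0)

# Unconditional covariance: vec(Gamma0) = (I_{N^2} - A1 kron A1)^{-1} vec(Sigma_u)
vecG0 = np.linalg.solve(np.eye(N * N) - np.kron(A1, A1), Sigma_u.flatten())
Gamma0 = vecG0.reshape(N, N)
print(mu, Gamma0)
```

As a consistency check, the resulting Γ0 satisfies the fixed-point (Lyapunov) relation Γ0 = A1 Γ0 A1′ + Σu implied by the VMA representation.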
Conditional vs. Unconditional Moments
o While the unconditional covariance matrix is a function of both the covariance matrix of the residuals, Σu, and of the matrix A1, conditioning on past information, the covariance matrix of yt is the same as the covariance matrix of the residuals, Σu
▪ For instance, in the case of the US monthly interest rate data on 1-month and 10-year Treasuries, we have (t-statistics in […]):
  [Estimated VAR(1) coefficient matrices not legible in this extract]
o The conditional moments are obviously different
Generalizations to VAR(p) Models
▪ At some cost of algebra complexity, these findings generalize
o If is non-singular,
assuming the series is weakly stationary
o The conditional mean differs from the unconditional one, as
where K = N2p + N
Forecasting with VAR Models
o The various criteria suggest it would be prudent to estimate larger VAR models
▪ Loss functions that lead to the minimization of the mean squared forecast error (MSFE) are the most widely used
▪ The minimum time-t MSFE prediction at a forecast horizon h is the conditional expected value Et[yt+h]
Impulse Response Functions
o The formula can be used recursively to compute h-step-ahead predictions starting with h = 1:
  Et[yt+1] = a0 + A1yt + … + Apyt−p+1
  Et[yt+h] = a0 + A1Et[yt+h−1] + … + ApEt[yt+h−p],  where Et[yt+j] = yt+j for j ≤ 0
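The recursion can be sketched as follows for the VAR(1) case (parameter values hypothetical); note how, as the horizon grows, the forecasts revert to the unconditional mean of a stationary process:

```python
import numpy as np

def forecast_var1(a0, A1, y_t, h):
    """Recursive minimum-MSFE forecasts E_t[y_{t+1}], ..., E_t[y_{t+h}] for a
    VAR(1): E_t[y_{t+j}] = a0 + A1 E_t[y_{t+j-1}], starting from y_t itself."""
    preds, f = [], np.asarray(y_t, dtype=float)
    for _ in range(h):
        f = a0 + A1 @ f
        preds.append(f)
    return np.array(preds)

a0 = np.array([0.1, 0.2])
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
path = forecast_var1(a0, A1, [1.0, -0.5], 50)
# For a stationary VAR(1) the forecasts converge to mu = (I - A1)^{-1} a0
print(path[-1])
```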
o The two error processes, u1,t and u2,t, can be represented in terms of the two sequences ε1,t and ε2,t, i.e., the structural innovations:
  [u1,t; u2,t] = (1/(1 − b12b21)) [1 −b12; −b21 1] [ε1,t; ε2,t]
[Figure: estimated impulse response functions; the experiment is a tightening of short-term rates by the Fed]
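A sketch of how such impulse responses are computed, assuming the standard orthogonalized construction Θh = Ψh·P (with Ψh = A1^h in the VAR(1) case and P the Choleski factor of Σu); the parameter values are hypothetical:

```python
import numpy as np

# Orthogonalized impulse responses for an illustrative VAR(1)
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
Sigma_u = np.array([[1.0, 0.4], [0.4, 0.8]])
P = np.linalg.cholesky(Sigma_u)    # ordering-dependent Choleski factor

H = 12
Psi = np.eye(2)                    # Psi_0 = I
irf = []
for h in range(H + 1):
    irf.append(Psi @ P)            # Theta_h: rows = variables, cols = shocks
    Psi = A1 @ Psi                 # Psi_{h+1} = A1 Psi_h for a VAR(1)
irf = np.array(irf)                # shape (H+1, N, N)
print(np.round(irf[0], 3))         # impact responses equal P itself
```

Each column of irf[h] traces the response of all variables h periods after a one-standard-deviation structural shock; for a stationary VAR the responses die out as h grows.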
Variance Decompositions
▪ Understanding the properties of forecast errors from VARs is
helpful in order to assess the interrelationships among variables
▪ Using the VMA representation of the errors, the h-step-ahead forecast error is
  yt+h − Et[yt+h] = ut+h + Ψ1ut+h−1 + … + Ψh−1ut+1
[Figure: forecast error variance decompositions; Choleski ordering: 1M, 1Y, 5Y, 10Y yields]
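The decomposition itself can be sketched as follows (VAR(1) case, hypothetical parameters), assuming the standard construction in which the h-step forecast error variance of variable i is split across structural shocks using the orthogonalized VMA matrices Θj = Ψj·P:

```python
import numpy as np

A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
Sigma_u = np.array([[0.4, 0.4], [0.4, 0.8]])
Sigma_u = np.array([[1.0, 0.4], [0.4, 0.8]])   # illustrative residual covariance
P = np.linalg.cholesky(Sigma_u)

def fevd(A1, P, h):
    """Share of variable i's h-step forecast error variance attributable to
    structural shock k: sum_{j<h} Theta_j[i,k]^2 / MSFE(h)[i,i]."""
    N = A1.shape[0]
    Psi = np.eye(N)
    num = np.zeros((N, N))             # num[i, k] = sum_j Theta_j[i, k]^2
    for _ in range(h):
        Theta = Psi @ P
        num += Theta**2
        Psi = A1 @ Psi                 # Psi_{j+1} = A1 Psi_j
    return num / num.sum(axis=1, keepdims=True)   # each row sums to one

print(np.round(fevd(A1, P, 10), 3))
```

Row i of the output gives the fraction of variable i's forecast error variance explained by each (ordering-dependent) structural shock, which is exactly what the bar charts in the figures above display.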
Variance Decompositions and Granger Causality
[Figure: forecast error variance decompositions; alternative Choleski ordering: 10Y, 1M, 1Y, 5Y yields]
Granger-Sims Causality