This paper uses a methodology similar to Devpura et al. (2018) by first using historical
monthly time-series data of the indices from the selected countries. This long time
period gives the best possible chance of picking up any time variation in predictability. Second,
9 monthly predictor variables from Rapach et al. (2005), which constitute a set of popular
macroeconomic predictors from the literature, are taken into consideration. Specifically, we
include the following predictors:
• relative money market rate (difference between the money market interest rate and a 12-
month backward-looking moving average, RMM);
• relative 3-month Treasury bill rate (difference between the 3-month Treasury bill rate and
a 12-month backward-looking moving average, RTB);
• relative long-term government bond yield (difference between the long-term government
bond yield and a 12-month backward-looking moving average, RGB);
• term spread (difference between the long-term government bond yield and the 3-month
Treasury bill rate, TSP);
• inflation rate (first difference in the log-levels of the consumer price index, INF);
• industrial production growth (first difference in the log-levels of the industrial production
index, IPG);
• narrow money growth (first difference in the log-levels of the narrowly defined money
stock, NMG);
• broad money growth (first difference in the log-levels of the broadly defined money stock,
BMG);
• change in the unemployment rate (DUN).
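As a concrete illustration, the nine predictors above can be constructed from raw monthly series with a few lines of pandas. The column names used below (mm_rate, tbill3m, gov_yield, cpi, ind_prod, narrow_money, broad_money, unemployment) are hypothetical placeholders for whatever the source database provides:

```python
import numpy as np
import pandas as pd

def build_predictors(df):
    """Construct the nine monthly predictors from raw series.
    Assumed (illustrative) input columns: mm_rate, tbill3m, gov_yield,
    cpi, ind_prod, narrow_money, broad_money, unemployment."""
    p = pd.DataFrame(index=df.index)
    # Relative rates: level minus a 12-month backward-looking moving average
    p["RMM"] = df["mm_rate"] - df["mm_rate"].rolling(12).mean()
    p["RTB"] = df["tbill3m"] - df["tbill3m"].rolling(12).mean()
    p["RGB"] = df["gov_yield"] - df["gov_yield"].rolling(12).mean()
    # Term spread: long-term yield minus 3-month Treasury bill rate
    p["TSP"] = df["gov_yield"] - df["tbill3m"]
    # First differences in log-levels
    p["INF"] = np.log(df["cpi"]).diff()
    p["IPG"] = np.log(df["ind_prod"]).diff()
    p["NMG"] = np.log(df["narrow_money"]).diff()
    p["BMG"] = np.log(df["broad_money"]).diff()
    # Change in the unemployment rate
    p["DUN"] = df["unemployment"].diff()
    return p
```

The rolling mean leaves the first 11 observations of the relative-rate series undefined, which is why the estimation sample starts after the first year of raw data.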
Third, the Westerlund and Narayan (2015) time-series predictive regression model is
used to generate time-varying evidence of predictability using different estimation windows.
For robustness, the results of the Westerlund and Narayan in-sample predictive
regression are compared with those of the Lewellen (2004) model, as this model addresses
problems of bias and persistence under the ordinary least squares (OLS) method, but does
not account for heteroskedasticity. Controls for data issues that can potentially bias
interpretation of the null hypothesis of no predictability must therefore be ensured. The null
hypothesis of no predictability is tested based on Eq. (1), where the predictor variable
follows a first-order autoregressive process, Eq. (2):
rt = α + βxt-1 + εt (1)
xt = φ + ρxt-1 + μt (2)
where rt is the excess stock market return, xt is a predictor, and εt is the error term. Given
that 9 predictor variables are used, the predictability regression model is estimated 9 times
per country. This framework allows the persistence and endogeneity of the predictor
variable, as well as any autoregressive conditional heteroskedasticity (ARCH) effects, to be
modeled. Assuming that εt is statistically significantly correlated with μt, we have the
following relationship between the errors:
εt = γμt + ηt (3)
where εt and ηt are independent and identically distributed and symmetric with mean zero,
and γ = cov(ε, μ)/var(μ). Lewellen (2004) assumes ρ is close to one and replaces it with a
guess value ρ0 ≈ 1; the OLS bias-adjusted estimator then has the following form:
β̂adjLew = β̂ - γ(ρ̂ - ρ0) (4)
where β̂adjLew represents the β coefficient estimated to test the null of no predictability.
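A minimal sketch of the bias adjustment in Eq. (4), assuming the slope, residuals, and γ are all obtained by simple OLS (the function and helper names are ours, not Lewellen's):

```python
import numpy as np

def _ols_slope(y, x):
    # slope of y on x with an intercept included
    xd = x - x.mean()
    return np.dot(xd, y - y.mean()) / np.dot(xd, xd)

def lewellen_adjusted_beta(r, x, rho0=0.9999):
    """Bias-adjusted slope of Eq. (4): beta_hat - gamma * (rho_hat - rho0).
    A sketch under the Lewellen (2004) assumption that rho is close to one;
    rho0 is the user-supplied guess value."""
    r, x = np.asarray(r, float), np.asarray(x, float)
    y, xlag = r[1:], x[:-1]            # align r_t with x_{t-1}
    beta_hat = _ols_slope(y, xlag)     # OLS slope of Eq. (1)
    eps = y - y.mean() - beta_hat * (xlag - xlag.mean())
    rho_hat = _ols_slope(x[1:], xlag)  # AR(1) coefficient of Eq. (2)
    mu = x[1:] - x[1:].mean() - rho_hat * (xlag - xlag.mean())
    gamma = _ols_slope(eps, mu)        # gamma = cov(eps, mu) / var(mu)
    return beta_hat - gamma * (rho_hat - rho0)
```

When the predictor's errors are uncorrelated with the return errors (γ ≈ 0), the adjustment vanishes and the estimator collapses back to plain OLS.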
Westerlund and Narayan (2015) model the alternative hypothesis by assuming that β is
local to zero as T→∞, that is, β = b/T, where b is a constant that does not depend on T.
Similarly, assuming that most of the predictor variables are persistent, ρ = 1 + c/T, where
the parameter c ⩽ 0 measures the degree of persistence in xt.
β̂adjWN = β̂ - γ(ρ-1) (5)
denotes the β coefficient generated using the Westerlund and Narayan (2015) feasible
quasi-generalized least squares (FQGLS)-based test. The feasible version of the regression
equation can be written as follows:
rt = α + β̂adjWN xt-1 + γ(xt - ρxt-1) + ηt (6)
The FQGLS includes the information contained in the ARCH structure of the error terms,
which is not accounted for by the Lewellen estimator. The approach taken by WN to control
for ARCH is simple, and is based on the following variance equation of ηt:
var(ηt|It-1) = σ2ηt = λ0 + Σqj=1 λjη2t-j (7)
var(εt|It-1)=σ2εt=γ2σ2μt+σ2ηt (8)
t-FQGLS = ΣTt=q+2 π2t rdt xdt-1 / (ΣTt=q+2 π2t (xdt-1)2)1/2 (9)
where πt = 1/σεt is the FQGLS weight, xdt = xt - ΣTs=2 xs/T, and rdt is defined analogously. T is
the sample size and q = max{qx , qr,x}. The end result is that the FQGLS-based model tends to
outperform the adjusted OLS-based predictive regression model, as shown in simulation
results in Westerlund and Narayan (2015). The asymptotic t-FQGLS uses asymptotic critical
values in testing the null of “no predictability,” whereas the subsample t-FQGLS uses
subsampling-based critical values.
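The weighting idea behind the FQGLS test can be sketched as follows. This is a simplified illustration, not the published procedure: the ARCH(q) variance of Eq. (7) is fitted by OLS on squared residuals, and the resulting weights π²t = 1/σ̂²t enter a weighted t-statistic:

```python
import numpy as np

def fqgls_tstat(r, x, q=1):
    """Illustrative FQGLS-style t-statistic: fit an ARCH(q) variance to the
    OLS residuals and weight the demeaned data by pi_t^2 = 1/sigma_t^2.
    A simplification of Westerlund and Narayan (2015), not their exact test."""
    r, x = np.asarray(r, float), np.asarray(x, float)
    y, xlag = r[1:], x[:-1]
    # OLS residuals of r_t on a constant and x_{t-1}
    X = np.column_stack([np.ones_like(xlag), xlag])
    eps = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    # ARCH(q): regress eps_t^2 on its own lags (analogue of Eq. (7))
    e2, m = eps**2, len(eps)
    Z = np.column_stack([np.ones(m - q)] +
                        [e2[q - 1 - j:m - 1 - j] for j in range(q)])
    lam, *_ = np.linalg.lstsq(Z, e2[q:], rcond=None)
    sig2 = np.clip(Z @ lam, 1e-8, None)  # fitted conditional variances
    pi2 = 1.0 / sig2                     # squared FQGLS weights
    # Weighted t-statistic on demeaned data
    yd = y[q:] - y[q:].mean()
    xd = xlag[q:] - xlag[q:].mean()
    return np.sum(pi2 * xd * yd) / np.sqrt(np.sum(pi2 * xd * xd))
```

Under homoskedastic errors the fitted weights are roughly constant and the statistic reduces to an ordinary t-test; the gain appears when the error variance moves over time.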
3.3 Expanding window method
To assess whether the predictability is time-varying, the time-series predictability model is
estimated as specified in Eq. (1), but based on a 9-year expanding (recursive) window. This
time window is chosen as it is comparable to the window of Devpura et al. (2018) in terms of
the percentage of the total sample. The initial sample is set to the first 9 years of data. This
implies that the first estimation of the predictive regression model is over the period 1983:02–
1991:01. The model is then re-estimated using an expanding window approach by adding one
more observation (month) at a time. In other words, the next predictive regression model is
estimated over the period 1983:02–1991:02, 1983:02–1991:03, and so on. This process of
estimating Eq. (1) concludes when the last sample date (2019:12) is absorbed.
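The expanding-window estimation described above amounts to a simple loop; a sketch with an initial window of 108 months (9 years), reporting only the OLS slope of Eq. (1) for each window:

```python
import numpy as np

def expanding_window_betas(r, x, initial=108):
    """OLS slope of Eq. (1) over expanding windows: the first window is the
    initial `initial` observations (here 9 years of monthly data), then one
    month is added at a time until the full sample is absorbed."""
    r, x = np.asarray(r, float), np.asarray(x, float)
    y, xlag = r[1:], x[:-1]
    betas = []
    for end in range(initial, len(y) + 1):
        yw, xw = y[:end], xlag[:end]
        xd = xw - xw.mean()
        betas.append(np.dot(xd, yw - yw.mean()) / np.dot(xd, xd))
    return np.array(betas)
```

The last element of the returned array is, by construction, the full-sample estimate.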
They found that the coefficients of the model were stable over time and therefore interpret
the fitted value from the AR(1) model estimated over the entire sample period as the level of
expected disaster risk, and the residual from the AR(1) model as the level of unexpected
disaster risk. Therefore, the following augmented AR(1) model is proposed for each of the
predictor variables to compute expected and unexpected risks of the predictor variables:
Xt = α0 + α1Xt-1 + Σi βiDit + εt (10)
In total, these regression models are estimated 9 times (one for each predictor variable,
denoted by X in the above model) using the OLS estimator. The standard errors of the
regression are corrected for heteroskedasticity and autocorrelation by using eight lags.
These dummy variables D capture the statistically significant phases of time-varying
predictability of stock returns, and i represents the number of significant time-varying
phases for each predictor. A dummy will only be created for phases with at least 12
consecutive months showing significant predictability. These significant time-varying
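A sketch of this estimation, assuming a Newey-West (Bartlett-kernel) correction is what is meant by correcting the standard errors for heteroskedasticity and autocorrelation with eight lags (the function name and dummy layout are illustrative):

```python
import numpy as np

def ar1_with_dummies(xs, D, lags=8):
    """Eq. (10) sketch: OLS of X_t on a constant, X_{t-1}, and phase dummies,
    with Newey-West (Bartlett-kernel) HAC standard errors using `lags` lags.
    Returns (coefficients, standard errors)."""
    xs = np.asarray(xs, float)
    D = np.asarray(D, float)
    if D.ndim == 1:
        D = D[:, None]
    y = xs[1:]
    X = np.column_stack([np.ones(len(y)), xs[:-1], D[1:]])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    # HAC covariance: sum of Bartlett-weighted autocovariances of the scores
    u = X * resid[:, None]
    S = u.T @ u
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1)       # Bartlett weight
        G = u[l:].T @ u[:-l]
        S += w * (G + G.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    se = np.sqrt(np.diag(XtX_inv @ S @ XtX_inv))
    return coef, se
```

Each of the 9 predictor-specific regressions would call this once, with one dummy column per significant predictability phase.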
predictability phases are tested by Eq. (9). The time-varying t-statistics are obtained via the
expanding window approach. The initial window is set to be the first 9 years of data. This
test is chosen as according to Westerlund and Narayan (2015) the choice of which test to use
is dictated primarily by the extent of predictor persistence and endogeneity. They found
that, in their samples, a large number of financial ratios and two of the most popular
macroeconomic predictors are affected by both predictor persistence and endogeneity. This
means that existing tests, such as t-LS and t-LEW, are likely to be misleading, and that
subsample FQGLS is more reliable. They also showed that accounting for heteroskedasticity
is important for power, and that this feature is present in their samples. They found the
subsample FQGLS most suitable to test the significance of these financial ratios and
macroeconomic predictors.
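The subsampling idea can be illustrated as follows: compute the test statistic on every overlapping subsample of a chosen block length and take empirical quantiles as critical values. For simplicity the sketch below uses a plain OLS t-statistic rather than the full FQGLS statistic:

```python
import numpy as np

def _ols_t(y, x):
    # t-statistic for the slope of y on x (intercept included)
    xd = x - x.mean()
    beta = np.dot(xd, y) / np.dot(xd, xd)
    resid = y - y.mean() - beta * xd
    se = np.sqrt(np.sum(resid**2) / (len(y) - 2) / np.dot(xd, xd))
    return beta / se

def subsample_critical_values(r, x, b, alpha=0.05):
    """Empirical critical values from the t-statistics of all overlapping
    subsamples of length b (a plain-OLS stand-in for subsample FQGLS)."""
    y, xlag = np.asarray(r, float)[1:], np.asarray(x, float)[:-1]
    ts = [_ols_t(y[s:s + b], xlag[s:s + b]) for s in range(len(y) - b + 1)]
    return np.quantile(ts, [alpha / 2, 1 - alpha / 2])
```

Because the subsample distribution reflects the persistence and heteroskedasticity actually present in the data, these critical values can differ materially from the asymptotic ones.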
In the final step, the hypothesis that expected and unexpected risks determine time-
varying predictability is tested with the following OLS regression: