Granger Causality
It is a statistical hypothesis test for determining whether one time series is useful in forecasting another, taking the lead-lag effect into account. Regression ordinarily reflects "mere" correlation, but Clive Granger (Nobel Prize winner) argued that a certain set of tests can be interpreted as revealing something about causality (i.e. a cause-and-effect relation). The test is run in both directions. If X Granger-causes Y, then past values of X should contain information that helps predict Y above and beyond the information contained in past values of Y alone.
Assumptions:
o The mean and variance of the data do not change over time (the data are stationary).
o The data can be adequately described by a linear model.
Limitation: if both X and Y are driven by a common third factor, the test may still suggest that X Granger-causes Y even though neither directly causes the other.


Schwarz Criterion
Also known as the Bayesian Information Criterion (BIC), it is a criterion for selecting a model from a finite set of candidates, based on the likelihood function. When fitting a model, the likelihood can always be increased by adding parameters, but doing so may result in overfitting. BIC resolves this problem by introducing a penalty term for the number of parameters in the model: BIC = k ln(n) - 2 ln(L), where k is the number of parameters, n the number of observations, and L the maximized likelihood. It was developed by Gideon E. Schwarz. The lower the BIC, the better the model.
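
A minimal sketch of the penalty at work, using the least-squares form of BIC for Gaussian errors, BIC = n ln(RSS/n) + k ln(n) (the data, seed, and polynomial degrees here are illustrative assumptions, not from the source):

```python
import numpy as np

def bic(n, k, rss):
    """BIC for a least-squares fit with Gaussian errors:
    BIC = n * ln(RSS / n) + k * ln(n)."""
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.normal(scale=0.1, size=n)  # the true relation is linear

scores = {}
for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    scores[degree] = bic(n, degree + 1, rss)  # k = degree + 1 coefficients

# The degree-5 fit has a slightly lower RSS, but the ln(n) penalty on its
# extra parameters makes its BIC higher, so the linear model is preferred.
print(min(scores, key=scores.get))
```

This illustrates the point in the text: the richer model always fits at least as well in likelihood terms, yet BIC still selects the simpler one.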


Akaike Information Criterion (AIC)
It is a measure of the relative goodness of fit of a statistical model, developed by Hirotsugu Akaike. It is defined as AIC = 2k - 2 ln(L), where k is the number of parameters and L the maximized likelihood. It provides a means for model selection, but it says nothing about how well a model fits the data in an absolute sense. Given a set of candidate models for the data, the preferred model is the one with the minimum AIC value.
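
A sketch in the same spirit as above, using the least-squares form AIC = n ln(RSS/n) + 2k for Gaussian errors (the quadratic data-generating process and candidate degrees are illustrative assumptions):

```python
import numpy as np

def aic(n, k, rss):
    """AIC for a least-squares fit with Gaussian errors:
    AIC = n * ln(RSS / n) + 2 * k."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(2)
n = 150
x = rng.uniform(0, 1, n)
y = 1.0 + 3.0 * x + 2.0 * x**2 + rng.normal(scale=0.2, size=n)  # quadratic truth

candidates = {}
for degree in (1, 2, 6):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    candidates[degree] = aic(n, degree + 1, rss)  # k = degree + 1 coefficients

# AIC only ranks the candidates relative to each other; the underfit linear
# model pays in RSS, the overfit degree-6 model pays the 2k penalty.
best = min(candidates, key=candidates.get)
print(f"model with minimum AIC: degree {best}")
```

Note that the AIC values themselves are meaningless in isolation, matching the remark above: only differences between candidate models matter.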
