Table of Contents

SETTING UP STATA
SETTING UP A PANEL
HOW TO GENERATE VARIABLES
GENERATING VARIABLES
GENERATING DATES
HOW TO GENERATE DUMMIES
GENERATING GENERAL DUMMIES
GENERATING TIME DUMMIES
TIME-SERIES ANALYSES
1. ASSUMPTIONS OF THE OLS ESTIMATOR
2. CHECK THE INTERNAL AND EXTERNAL VALIDITY
A. THREATS TO INTERNAL VALIDITY
B. THREATS TO EXTERNAL VALIDITY
3. THE LINEAR REGRESSION MODEL
4. LINEAR REGRESSION WITH MULTIPLE REGRESSORS
ASSUMPTIONS OF THE OLS ESTIMATOR
5. NONLINEAR REGRESSION FUNCTIONS
A. EXAMPLES OF NONLINEAR REGRESSIONS
1) POLYNOMIAL REGRESSION MODEL OF A SINGLE INDEPENDENT VARIABLE
2) LOGARITHMS
B. INTERACTIONS BETWEEN TWO BINARY VARIABLES
C. INTERACTIONS BETWEEN A CONTINUOUS AND A BINARY VARIABLE
D. INTERACTIONS BETWEEN TWO CONTINUOUS VARIABLES
RUNNING TIME-SERIES ANALYSES IN STATA
A TIME SERIES REGRESSION
REGRESSION DIAGNOSTICS: NON-NORMALITY

STATA STEP-BY-STEP
Thierry Warin


REGRESSION DIAGNOSTICS: NON-LINEARITY
REGRESSION DIAGNOSTICS: HETEROSCEDASTICITY
REGRESSION DIAGNOSTICS: OUTLIERS
REGRESSION DIAGNOSTICS: MULTICOLLINEARITY
REGRESSION DIAGNOSTICS: NON-INDEPENDENCE
TIME-SERIES CROSS-SECTION ANALYSES (TSCS) OR PANEL DATA MODELS
A. THE FIXED EFFECTS REGRESSION MODEL
B. REGRESSION WITH TIME FIXED EFFECTS
RUNNING POOLED OLS REGRESSIONS IN STATA
THE FIXED AND RANDOM EFFECTS MODELS
CHOICE OF ESTIMATOR
TESTING PANEL MODELS
ROBUST STANDARD ERRORS
RUNNING PANEL REGRESSIONS IN STATA
IT IS ABSOLUTELY FUNDAMENTAL THAT THE ERROR TERM IS NOT CORRELATED WITH THE INDEPENDENT VARIABLES.
CHOOSING BETWEEN FIXED EFFECTS AND RANDOM EFFECTS? THE HAUSMAN TEST
IF YOU QUALIFY FOR A FIXED EFFECTS MODEL, SHOULD YOU INCLUDE TIME EFFECTS?
FIXED EFFECTS OR RANDOM EFFECTS WHEN TIME DUMMIES ARE INVOLVED: A TEST
DYNAMIC PANELS AND GMM ESTIMATIONS
HOW DOES IT WORK?
TESTS
IN NEED FOR A CAUSALITY TEST?
MAXIMUM LIKELIHOOD ESTIMATION
1. PROBIT AND LOGIT REGRESSIONS
PROBIT REGRESSION
LOGIT REGRESSION
LINEAR PROBABILITY MODEL
EXAMPLES
HEALTH CARE
APPENDIX 1


Setting up Stata
We are going to allocate 10 megabytes to the dataset. You do not want to allocate too much memory to the dataset, because the more memory you allocate to the dataset, the less memory will be available to perform the commands. You could reduce the speed of Stata or even kill it.

set mem 10m

We can also decide whether or not to have the "more" separation line on the screen when the software displays results:

set more on
set more off

As usual, you should describe and summarize the dataset before you perform estimations. Stata has specific commands for describing and summarizing panel datasets:

xtdes
xtsum

xtdes lets you observe the pattern of the data, such as the number of individuals with different patterns of observations across time periods. In our case, we have an unbalanced panel, because not all individuals have observations for all years. The xtsum command gives you general descriptive statistics of the variables in the dataset, considering the overall, the between, and the within variation. Overall refers to the whole dataset. Between refers to the variation of the means for each individual (across time periods). Within refers to the variation of the deviations from the respective mean for each individual. You may be interested in applying the panel data tabulate command to a variable, for instance to the variable south, in order to obtain a one-way table:

xttab south

As in the previous commands, Stata will report the tabulation for the overall, the within, and the between variation.

Setting up a panel
Now, we have to instruct Stata that we have a panel dataset. We do it with the command tsset, or with iis and tis:

iis idcode
tis year

or

tsset idcode year

In the previous command, idcode is the variable that identifies individuals in our dataset, and year is the variable that identifies time periods. This is always the rule. The commands referring to panel data in Stata almost always start with the prefix xt. You can check for these commands by calling the help file for xt:

help xt
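As a side note, recent versions of Stata also provide the single command xtset for the same purpose; a minimal sketch, assuming the same idcode and year variables:

* declare the panel structure in one command (equivalent to the tsset call above)
xtset idcode year
* typing xtset with no arguments redisplays the current panel settings
xtset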

How to generate variables
Generating variables
gen age2=age^2
gen ttl_exp2=ttl_exp^2
gen tenure2=tenure^2


Now, let's compute the average wage for each individual (across time periods):

bysort idcode: egen meanw=mean(ln_wage)

In this case, we did not apply the sort command first and then the by prefix command. We could have done it that way, but with this single command you can always abbreviate the implementation of the by prefix. The command egen is an extension of the gen command to generate new variables. The general rule is to apply egen when you want to generate a new variable that is created using a function inside Stata. In our case, we used the function mean. You can apply the command list to list the first 10 observations of the new variable meanw:

list meanw in 1/10

And then apply the xtsum command to summarize the new variable:

xtsum meanw

You may want to obtain the average of the logarithm of wages for each year in the panel:

bysort year: egen meanw1=mean(ln_wage)

And then you can apply the xttab command:

xttab meanw1

Suppose you want to generate a new variable called tenure1 that is equal to the variable tenure lagged one period. Then you would use a time-series operator (l). First, you would need to sort the dataset according to idcode and year, and then generate the new variable with the "by" prefix on the variable idcode:

sort idcode year
by idcode: gen tenure1=l.tenure

If you were interested in generating a new variable tenure3 equal to one difference of the variable tenure, you would use the time-series d operator:

by idcode: gen tenure3=d.tenure

If you would like to generate a new variable tenure4 equal to two lags of the variable tenure, you would type:

by idcode: gen tenure4=l2.tenure

The same principle would apply to the operator d. Let's just save our data file with the changes that we made to it:

save, replace
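Note that once the panel has been declared with tsset (as we did earlier), the lag and difference operators used above can also be applied without the by prefix, since Stata then respects the panel boundaries automatically. A minimal sketch (the new variable names are arbitrary):

* equivalent to the by-prefix commands above, relying on the tsset declaration
gen tenure1b = L.tenure     // one-period lag
gen tenure3b = D.tenure     // first difference
gen tenure4b = L2.tenure    // lag of order two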

How to generate dummies
Generating general dummies
Let's generate the dummy variable black, which is not in our dataset.

gen black=1 if race==2
replace black=0 if black==.
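A more compact way to build 0/1 dummies is to generate them directly from a logical expression; a small sketch (black2 and black3 are arbitrary names, and note how missing values of race are handled in each case):

gen black2 = (race==2)                     // 1 if race==2, 0 otherwise (missing race coded 0)
gen black3 = (race==2) if !missing(race)   // keeps black3 missing when race is missing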

Generating dates

Let's generate dates:

gen varname2 = date(varname1, "dmy")

And format:

format varname2 %d
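In current Stata versions the date mask is written in uppercase and daily dates are displayed with the %td format; a minimal sketch, using a hypothetical string variable datestr holding values such as "15jan1980":

* convert the string to a Stata daily date and give it a readable display format
gen visitdate = date(datestr, "DMY")
format visitdate %td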


Generating time dummies

In order to do this, let's first generate our time dummies. We use the "tabulate" command with the option "gen" in order to generate time dummies for each year of our dataset. We will name the time dummies "y":

tab year, g(y)

We will get a first time dummy called "y1", which takes the value 1 if year=1980 and 0 otherwise, a second time dummy "y2", which takes the value 1 if year=1982 and 0 otherwise, and similarly for the remaining years. You could give any other name to your time dummies.

Another way would be to use the xi command. It takes the items (strings of letters, for instance) of a designated variable (category, for instance) and creates a dummy variable for each item. You need to change the base anyway:

char _dta[omit] "prevalent"
xi: i.category
tabulate category

Time-series analyses

1. Assumptions of the OLS estimator

The OLS estimator chooses the regression coefficients so that the estimated regression line is as close as possible to the observed data, where closeness is measured by the sum of the squared mistakes made in predicting Y given X:

Σi=1…n (Yi − b0 − b1Xi)²   (1)

with b0 and b1 being estimators of β0 and β1.

1. The conditional distribution of ui given Xi has a mean of zero. This means that the other factors captured in the error term are unrelated to Xi; the correlation between Xi and ui should be nil: corr(Xi, ui) = 0. This is the most important assumption in practice. If this assumption does not hold, it is likely because there is an omitted variable bias. One should test for omitted variables using (Ramsey and Braithwaite, 1931)'s test.

2. The error term ui is homoskedastic if the variance of the conditional distribution of ui given Xi is constant for i = 1, ..., n and in particular does not depend on Xi. Related to the first assumption: if the variance of this conditional distribution of ui does not depend on Xi, then the errors are said to be homoskedastic. Otherwise, the error term is heteroskedastic. To test for heteroskedasticity, we use (Breusch and Pagan, 1979)'s test. Whether the errors are homoskedastic or heteroskedastic, the OLS estimator is unbiased, consistent, and asymptotically normal. If the standard errors are heteroskedastic, one should use heteroskedasticity-robust standard errors.

3. (Xi, Yi), i = 1, ..., n are Independently and Identically Distributed. This is to be sure that there is no selection bias in the sample. This second assumption holds in many cross-sectional data sets, but it is inappropriate for time series data.

4. The fourth assumption is that the fourth moments of Xi and ui are nonzero and finite: 0 < E(Xi⁴) < ∞ and 0 < E(ui⁴) < ∞.

2. Check the internal and external validity

A statistical analysis is internally valid if the statistical inferences about causal effects are valid for the population being studied. The analysis is externally valid if its inferences and conclusions can be generalized from the population and setting studied to other populations and settings. Internal and external validity distinguish between the population and setting studied and the population and setting to which the results are generalized.

A. Threats to internal validity

Internal validity has two components:
1. The estimator of the causal effect should be unbiased and consistent. Causal effects are estimated using the estimated regression function.
2. Hypothesis tests should have the desired significance level, and confidence intervals should have the desired confidence level. Hypothesis tests are performed using the estimated regression coefficients and their standard errors.

Studies based on regression analysis are internally valid if the estimated regression coefficients are unbiased and consistent, and if their standard errors yield confidence intervals with the desired confidence level. Reasons why the OLS estimator of the multiple regression coefficients might be biased are sevenfold: omitted variables, misspecification of the functional form of the regression function, imprecise measurement of the independent variables, sample selection, simultaneous causality, heteroskedasticity, and correlation of the error term across observations (sample not i.i.d.). All seven sources of bias arise because the regressor is correlated with the error term, violating the first least squares assumption.

B. Threats to external validity

External validity must be judged using specific knowledge of the populations and settings studied and those of interest. Important differences between the two will cast doubt on the external validity of the study. Sometimes there are two or more studies on different but related populations. If so, the external validity of both studies can be checked by comparing their results.

3. The linear regression model

Yi = β0 + β1Xi + ui   (2)

This can be a time series analysis or not; for instance, test scores and class sizes in 1998 in 420 California school districts.

4. Linear regression with multiple regressors

Yi = β0 + β1X1i + β2X2i + ui   (3)

Assumptions of the OLS estimator

1. The conditional distribution of ui given X1i, X2i, ..., Xki has a mean of zero. This means that the other factors captured in the error term are unrelated to X1i, X2i, ..., Xki; the correlation between X1i, X2i, ..., Xki and ui should be nil. This is the most important assumption in practice. If this assumption does not hold, it is likely because there is an omitted variable bias. One should test for omitted variables using (Ramsey and Braithwaite, 1931)'s test.

2. The error term ui is homoskedastic if the variance of the conditional distribution of ui given X1i, X2i, ..., Xki is constant for i = 1, ..., n and in particular does not depend on X1i, X2i, ..., Xki. Related to the first assumption: if the variance of this conditional distribution of ui does not depend on X1i, X2i, ..., Xki, then the errors are said to be homoskedastic. Otherwise, the error term is heteroskedastic. To test for heteroskedasticity, we use (Breusch and Pagan, 1979)'s test. Whether the errors are homoskedastic or heteroskedastic, the OLS estimator is unbiased, consistent, and asymptotically normal. If the standard errors are heteroskedastic, one should use heteroskedasticity-robust standard errors.

3. (X1i, X2i, ..., Xki, Yi), i = 1, ..., n are Independently and Identically Distributed. This is to be sure that there is no selection bias in the sample. This second assumption holds in many cross-sectional data sets, but it is inappropriate for time series data.

4. The fourth assumption is that the fourth moments of X1i, X2i, ..., Xki and ui are nonzero and finite.

5. No perfect multicollinearity. The regressors are said to be perfectly multicollinear if one of the regressors is a perfect linear function of one of the other regressors. In case of perfect multicollinearity, it is impossible to compute the OLS estimator.

5. Nonlinear regression functions

Yi = f(X1i, X2i, ..., Xki) + ui   (4)
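These checks have one-line counterparts in Stata, which the worked examples later in this guide rely on; a minimal sketch, with y, x1, and x2 as placeholder variable names:

regress y x1 x2
hettest                   // Breusch-Pagan / Cook-Weisberg test for heteroskedasticity
ovtest                    // Ramsey RESET test for omitted variables
vif                       // variance inflation factors, to detect near-multicollinearity
regress y x1 x2, robust   // heteroskedasticity-robust standard errors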

A. Examples of nonlinear regressions

1) Polynomial regression model of a single independent variable

Yi = β0 + β1Xi + β2Xi² + ... + βrXi^r + ui   (5)

2) Logarithms

Logarithms convert changes in variables into percentage changes.

1. Lin-log model:

Yi = β0 + β1 ln(Xi) + ui   (6)

A 1% change in X is associated with a change in Y of 0.01 β1.

2. Log-lin model:

ln(Yi) = β0 + β1Xi + ui   (7)

A change in X by 1 unit is associated with a 100 β1 % change in Y.

3. Log-log model:

ln(Yi) = β0 + β1 ln(Xi) + ui   (8)

A 1% change in X is associated with a β1 % change in Y, so β1 is the elasticity of Y with respect to X. In the economic analysis of consumer demand, it is often assumed that a 1% increase in price leads to a certain percentage decrease in the quantity demanded; this percentage change in demand is called the price elasticity. The regression coefficients will then measure the elasticity in a log-log model.

B. Interactions between two binary variables

Yi = β0 + β1D1i + β2D2i + β3(D1i × D2i) + ui   (9)

C. Interactions between a continuous and a binary variable

Yi = β0 + β1Xi + β2Di + β3(Xi × Di) + ui   (10)

D. Interactions between two continuous variables

Yi = β0 + β1X1i + β2X2i + β3(X1i × X2i) + ui   (11)

Running time-series analyses in Stata

A time series regression

use http://www.ats.ucla.edu/stat/stata/modules/reg/ok, clear
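Before turning to the worked example that follows, note that specifications like the ones above are estimated in Stata by creating the transformed variables first; a minimal sketch, with y, x1, x2, and d (a 0/1 dummy) as placeholder names:

gen ln_y = ln(y)
gen ln_x1 = ln(x1)
gen x1_d = x1*d           // interaction of a continuous and a binary variable
gen x1_x2 = x1*x2         // interaction of two continuous variables
regress ln_y ln_x1        // log-log model: the coefficient is an elasticity
regress y x1 d x1_d       // model (10)
regress y x1 x2 x1_x2     // model (11)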

4527284 . graph y x1 x2 x3.134 0. and checking for heteroscedasticity.014 . which is somewhat large but not extremely large. .434343 Root MSE = 9. but there is nothing seriously wrong in this scatterplot.298443 .864654 x3 | 1.00 99 148. This can be useful in checking for outliers.17 We can use the predict command (with the rstudent option) to make studentized residuals.000 .679 0. summarize rstud. These values all seem to be fine. predict rstud.2084695 .69 Model | 5936. The We can use the vif command to check for multicollinearity. Std.859 0. regress y x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.73977 Prob > F = 0. vif Variable | VIF 1/VIF ---------+---------------------x1 | 1. so no serious problems seem evident from this plot.5341981 x2 | .0626879 .9587921 32. 96) = 21.892234 ---------+---------------------Mean VIF | 1. or 1/VIF values (tolerances) lower than 0.000 29. matrix distribution of points seems fairly even and random.513 0.4040 ---------+-----------------------------Adj R-squared = 0.5130679 _cons | 31. t P>|t| [95% Conf. detail © Thierry Warin 17 © Thierry Warin 18 .3853 Total | 14695.05 may merit further investigation.3466306 .88.5518 -----------------------------------------------------------------------------y| Coef. There are a few observations that could be outliers.First.0000 Residual | 8758. checking for nonnormality.812781 x2 | 1.40831 -----------------------------------------------------------------------------We can use the rvfplot command to display the residuals by the fitted (predicted) values. .23 0. VIF values in excess of 20. and then use the summarize command to check the distribution of the residuals. The residuals do not seem to be seriously skewed (although they do have a higher than expected kurtosis).1801934 . rvfplot Let's use the regress command to run a regression predicting y from x1 x2 and x3. Interval] ---------+-------------------------------------------------------------------x1 | . Err.50512 .78069 96 91. .12 0.1230534 3.16 0.60194 33. let's look at a scatterplot of all variables. .0838481 4.6969874 x3 | . The largest studentized residual (in absolute value) is -2. rstudent . As a rule of thumb.21931 3 1978.000 . checking for non-linearity.1187692 2.2372989 R-squared = 0. .

6562946 2.52447 -2. A perfect normal distribution would be an exact diagonal line (as shown in red).253903 2. Dev. The largest residual (-2.96464 2. . pnorm rstud Below we show a boxplot of the residuals.020196 We can use the kdensity command (with the normal option) to show the distribution of the residuals (in yellow).013153 1. box © Thierry Warin 19 © Thierry Warin 20 .032194 1. . kdensity rstud. and a normal overlay (in red).096213 Skewness . graph rstud.409712 2. The actual data is plotted in yellow and is fairly close to the diagonal. 1.Studentized residuals ------------------------------------------------------------Percentiles Smallest 1% -2. this is not seriously non-normal. While not perfectly normal. 100 50% 75% 90% 95% 99% -.217658 -2.015969 . The results look pretty close to normal. .411592 Kurtosis 3. calling that to our attention.763776 Sum of Wgt.0016893 Largest Std.0789393 Mean .043349 Obs 100 25% -.0950425 2.5294 -2.019451 Variance 1.173733 10% -1.694324 -1.885066 5% -1. normal We can use the pnorm command to make a normal probability plot.88) is plotted as a residual.

100 50% 75% 90% 95% 99% Mean . detail y ------------------------------------------------------------Percentiles Smallest 1% 44.5 -28 5% -15. Dev.38949 15.5 19 Skewness .7 2864.5 -17 10% -11 -16 Obs 100 25% -5 -16 Sum of Wgt.18 Largest Std.5 7 10. Dev. 8.84 Largest Std.e.2514963 20 Kurtosis 3.edu/stat/stata/modules/reg/nonnorm.5 31 Kurtosis 3.0358513 99% 26.5 -16 Obs 100 25% -4. .5 19. clear Let's start by using the summarize command to look at the distribution of x1 x2 x3 and y looking for evidence of non-normality.5 Mean .5 -15 Sum of Wgt.17527 © Thierry Warin 21 © Thierry Warin 22 .Regression Diagnostics: Non normality use http://www.5 20 Kurtosis 2.8479 1600 3136 2162. 8. The skewness for x1 x2 and x3 all look fine.5 22 Skewness .5 4096 Skewness 1.. Dev.5 -23 10% -15.5 3364 Variance 715458.965568 18 19 Variance 80. Dev.5 -22 Obs 100 25% -7 -20 Sum of Wgt. i.68 Largest Std.5 16.12092 75% 8 22 90% 18 22 Variance 146.085636 We use the kdensity command below to show the distribution of y (in yellow) and a normal overlay (in red).ats. it has a long tail to the right.ucla.9168 95% 19. normal -1 x1 ------------------------------------------------------------Percentiles Smallest 1% -22.5 121 Obs 100 25% 576 121 Sum of Wgt.711806 0 x3 ------------------------------------------------------------Percentiles Smallest 1% -30. summarize y x1 x2 x3. 100 50% 75% 90% 95% 99% 961 Mean 1151.389845 5 17 10 17 Variance 70. The skewness for y suggests that y might be skewed. 12. kdensity y. 100 50% 75% 90% 95% 99% 1. 100 50% Mean -.45566 4226 4356 Kurtosis 5.5 25 5% 173 64 10% 306.12 Largest Std.38141 19 Skewness -.374975 x2 ------------------------------------------------------------Percentiles Smallest 1% -18 -18 5% -13 -18 10% -11. We can see that y is positively skewed.0378385 19.5 -38 5% -18. 845. .

842875 4. For example.57 Prob > F = 0. Looking at these plots show the data points seem to be more densely packed below the regression line.3995 ---------+-----------------------------Adj R-squared = 0. Interval] Below we use the avplots command to produce added variable plots.7 3 9432997. . We are looking for a nice even distribution of the residuals across the levels of the fitted value.54624 5.019 3.59041 x3 | 25.---------+-------------------------------------------------------------------x1 | 19. 96) = 21.14426 _cons | 1139. We can see that the observed values of y depart from this line.448 0. the plot in the top left shows the relationship between x1 and y. Err.0000 Residual | 42531420.81248 17. and the dependent variable after adjusting for all other predictors. let's run a regression predicting y from x1 x2 and x3 and look at some of the regression diagnostics. We see that the points are more densely packed together at the bottom part of the graph (for the negative residuals). the yellow points would be a diagonal line (right atop the red line).38622 36. another possible indicator that the residuals are not normally distributed.000 13.794 1272. . t P>|t| [95% Conf.001 12.7 96 443035.56946 8.722 Root MSE = 665. If y were normal.394 0. avplots © Thierry Warin 23 © Thierry Warin 24 . .24294 x2 | 29.000 1006. regress y x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.94823 37.3808 Total | 70830413.54852 46. pnorm y Even though y is skewed.038 -----------------------------------------------------------------------------The rvfplot command gives us a graph of the residual value by the fitted (predicted) value. indicating that there could be a problem of non-normally distributed residuals. These plots show the relationship between each predictor.372 0.416 66.61 -----------------------------------------------------------------------------y| Coef.574851 3.054 0.29 Model | 28298992. after y has been adjusted for x2 and x3. The plot in the top right shows x2 on the bottom axis. rvfplot We can make a normal probability plot of y using the pnorm command.276317 2.4 99 715458. Std.633 R-squared = 0. and the plot in the bottom left shows x3 on the bottom axis. We would expect the points to be normally distributed around these regression lines. .81458 8.

. 100 50% -. see below. pnorm rstud © Thierry Warin 25 © Thierry Warin 26 .Below we create studentized residuals using the predict command. normal A boxplot. We then use the summarize command to examine the residuals for normality. creating a variable called rstud containing the studentized residuals. We see that the residuals are positively skewed. kdensity rstud.022305 75% .44 to 3. a normal probability plot can be used to examine the normality of the residuals. .515767 Sum of Wgt.740343 10% -1. also shows the skew in the residuals. summarize rstud.933529 The kdensity command is used below to display the distribution of the residuals (in yellow) as compared to a normal distribution (in red).9225476 99% 3.6660804 -1. rstudent .5167466 2.106763 2.42.089911 3.363443 2.587549 Obs 100 25% -.0068091 Largest Std.1976601 Mean .78. .44973 -1. . 1. Stata knew we wanted studentized residuals because we used the rstudent option after the comma. and that the 5 smallest values go as low as -1. predict rstud. Dev.764866 -1. graph box rstud Finally.753908 Skewness . another indicator of the positive skew.115274 -1. while the five highest values go from 2. detail Studentized residuals ------------------------------------------------------------Percentiles Smallest 1% -1.789388 5% -1. We can see the skew in the residuals below.59326 Variance 1.045107 95% 2.425914 Kurtosis 3.446876 90% 1.

91722 75% 40 56 90% 46. detail sqy ------------------------------------------------------------Percentiles Smallest 1% 6.4318 3 1908.4513345 99% 65 66 Kurtosis 3.0202 95% 53.1200437 3. 96) = 21.806 0. summarize sqy. Std.0817973 4.219272 Looking at the distribution using kdensity we can see that although the distribution of sqy is not completely normal.9353415 33. t P>|t| [95% Conf. rvfplot © Thierry Warin 27 © Thierry Warin 28 .3182 -----------------------------------------------------------------------------sqy | Coef. generate sqy = sqrt(y) .61973 .Let us try using a square root transformation on y creating sqy.14393 Prob > F = 0. 100 Mean 31. sqy below.4465366 . and examine the distribution of sqy.8 Largest Std. 11.8288354 R-squared = 0.5111456 _cons | 31.2082518 .00 99 142.76309 33.5682 96 86. normal 50% 31 We run the regression again using the transformed value.018 .1864127 . Dev. it is much improved. Err.3487791 . .6848214 x3 | . .98 Model | 5724.264 0.000 29.2786301 .3886 Total | 14060. regress sqy x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.000 .5 58 Variance 142. The residuals are more evenly distributed. .47637 -----------------------------------------------------------------------------The distribution of the residuals in the rvfplot below look much better.5 5 5% 13 8 10% 17.5 11 Obs 100 25% 24 11 Sum of Wgt. Interval] ---------+-------------------------------------------------------------------x1 | .1158643 2.5 64 Skewness .4071 ---------+-----------------------------Adj R-squared = 0.0000 Residual | 8335. .405 0.000 .720 0.508619 x2 | . kdensity sqy. The transformation has considerably reduced (but not totally eliminated) the skewness.0486412 .020202 Root MSE = 9.

028647 95% 2. The skewness is much better (.010917 2. predict rstud2. and outliers in the residuals. graph rstud2. and there are no outliers in the plot.718882 The distribution of the residuals below look nearly normal. Dev. box We create studentized residuals (called rstud2) and look at their distribution using summarize. Had we tried © Thierry Warin 29 © Thierry Warin 30 .262745 -1.117415 10% -1.451008 Kurtosis 2. kdensity rstud2.033796 90% 1. .180267 -2.825144 Obs 100 25% -. normal The avplots below also look improved (not perfect.014223 75% .6425427 2.0782854 Mean . . avplots The boxplot of the residuals looks symmetrical. 1.50% -.180998 Skewness .316003 2.789484 Sum of Wgt.085326 Variance 1. detail Studentized residuals ------------------------------------------------------------Percentiles Smallest 1% -2.243119 5% -1. . summarize rstud2.2682509 99% 2.7082379 -1.450073 2.550404 -2.27) and the 5 smallest and 5 largest values are nearly symmetric. . a square root transformation of the dependent variable addressed both problems in skewness in the residuals. rstudent .0026017 Largest Std. 100 In this case. but much improved).

755 -.0000 A scatterplot matrix is used below to look for higher order trends. 96) = 2.to address these problems via dealing with the outliers. generate x2sq = x2*x2 .00 Root MSE = 29. it can be useful to assess whether the residuals are skewed.09967 11. When there are outliers in the residuals. Std. 87) = 67.00 99 882.716146 R-squared = 0.587 ------------------------------------------------------------------------------ © Thierry Warin 31 © Thierry Warin 32 .3757505 -0. Interval] ---------+-------------------------------------------------------------------x1 | .3763 4 19167. is that there are no omitted variables (no significant higher order trends).981 -.1134093 . cubed) are present but omitted from the regression model. .7368946 x3 | .0647 ---------+-----------------------------Adj R-squared = 0.6064824 .087 -2.090776 R-squared = 0.61974 1.101495 _cons | 20.75 96 850. Err.8729 Total | 87318.g. 95) = 171. t P>|t| [95% Conf.edu/stat/stata/modules/reg/nonlin. the problem of the skewness of the residuals would have remained.730 0. as shown below. rhs Ramsey RESET test using powers of the independent variables Ho: model has no omitted variables F(9.0089643 . clear .3441 Prob > F = 0.0000 Residual | 10648.8780 ---------+-----------------------------Adj R-squared = 0. regress y x1 x2 x2sq x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 4.167 -----------------------------------------------------------------------------y| Coef.08334 Prob > F = 0.6237 95 112.023 .317 0. this suggest there are higher order trends in the data that we have overlooked. avplots Below we create x2sq and add it to the regression equation to account for the curvilinear relationship between x2 and y. .0850439 1.86 Prob > F = 0. Because the test is significant. regress y x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.21 Model | 5649. The null hypothesis. squared. .024 0.0355 Total | 87318.ats. the avplot for x2 (top right) exhibits a distinct curved pattern.25003 3 1883.00 Root MSE = 10. If so. Consistent with the scatterplot.313 0. . addressing the skewness may also solve the outliers at the same time.16468 -----------------------------------------------------------------------------The ovtest command with the rhs option tests whether higher order trend effects (e. graph matrix y x1 x2 x3 We can likewise use avplots to look for non-linear trend patterns.833301 x2 | -. We can see that there is a very clear curvilinear relationship between x2 and y.ucla.00 99 882.0915 Residual | 81668.3626687 0.5932696 .7548232 .00 Model | 76669.965335 43. ovtest. Regression diagnostics: Non-linearity use http://www.2560351 2.

and then x2sq was discarded since it was the same as the term Stata created. regress y x1 x2cent x2centsq x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 4.171 0. 95) = 171. rhs (note: x2cent^2 dropped due to collinearity) (note: x2cent^3 dropped due to collinearity) Ramsey RESET test using powers of the independent variables Ho: model has no omitted variables F(10.45 © Thierry Warin 33 © Thierry Warin 34 .8589132 1. .874328 x2centsq | 1.011738 25.4448688 _cons | 267. .39391 x2sq | . 85) = 0.71298 25.6237 95 112.2478 -----------------------------------------------------------------------------We use the ovtest again below.0907586 .x2mean .524 -3.2721586 .2954615 .e.2584843 . . t P>|t| [95% Conf.9799 10.0938846 2. 85) = 54.526496 1.812539 x3 | 1.0907586 . the results indicate that there are no more significant higher order terms that have been omitted from the model.015 0.2721586 .030834 x2 | 32.864632 x3 | 1.753 0.090775 R-squared = 0.0938846 2. Err.23 0.y| Coef.43 0.05) may merit further investigation.808669 -----------------------------------------------------------------------------As we expected.8780 ---------+-----------------------------Adj R-squared = 0. vif Variable | VIF 1/VIF ---------+---------------------x1 | 1. Below. subtract its mean) before squaring it.82489 . Stata created x2sq^2 which duplicated x2sq.14 0.753 0. and this time it does not drop any of the terms that we placed into the regression model. Err. egen x2mean = mean(x2) .00 99 882. ovtest.1706278 .812539 x2cent | 1. Interval] ---------+-------------------------------------------------------------------x1 | . rhs (note: x2sq dropped due to collinearity) (note: x2sq^2 dropped due to collinearity) Ramsey RESET test using powers of the independent variables Ho: model has no omitted variables F(11. the results of the ovtest will no longer be misleading.193 0. and the VIF values for x2 and x2sq will get much better.4448688 _cons | -.007 . Std.3437 -0. Stata gave us a note saying that x2sq was dropped due to collinearity. .1363948 -0.2954615 . We see that the VIF for x2 and x2sq are over 32.011738 25.0720997 . Interval] ---------+-------------------------------------------------------------------x1 | . we center x2 (called x2cent) and then square that value (creating x2centsq).14 0.2970677 .16 0.8729 Total | 87318. In testing for higher order trends.296 0. The reason for this is that x2 and x2sq are very highly correlated.3187645 x3 | .244488 x2centsq | .3763 4 19167. If we "center" x2 (i.25588 -16.1706277 .729 0.198 -.14 We try the ovtest again.78 We can solve both of these problems with one solution. generate x2cent = x2 .7208091 -24.0000 There is another minor problem.30 0.3441 Prob > F = 0.712 289. Std.0262898 .02 0. vif Variable | VIF 1/VIF ---------+---------------------x2sq | 32.587 -----------------------------------------------------------------------------y| Coef.1316641 1.00 Model | 76669.0000 Residual | 10648.000 246.4320141 x2 | -17. generate x2centsq = x2cent^2 We now run the regression using x2cent and x2centsq in the equation. A general rule of thumb is that a VIF in excess of 20 (or a 1/VIF or tolerance of less than 0.23 0.00 Root MSE = 10.57 Prob > F = 0.3187645 x3 | .639 0.000 .874328 ---------+---------------------Mean VIF | 16.198 -.4320141 x2cent | -. the VIF values are much better. but it has discarded the higher trend we just included. .0720997 .848 -. ovtest. however the results are misleading.2584843 . This time.000 -19.000 . . t P>|t| [95% Conf.1316641 1.007 . 
We use the vif command below to look for problems of multicollinearity.171 0.296 0.030959 x1 | 1. The resulting ovtest misleads us into thinking there may be higher order trends.978676 ---------+---------------------Mean VIF | 1.

96) = 65. The variability of the residuals at the left side of the graph is much smaller than the variability of the residuals at the right side of the graph. regress y x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.000 .180 0.90791 Prob > F = 0. it shows no curvilinear trend. Interval] ---------+-------------------------------------------------------------------x1 | .89807 34. The test indicates that the regression results are indeed heteroscedastic. . we can clearly see evidence for heteroscedasticity.3820447 x2 | .7334 -----------------------------------------------------------------------------y| Coef.72373 3 2977.ucla.6622 Total | 13286. .5813 -----------------------------------------------------------------------------We can use the hettest command to test for heteroscedasticity. rvfplot Note that if we examine the avplot for x2cent.314 0. we would have ignored the significant curvilinear component between x2 and y.6758811 49.011 .0000 Residual | 4352. .000 31.490543 _cons | 33.6724 ---------+-----------------------------Adj R-squared = 0.7559357 .715 0.203939 Root MSE = 6. .000 . so after adjusting for the other terms (including x2centsq) there is no longer any curved trend between x2cent and the adjusted value of y.5837503 .9281211 x3 | . Err. clear We try running a regression predicting y from x1 x2 and x3.46627 96 45. avplot x2cent Had we simply run the regression and reported the initial results.2158539 .19 99 134.edu/stat/stata/modules/reg/hetsc.3732164 .30 Prob > chi2 = 0. .0000 Looking at the rvfplot below that shows the residual by fitted (predicted) value.083724 2.68 Model | 8933. This is because the avplot adjusts for all other terms in the model.23969 .2558898 . avplots Regression Diagnostics: Heteroscedasticity use http://www.3381903 R-squared = 0.9178 We create avplots below and no longer see any substantial non-linear trends in the data.ats. so we need to further understand this problem and try to address it.578 0. Std.0591071 6.Prob > F = 0. © Thierry Warin 35 © Thierry Warin 36 .086744 8. t P>|t| [95% Conf.0496631 . hettest Cook-Weisberg test for heteroscedasticity using fitted values of y Ho: Constant variance chi2(1) = 21.

0796397 x3 | .0003 Looking at the rvfplot below indeed shows that the results are still heteroscedastic.000 3.0103432 x2 | . generate sqy = y^.640 0.003129 .000 5.03902155 R-squared = 0. generate lny = ln(y) .0118223 . The square root transformation was not successful.001734 6.06 We next try a natural log transformation.0280817 x3 | .5 . . but the test is still significant.0230141 .0000 Residual | 30. hettest Cook-Weisberg test for heteroscedasticity using fitted values of lny Ho: Constant variance © Thierry Warin 37 © Thierry Warin 38 .484862 -----------------------------------------------------------------------------We again try the hettest and the results are much improved.406144 3.017 .0070027 2.0309297 x2 | .445503 . rvfplot We will try to stabilize the variance by using a square root transformation.0170293 .000 .570379 5. .6858 ---------+-----------------------------Adj R-squared = 0.17710164 3 2. 96) = 69.56318 -----------------------------------------------------------------------------sqy | Coef.028 .0000 Residual | 3. t P>|t| [95% Conf.992 0.0040132 3 22. .4489829 96 .000 .0054677 .0426407 _cons | 5. .19754 -----------------------------------------------------------------------------lny | Coef.000 .74606877 96 .6744 Total | 96.317176905 R-squared = 0.226 0. Std.0230303 .0152643 _cons | 3. and then run the regression again.765 0.050 0.9231704 99 .0005921 . Err.0508362 . Err.794807 -----------------------------------------------------------------------------Using the hettest again.6843 ---------+-----------------------------Adj R-squared = 0.4529961 99 . and run the regression.682593 .0083803 . Std.0198285 173.0025448 9. 96) = 69.818 0. the chi-square value is somewhat reduced. Interval] ---------+-------------------------------------------------------------------x1 | .72570055 Prob > F = 0. t P>|t| [95% Conf.0013377 Prob > F = 0.85 Model | 8.Prob > chi2 = 0. .0024562 2.0072553 8.6760 Total | 11.974272688 Root MSE = .0565313 100.0049438 6.0328274 .37 Model | 66. regress lny x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.0652379 . regress sqy x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.120436065 Root MSE = . Interval] ---------+-------------------------------------------------------------------x1 | .432 0.000 . hettest Cook-Weisberg test for heteroscedasticity using fitted values of sqy Ho: Constant variance chi2(1) = 13. but the test for heteroscedasticity is still quite significant.521 0.0179788 .

229643 26.226 0.33932 1.dta .0179 While these results are not perfect. 96) = 69.0078081 .2635928 .ucla.8985 34.1986327 . clear Below we run an ordinary least squares (OLS) regression predicting y from x1. Err.0023746 .050 0.0000 Residual | 14406. .96 99 209. .765 0.304 0. but it is much improved. graph matrix y x1 x2 x3 © Thierry Warin 39 © Thierry Warin 40 .706552237 96 . 96) = 14.0007531 6. but x1 is not significant.0066292 _cons | 1.479269 1.0100019 . we will be content for now that this has substantially reduced the heteroscedasticity as compared to the original data.chi2(1) = 5.1037212 .022715651 Root MSE = . Std.64512 3 2119.08579 -----------------------------------------------------------------------------log10y | Coef.286 0.2845 Total | 20764.1399371 .1523206 1.1578149 3.0000 Residual | . t P>|t| [95% Conf. regress y x1 x2 x3 Perhaps you might want to try a log (to the base 10) transformation. The results suggest that x2 and x3 are significant.000 . x2.24884946 99 . . x2.0036395 . hettest Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.0011052 9.60 Prob > chi2 = 0. generate log10y = log10(y) .300 0.ats.0010667 2.06578 R-squared = 0.0179 Below we see that the rvfplot does not look perfect.007359919 R-squared = 0.195 -.3062 ---------+-----------------------------Adj R-squared = 0.818 0.000 . x1.12 Model | 6358. .576853 . rvfplot Cook-Weisberg test for heteroscedasticity using fitted values of log10y Ho: Constant variance chi2(1) = 5. We show that below.655 0.0086114 173. Interval] ---------+-------------------------------------------------------------------x1 | .496363 . Although we cannot see a great deal of detail in these plots (especially since we have reduced their size for faster web access) we can see that there is a single point that stands out from the rest. t P>|t| [95% Conf. Std.3149 96 150.513456 -----------------------------------------------------------------------------The results for the hettest are the same as before.6760 Total | 2.78014 -----------------------------------------------------------------------------Let's start an examination for outliers by looking at a scatterplot matrix showing scatterplots among y.514099074 Prob > F = 0.85 Model | 1.54837 Prob > F = 0.60 Prob > chi2 = 0.0051344 . .3533915 .54229722 3 . Regression Diagnostics: Outliers use http://www.8901132 x3 | .5009867 x2 | .edu/stat/stata/modules/reg/outlier.000 29.25 -----------------------------------------------------------------------------y| Coef. Whether we chose a log to the base e or a log to the base 10.0121957 x3 | . the effect in reducing heteroscedasticity (as measured by hettest) was the same. This looks like a potential outlier.000 1.6858 ---------+-----------------------------Adj R-squared = 0.000 .1075346 3.028 .747071 Root MSE = 12. and x3. Interval] ---------+-------------------------------------------------------------------x1 | . and x3.004492 x2 | . Err.0002571 .001 . regress log10y x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.5668459 _cons | 32.

using the symbol([case]) option that indicates to make the symbols the value of the variable case. The plot in the top left shows x1 on the horizontal axis. ranging from 1 to 100. Likewise. and the residual value of y after using x1 and x2 as predictors. The variable case is the case id of the observation. graph matrix y x1 x2 x3. If you run this in Stata yourself. symbol([case]) We can use the lvr2plot command to obtain a plot of the leverage by normalized residual squared plot. It is difficult to see below.We repeat the scatterplot matrix below. . after adjusting y for x2 and x3. Returning to the top left plot. and the residual value of y after using x1 and x3 as predictors. the top right plot shows x2 on the horizontal axis. this shows us the relationship between x1 and y. we see that case 100 appears to be an outlier in each plot. these plots allow you to view each of the scatterplots much like you would look at a scatterplot from a simple regression analysis with one predictor. The most problematic outliers would be in the top right of the plot. and the residual value of y after using x2 and x3 as predictors. symbol([case]) The avplots command gives added variable plots (sometimes called partial regression plots). © Thierry Warin 41 © Thierry Warin 42 . symbol([case]) The rvfplot command shows us residuals by fitted (predicted) values. rvfplot. In short. In looking at these plots. lvr2plot. the case numbers will be much easier to see. . but the case for the outlier is 100. The line plotted has the slope of the coefficient for x1 and is the least squares regression line for the data in the scatterplot. and the bottom plot shows x3 on the horizontal axis. This plot shows us that case 100 has a very large residual (compared to the others) but does not have exceptionally high leverage. indicating both high leverage and a large residual. and also indicates that case 100 has the largest residual. .

graph box d. . By contrast. showing that you can obtain an avplot for a single variable at a time. . . As we would expect. avplot x2. We make a boxplot of that below. symbol([case]) graph command to make a boxplot looking at the studentized residuals. for x3 the outlier is right in the center and seems to have no influence on the slope (but would pull the entire line up influencing the intercept). cooksd . but not exceptionally high leverage. the outlier for x2 seems to be tugging the line up at the right giving the line a greater slope. symbol([case]) Below we repeat the avplot just for the variable x2. symbol([case]) © Thierry Warin 43 © Thierry Warin 44 . looking for outliers. predict l. leverage . symbol([case]) We use the predict command below to create a variable containing the studentized residuals called rstu. we can see the type of influence that it has on each of the regression lines. The boxplot shows some observations that might be outliers based on their leverage. . Note that observation 100 is not among them. and 100 shows to have the highest value for Cooks D. predict d. rstudent . symbol([case]) Below we use the predict command to create a variable called l that will contain the leverage for each observation. avplots. graph box rstu. observation 100 stands out as an outlier. We can then use the Below use use the predict command to compute Cooks D for each observation. Finally. Also. For x1 we see that x1 seems to be tugging the line up at the left giving the line a smaller slope. graph box l. Stata knew we wanted leverages because we used the leverage option after the comma. we can better see the influence of observation 100 tugging the regression line up at the right. Stata knew we wanted studentized residuals because we used the rstudent option after the comma. This is consistent with the lvr2plot (see above) that showed us that observation 100 had a high residual. predict rstu. possibly increasing the overall slope for x2. .Beyond noting that x1 is an outlier.

the larger the symbol will be. . graph rstu l. As we would expect. graph rstu DFx1. dfbeta DFx1: DFbeta(x1) DFx2: DFbeta(x2) DFx3: DFbeta(x3) Below we make a graph of the studentized residual by DFx1.We can make a plot that shows us the studentized residual. a very large value of Cook's D. and DFx3. From our examination of the avplots above. . DFx2. symbol([case]) We repeat the graph above. and the size of the bubble reflects the size of Cook's D (d). The dfbeta value shows the degree the coefficient will change when that single observation is omitted. DFx1. . and cooks D all in one plot. it appeared that the outlier for case 100 influences x1 and x2 much more than it influences x3. graph rstu l [w=d] The leverage gives us an overall idea of how influential an observation is. The [w=d] tells Stata to weight the size of the symbol by the variable d so the higher the value of Cook's D. We can use the dfbeta command to generate dfbeta values for observation. but does not have a very large leverage. symbol([case]) © Thierry Warin 45 © Thierry Warin 46 . how influential a single observation can be. and for each predictor. This indicates that the presence of observation x1 decreases the value of the coefficient for x1 and if it was removed. except using the symbol([case]) option to show us the variable case as the symbol. leverage. and that it has a large negative DFBeta. the plot below shows an observation that has a very large residual. The output below shows that three variables were created. We see that observation 100 has a very high residual value. leverage (l) on the horizontal axis. which shows us that the observation we identified above to be case = 100. the coefficient for x1 would get larger. . This allows you to see. The graph command below puts the studentized residual (rstud) on the vertical axis. for a given predictor.

we could look at the DFbeta values right in the added variable plots. generate rDFx2 = round(DFx2. .154.1578 =. graph rstu DFx3. This suggests that the exclusion of observation 100 would have little impact on the coefficient for x3. symbol([rDFx2]) Finally. enhances the coefficient for x2. As a rule of thumb. it indicates that the coefficient will be . Instead of looking at this information separately. symbol([case]) © Thierry Warin 47 © Thierry Warin 48 . and has little impact on the coefficient for x3. . creating rDFx2.98 standard errors smaller. graph rstu DFx2. but it has a small DFBeta (small DFx2). It looks like observation 100 diminishes the coefficient for x1. We can see that the outlier at the top right has the largest DFbeta value and that observation enhances the coefficient (. we make a plot showing the studentized residual and DFx3.98 * . the value of the DFbeta tells us exactly how much smaller. Like above. We can see that the information provided by the avplots and the values provided by dfbeta are related. This shows that observation 100 has a large residual. We then include rDFx2 as a symbol in the added variable plot below.01) . that coefficient would get smaller.0.576 to .Below we make a graph of the studentized residual by the value of DFx2. Below we take the DFbeta value for x2 (DFx2) and round it to 2 decimal places.422. In fact. This suggests that the presence of observation 100 enhances the coefficient for x2 and its removal would lower the coefficient for x2. a DFbeta value of 1 or larger is considered worthy of attention. we see that x2 has a very large residual. avplot x2. Removing this observation will make the coefficient for x2 go from .576) and if this value were omitted. or . symbol([case]) The results of looking at the DFbeta values is consistent with our observations looking at the avplots. but instead DFx2 is a large positive value. .

The coefficient for x1 increased (from . .325315 . we look at the data for observation 100 and see that it has a value of 110.000 0.4092442 R-squared = 0.0784904 rDFx2 .670397 x3 | . the coefficient for x3 was changed very little (from .315 0.56752 Prob > F = 0. the coefficient for x2 decreased by the exact amount the DFbeta indicated (from .665 0. list in 100 Observation 100 case 100 x1 -5 x2 8 x3 0 y 110 rstu 7. The value really should have been 11.19 to . Interval] ---------+-------------------------------------------------------------------x1 | .23692 -----------------------------------------------------------------------------We repeat the regression diagnostic plots below.42).3448104 .9819155 DFx3 .000 . The coefficients change just as the regression diagnostics indicated. lvr2plot The plot below is the same as above. Note that for x2 the coefficient went from being non-significant to being significant. .126493 3.0861919 4. allowing us to see that observation 100 is the outlying case.28053 . Std.1220892 2.8180526 DFx2 . but shows us the case numbers (the variable case) as the symbol.34). We checked the original data.1737208 .35 to . regress y x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.5676601 x2 | .32415 33. 96) = 20.Having fixed the value of y we run the regression again.4193103 . Err. avplot x2.738 0.009 . We need to look carefully at the scale of these plots since the axes will be rescaled with the omission of the large outlier. replace y = 11 if (case == 100) (1 real change made) © Thierry Warin 49 © Thierry Warin 50 .3878 ---------+-----------------------------Adj R-squared = 0.27 Model | 5863.1682236 . symbol([case]) Below.70256 3 1954. and 3 points with larger leverage than most.99 99 152.28744 96 96. t P>|t| [95% Conf.3687 Total | 15118.57 to .001 . As we expected.98 We change the value of y to be 11.5159 _cons | 31. .717071 Root MSE = 9.2893599 DFx1 -. The lvr2plot shows a point with a larger residual than most but low leverage.325).9855929 31.0000 Residual | 9255.829672 l . .0298236 d .8188 -----------------------------------------------------------------------------y| Coef. the correct value. and found that this was a data entry error. . but a small residual.08297 .000 29.

(We first drop the variables l rstu and d because the predict command will not replace existing values). studentized residuals. leverage . .91563 Prob > F = 0. yet none of them are significant.0000) and these predictors account for 40% of the variance in y (R-squared = 0. we note that the test of all four predictors is significant (F = 16. If we look more carefully. we would have used the original results which would have underestimated the impact of x1 and overstated the impact of x2. and Cook's D. Had we skipped checking the residuals. y. Regression Diagnotics: Multicollinearity use http://www. regress y x1 x2 x3 x4 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 4. clear Below we run a regression predicting y from x1 x2 x3 and x4.The rvfplot below shows a fairly even distribution of residuals without any points that dramatically stand out from the crowd.4080 We create values of leverage. rvfplot . Let us investigate further. rstudent © Thierry Warin 51 © Thierry Warin 52 . It seems like a contradiction that the combination of these 4 predictors should be so strongly related to y. we can tentatively state that these revised results are good. . predict d. predict rstu.40). None of the points really jump out as having an exceptionally high residual.33747 95 91.5719733 R-squared = 0. graph rstu l [w=d] The avplots show a couple of points here and there that stand out from the crowd. p = 0.ats. . Looking at an avplot with the DFbeta value in each plot would be a useful followup to assess the influence of these points on the regression coefficients.37.66253 4 1498. cooksd We then make the plot that shows the residual. predict l. drop l rstu d . .edu/stat/stata/modules/reg/multico. 95) = 16. .37 Model | 5995. avplots Although we could scrutinize the data a bit more closely. leverage and Cook's D value. If we were to report these results without any further checking. we will leave this up to the reader.ucla. For now. leverage and Cook's D all in one graph. we would conclude that none of these predictors are significant predictors of the dependent variable.0000 Residual | 8699.

679 0.5693 -----------------------------------------------------------------------------y| Coef. we see that the standard errors in the table above were much larger. Err. 96) = 21.0000 x4 | 0.78069 96 91. t P>|t| [95% Conf.3831 Root MSE = 9.225535 _cons | 31. t P>|t| [95% Conf.286694 1.3853 Total | 14695.356131 x3 | 1.005687 x1 | 91. Use x1 as a dependent variable.001869 x3 | 175.05215 1. and use x2 x3 and x4 as predictors and compute the Rsquared (the proportion of variance that x2 x3 and x4 explain in x1) and then take 1-Rsquared. If we look at x4.513 0.17 If we examine the correlations among the variables. This makes sense.8370988 1. The solutions are often driven by the nature of your study © Thierry Warin 53 © Thierry Warin 54 .3136 0.8971469 3. corr x1 x2 x3 x4 (obs=100) | x1 x2 x3 x4 ---------+-----------------------------------x1 | 1. because when a variable has a low tolerance. and much better than the prior results. .16 0.69 Model | 5936. A tolerance (1/VIF) can be described in this way. vif Variable | VIF 1/VIF ---------+---------------------x1 | 1.2372989 R-squared = 0.97 0.092 0.134 0.0838481 4.12 0.0000 x2 | 0.6969874 x3 | .7827429 3. Std.9155806 3.152135 x2 | 1.864654 x3 | 1. also called tolerances).234 0.0000 x3 | 0. Note that the variance explained is about the same (still about 40%) but now the predictors x1 x2 and x3 are now significant. With these improved tolerances.40831 -----------------------------------------------------------------------------Below we look at the VIF and tolerances and see that they are very good.7790 1.434343 Adj R-squared = 0.2021 1.434343 Root MSE = 9.9709127 32.3553 1.298443 . the standard errors in the table above are reduced. .024484 1.4527284 .000 29.1187692 2.2084695 .422 -2.010964 (the value of 1/VIF for x1).0626879 .21 0. In this example. it seems that x4 is most strongly related to the other predictors. .83 0. Interval] ---------+-------------------------------------------------------------------x1 | 1.2% of the variance in x4 is not explained by x1 x2 and x3. so we try removing x4 from the regression equation.806 0.278 -.000 . Err.9587921 32.892234 ---------+---------------------Mean VIF | 1.21931 3 1978. 1-Rsquared equals 0. A general rule of thumb is that a VIF in excess of 20.7281 0.---------+-----------------------------Total | 14695. we see that less than . using x1 as an example. or a tolerance of 0.000 29. You can see that these results indicate that there is a problem of multicollinearity.038979 -0.4040 ---------+-----------------------------Adj R-squared = 0. its standard error will be increased.60194 33.6516 0.5341981 x2 | .0000 Residual | 8758. Std. This means that only about 1% of the variance in x1 is not explained by the other predictors.118277 1.00 99 148. regress y x1 x2 x3 Source | SS df MS Number of obs = 100 ---------+-----------------------------F( 3.014 .133 0.5518 -----------------------------------------------------------------------------y| Coef.05 or less may be worthy of further investigation.3466306 .00 99 148. If you compare the standard errors in the table above with the standard errors below.73977 Prob > F = 0.000 .61912 .899733 1.260 -.1230534 3. vif Variable | VIF 1/VIF ---------+---------------------x4 | 534.566 0.042406 1.280417 x4 | -. 
Interval] ---------+-------------------------------------------------------------------x1 | .220 -.17 We should emphasize that dropping variables is not the only solution to problems of multicollinearity.50512 .1801934 .69161 33.0000 We might conclude that x4 is redundant.010964 x2 | 82.812781 x2 | 1. .23 0.69 0.191635 1.012093 ---------+---------------------Mean VIF | 221. and is really not needed in the model.5130679 _cons | 31.54662 -----------------------------------------------------------------------------We use the vif command to examine the VIF values (and 1/VIF values.859 0.

Regression Diagnostics: Non-Independence

. use http://www.ats.ucla.edu/stat/stata/modules/reg/nonind, clear

Below we run a regression predicting y from x1, x2 and x3.

. regress y x1 x2 x3

These results suggest that none of the predictors are related to y (the R-squared is only about 0.01). Let's create and examine the residuals for this analysis, showing the residuals over time. We first need to tell Stata the name of the time variable, using the tsset command; Stata replies back that the time values range from 1 to 100.

. predict rstud, rstudent
. tsset time
        time variable:  time, 1 to 100
. graph rstud time

Above we see the residuals are clearly not distributed evenly across time, suggesting the results are not independent over time. We can use the dwstat command to check whether the results are independent over time.

. dwstat
Durbin-Watson d-statistic(  4,   100) =  .1389823

The dwstat command gives a value of "d", but not a test of the significance of "d". If there were no autocorrelation, the value of "d" would be 2, and the closer the value is to 0, the greater the autocorrelation. The value of .14 is sufficiently close to 0 to indicate a strong autocorrelation (see Chatterjee, Hadi and Price, section 8.3 for more information). Let's therefore run the analysis using the arima command, with a first-order autocorrelation. This model permits correlations between observations across time.

. arima y x1 x2 x3, ar(1)

(The iteration log, which converges to a log likelihood of about -148.04, and the estimates table are not reproduced here.)
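To see what the AR(1) specification changes, it can help to store and compare the two sets of estimates side by side. A minimal sketch (the stored names ols and ar1 are arbitrary):

. quietly regress y x1 x2 x3
. estimates store ols
. quietly arima y x1 x2 x3, ar(1)
. estimates store ar1
. estimates table ols ar1, se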

Time-Series Cross-Section Analyses (TSCS) or Panel data models

A balanced panel has all its observations, that is, the variables are observed for each entity and each time period. A panel that has some missing data for at least one time period for at least one entity is called an unbalanced panel.

Assumptions of the OLS estimator

1. The conditional distribution of ui given X1i, ..., Xki has a mean of zero. This means that the other factors captured in the error term are unrelated to X1i, ..., Xki: the correlation between X1i, ..., Xki and ui should be nil. This is the most important assumption in practice. If this assumption does not hold, then it is likely because there is an omitted variable bias; one should test for omitted variables using (Ramsey and Braithwaite, 1931)'s test.
2. (X1i, ..., Xki, Yi), i = 1, ..., n are Independently and Identically Distributed. This is to be sure that there is no selection bias in the sample. This second assumption holds in many cross-sectional data sets, but it is inappropriate for time series data.
3. No perfect multicollinearity. The regressors are said to be perfectly multicollinear if one of the regressors is a perfect linear function of one of the other regressors. In case of perfect multicollinearity, it is impossible to compute the OLS estimator.
4. The fourth moments of X1i, ..., Xki and ui are nonzero and finite.
5. The error term ui is homoskedastic if the variance of the conditional distribution of ui given X1i, ..., Xki is constant for i = 1, ..., n, and in particular does not depend on X1i, ..., Xki; otherwise, the error term is heteroskedastic. Whether the errors are homoskedastic or heteroskedastic, the OLS estimator is unbiased, consistent and asymptotically normal, but if the errors are heteroskedastic one should use heteroskedasticity-robust standard errors. To test for heteroskedasticity, we use (Breusch and Pagan, 1979)'s test.
6. In cross-sectional data the errors are uncorrelated across entities, conditional on the regressors; in panel data we additionally assume that the errors are uncorrelated across time as well as across entities, conditional on the regressors.

A. The fixed effects regression model

Yit = β0 + β1 Xit + β2 Zi + uit     (12)

where Zi is an unobserved variable that varies from one state to the next but does not change over time. We can rewrite equation (12) as:

Yit = β1 Xit + αi + uit     (13)

where αi = β0 + β2 Zi, so that each entity has its own intercept absorbing everything that is constant over time within that entity.
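A minimal sketch of how the fixed effects model in equations (12)-(13) can be estimated in Stata (the identifiers id and year and the variables y and x are placeholders, and the commands themselves are only introduced later in this guide):

. tsset id year           // declare the panel and time identifiers
. xtreg y x, fe           // within (fixed effects) estimate of beta1
. areg y x, absorb(id)    // the same slope via entity dummies (LSDV)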

B. Regression with time fixed effects

Just as fixed effects for each entity can control for variables that are constant over time but differ across entities, time fixed effects can control for variables that are constant across entities but evolve over time:

Yit = β0 + β1 Xit + β2 Zi + β3 St + uit     (14)

where St is an unobserved variable, and the subscript "t" emphasizes that S changes over time but is constant across states.

Running Pooled OLS regressions in Stata

The simplest estimator for panel data is pooled OLS. In most cases this is unlikely to be adequate. Suppose we have observations on n units or individuals and there are k independent variables of interest.

The fixed and random effects models

The fixed and random effects models have in common that they decompose the unitary pooled error term uit into a unit-specific and time-invariant component, vi, and an observation-specific error, εit. In the fixed effects approach, the vi are treated as fixed parameters (in effect, unit-specific y-intercepts), which are to be estimated. This can be done by including a dummy variable for each cross-sectional unit (and suppressing the global constant); this is sometimes called the Least Squares Dummy Variables (LSDV) method. Equivalently, the group effects can be swept out by taking deviations from the group means (the "de-meaned" variables method), after which the remaining parameters can be estimated. In this approach we do not make any hypotheses on the "group effects" (that is, the time-invariant differences in mean between the groups) beyond the fact that they exist, and that can be tested; see below.

Some panel data sets contain variables whose values are specific to the cross-sectional unit but which do not vary over time. If you want to include such variables in the model, the fixed effects option is simply not available: when the fixed effects approach is implemented using dummy variables, the problem is that the time-invariant variables are perfectly collinear with the per-unit dummies, and when it is implemented by subtracting the group means, the issue is that after de-meaning these variables are nothing but zeros.

In contrast to the fixed effects model, in the random effects model the vi are not treated as fixed parameters but as random drawings from a given probability distribution: zero-mean random variables, uncorrelated with the regressors. This requires that the individual effects are representable as a legitimate part of the disturbance term. Greater efficiency may then be gained using generalized least squares (GLS), taking into account the covariance structure of the error term. GLS estimation is equivalent to OLS using "quasi-demeaned" variables, that is, variables from which we subtract a fraction of their average. The random effects estimator is in effect a matrix-weighted average of pooled OLS and the "between" estimator. If all the variance is attributable to the individual effects, the fixed effects estimator is optimal; if, on the other hand, individual effects are negligible, then pooled OLS turns out, unsurprisingly, to be the optimal estimator. A somewhat analogous prohibition applies to the random effects estimator: if k > n, the "between" estimator is undefined (since we have only n effective observations) and hence so is the random effects estimator.

Choice of estimator

Which panel method should one use, fixed effects or random effects? One way of answering this question is in relation to the nature of the data set. If the panel comprises observations on a fixed and relatively small set of units of interest (say, the member states of the European Union), there is a presumption in favor of fixed effects. If it comprises observations on a large number of randomly selected individuals (as in many epidemiological and other longitudinal studies), there is a presumption in favor of random effects.

Besides this general heuristic, various statistical issues must be taken into account. The celebrated Gauss-Markov theorem, according to which OLS is the best linear unbiased estimator (BLUE), depends on the assumption that the error term is independently and identically distributed (IID). If these assumptions are not met (and they are unlikely to be met in the context of panel data), OLS is not the most efficient estimator. If one does not fall foul of one or other of the prohibitions mentioned above, the choice between fixed effects and random effects may be expressed in terms of the two econometric desiderata, efficiency and consistency: we could say that there is a tradeoff between robustness and efficiency. The fixed effects approach makes no assumption about the relationship between the group effects and the regressors, whereas the random effects approach attempts to model the group effects as drawings from a probability distribution instead of removing them.
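A minimal sketch of the three estimators just discussed, side by side (the panel must have been declared first; variable names are placeholders):

. reg y x              // pooled OLS
. xtreg y x, fe        // fixed effects (within / LSDV)
. xtreg y x, re        // random effects (feasible GLS)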

The richer hypothesis set of the random-effects estimator ensures that parameters for time-invariant regressors can be estimated, and that estimation of the parameters for time-varying regressors is carried out more efficiently. These advantages, though, are tied to the validity of the additional hypotheses. If, for example, there is reason to think that individual effects may be correlated with some of the explanatory variables, then the random-effects estimator would be inconsistent, while fixed-effects estimates would still be valid. It is precisely on this principle that the Hausman test is built: if the fixed- and random-effects estimates agree, to within the usual statistical margin of error, there is no reason to think the additional hypotheses invalid, and as a consequence no reason not to use the more efficient RE estimator. The fixed-effects estimator "always works", but at the cost of not being able to estimate the effect of time-invariant regressors.

Testing panel models

Panel models carry certain complications that make it difficult to implement all of the tests one expects to see for models estimated on straight time-series or cross-sectional data. When you estimate a model using fixed effects, you automatically get an F-test for the null hypothesis that the cross-sectional units all have a common intercept. When you estimate using random effects, the Breusch-Pagan and Hausman tests are presented automatically. The Breusch-Pagan test is the counterpart to the F-test mentioned above: the null hypothesis is that the variance of vi equals zero, and if this hypothesis is not rejected, then again we conclude that the simple pooled model is adequate. The Hausman test probes the consistency of the GLS estimates. The null hypothesis is that these estimates are consistent, that is, that the requirement of orthogonality of the vi and the Xi is satisfied. The test is based on a measure, H, of the "distance" between the fixed-effects and random-effects estimates, constructed such that under the null it follows the χ2 distribution with degrees of freedom equal to the number of time-varying regressors in the matrix X. If the value of H is "large", this suggests that the random effects estimator is not consistent and the fixed-effects model is preferable.

Robust standard errors

For most estimators, Stata offers the option of computing an estimate of the covariance matrix that is robust with respect to heteroskedasticity and/or autocorrelation (and hence also robust standard errors). Robust covariance matrix estimators are available for the pooled and fixed effects models, but not currently for random effects.

Let's now turn to estimation commands for panel data. The first type of regression that you may run is a pooled OLS regression, which is simply an OLS regression applied to the whole dataset. This regression does not take into account that you have different individuals across time periods: it ignores the panel nature of the dataset.

reg ln_wage grade age ttl_exp tenure black not_smsa south

In the previous command you do not need to type age1, age2 and so on: you just need to type age, and you are instructing Stata to include all the variables starting with the expression age in the regression. If you want to control for some categories:

xi: reg dependent ind1 ind2 i.category1 i.category2 i.time

Suppose you want to observe the internal results saved in Stata associated with the last estimation (this is valid for any regression that you perform). In order to observe them, you would type:

ereturn list

Let's now perform a regression where only the variation of the means across individuals is considered. This is the between regression:

xtreg ln_wage grade age ttl_exp tenure black not_smsa south, be
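As a sketch of how robust standard errors are requested with these estimators (the robust option shown here is the form used elsewhere in this guide; whether it is appropriate depends on your data):

reg ln_wage grade age ttl_exp tenure black not_smsa south, robust
areg ln_wage grade age ttl_exp tenure black not_smsa south, absorb(idcode) robust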

Running Panel regressions in Stata

In empirical work with panel data you are always concerned with choosing between two alternative regressions: fixed effects (or within, or least squares dummy variables, LSDV) estimation, and random effects (or feasible generalized least squares, FGLS) estimation.

The fixed effects (or within) regression is an OLS regression of the form:

(yit - yi. - y.t + y..) = (xit - xi. - x.t + x..)β + (vit - vi. - v.t + v..)

where yi., xi. and vi. are the means of the respective variables (and of the error) within each individual across time; y.t, x.t and v.t are the means of the respective variables (and of the error) within each time period across individuals; and y.., x.. and v.. are the overall means of the respective variables (and of the error).

In the one-way model, the error term is the sum of two components: 1. a specific individual effect, and 2. an additional idiosyncratic term. In the two-way model, the error term is the sum of three components: 1. a specific individual effect, 2. a specific time effect, and 3. an additional idiosyncratic term.

If there is correlation between the individual and/or time effects and the independent variables, then the individual and time effects (the fixed effects model) must be estimated as dummy variables in order to solve the endogeneity problem. If you have no such correlation, then the random effects model should be used, because it is a weighted average of the between and within estimations. Statistically, fixed effects are always a reasonable thing to do with panel data (they always give consistent results), but they may not be the most efficient model to run. Random effects will give you better P-values, as they are a more efficient estimator, so you should run random effects if it is statistically justifiable to do so. The generally accepted way of choosing between fixed and random effects is running a Hausman test, described below.
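To make the within transformation concrete, here is a minimal one-way sketch done by hand (the identifier id and the variables y and x are placeholders; in practice xtreg, fe performs this transformation for you):

. egen ybar = mean(y), by(id)     // individual means of y
. egen xbar = mean(x), by(id)     // individual means of x
. gen y_w = y - ybar              // de-meaned y
. gen x_w = x - xbar              // de-meaned x
. reg y_w x_w                     // within (fixed effects) slope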

Choosing between Fixed effects and Random effects? The Hausman test

It is absolutely fundamental that the error term is not correlated with the independent variables. The Hausman test checks a more efficient model against a less efficient but consistent model, to make sure that the more efficient model also gives consistent results: it tests the null hypothesis that the coefficients estimated by the efficient random effects estimator are the same as the ones estimated by the consistent fixed effects estimator. If they are not significantly different (P-value, Prob>chi2, larger than .05), then it is safe to use random effects. If you get a significant P-value, however, you should use fixed effects.

To run a Hausman test comparing fixed with random effects in Stata, you need to first estimate the fixed effects model, save the coefficients so that you can compare them with the results of the next model, estimate the random effects model, and then do the comparison:

1. xtreg dependentvar independentvar1 independentvar2 ..., fe
2. estimates store fixed
3. xtreg dependentvar independentvar1 independentvar2 ..., re
4. estimates store random
5. hausman fixed random

If you want a fixed effects model with robust standard errors, you can use the following command:

areg ln_wage grade age ttl_exp tenure black not_smsa south, absorb(idcode) robust

You may also be interested in running a maximum likelihood estimation in panel data. You would type:

xtreg ln_wage grade age ttl_exp tenure black not_smsa south, mle

If you qualify for a fixed effects model, should you include time effects?

Another important question when you are doing empirical work in panel data is whether to include time effects (time dummies) in your fixed effects model. To perform the test for the inclusion of time dummies in our fixed effects regression, we proceed in two steps. First, we run the fixed effects regression including the time dummies; in the next command the time dummies are abbreviated to y* (see "Generating time dummies"), but you could type them all if you prefer:

1. xtreg ln_wage grade age ttl_exp tenure black not_smsa south y*, fe

Second, we apply the testparm command, the test for the time dummies, which takes as its null hypothesis that the time dummies are not jointly significant:

2. testparm y*

We reject the null hypothesis that the time dummies are not jointly significant if the p-value is smaller than 10%, and as a consequence our fixed effects regression should include time effects.
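As an alternative to generating the dummies by hand, a sketch using the xi prefix introduced earlier (assuming, purely for illustration, that the panel's time variable is called year, so that xi creates dummies named _Iyear_*):

xi: xtreg ln_wage grade age ttl_exp tenure black not_smsa south i.year, fe
testparm _Iyear*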

Fixed effects or random effects when time dummies are involved: a test

What about if the inclusion of time dummies in our regression would permit us to use a random effects model for the individual effects? This question is not usually considered in typical empirical work; the purpose here is to show you an additional test for random effects in panel data.

1. First, we run a random effects regression including the time dummies:

xtreg ln_wage grade age ttl_exp tenure black not_smsa south y*, re

2. Then we apply the xttest0 command, which takes random effects as its null hypothesis:

xttest0

3. The null hypothesis of random effects is again rejected if the p-value is smaller than 10%, in which case we should use a fixed effects model with time effects.

Dynamic panels and GMM estimations

Special problems arise when a lag of the dependent variable is included among the regressors in a panel model. If the error uit includes a group effect vi, then yit-1 is bound to be correlated with the error, since the value of vi affects yi at all t. That means that OLS will be inconsistent as well as inefficient. The fixed-effects model sweeps out the group effects and so overcomes this particular problem, but a subtler issue remains, which applies to both fixed and random effects estimation: the lagged dependent variable is still correlated with the transformed error, and estimators which ignore this correlation will be consistent only as T → ∞ (in which case the marginal effect of εit on the group mean of y tends to vanish).

One strategy for handling this problem was proposed by Anderson and Hsiao (1981). Instead of de-meaning the data, they suggest taking the first difference, an alternative tactic for sweeping out the group effects, and using lagged values as instruments. Although the Anderson-Hsiao estimator is consistent, it is not most efficient: it does not make the fullest use of the available instruments, nor does it take into account the differenced structure of the error εit. It is improved upon by the methods of Arellano and Bond (1991) and Blundell and Bond (1998), whose rationale is that of a GMM estimator. This approach has the double effect of handling heteroskedasticity and/or serial correlation, plus producing estimators that are asymptotically efficient. However, computing the covariance matrix of the 2-step estimator via the standard GMM formulae has been shown to produce grossly biased results in finite samples, and one-step estimators have therefore sometimes been preferred on the grounds that they are more robust; implementing the finite-sample correction devised by Windmeijer (2005) leads to standard errors for the 2-step estimator that can be considered relatively accurate.

Stata implements natively the Arellano-Bond estimator. Two commands that are very useful in empirical work are the Arellano and Bond estimator (the GMM estimator) and the Arellano and Bover estimator (system GMM).
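A minimal sketch of the native Arellano-Bond command, using the wage example from above purely for illustration (the choice of regressors, the single lag of the dependent variable, and the panel declaration are assumptions, not the author's specification):

tsset idcode year
xtabond ln_wage age ttl_exp tenure, lags(1)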

Both commands permit you to deal with dynamic panels (where you want to use lags of the dependent variable as independent variables) as well as with problems of endogeneity, so you may want to have a look at them. The commands are, respectively, xtabond and xtabond2.

xtabond is a built-in command in Stata, so in order to check how it works, just type:

help xtabond

xtabond2 is not a built-in command in Stata: you must get it from the net (this is another feature of Stata: you can always get additional commands from the net). You type the following:

findit xtabond2

The next steps to install the command should be obvious.

How does it work?

The xtabond2 command allows you to estimate dynamic models either with the GMM estimator in difference or with the GMM estimator in system. The general form of the command is:

xtabond2 dep_variable ind_variables (if, in), gmm(list1, options1) iv(list2, options2) two robust small noleveleq

1. When noleveleq is specified, it is the GMM estimator in difference that is used; if noleveleq is not specified, it is the GMM estimator in system that is used.

2. gmm(list1, options1): list1 is the list of the non-exogenous independent variables, and options1 may take the following values: lag(a b), eq(diff), eq(level), eq(both) and collapse.
- lag(a b) means that for the equation in difference, the lagged variables (in level) of each variable from list1, dated from t-a to t-b, will be used as instruments, whereas for the equation in level, the first differences dated t-a+1 will be used as instruments. If b = ., b is taken as infinite; by default a = 1 and b = ..
- eq(diff), eq(level) and eq(both) mean that the instruments must be used, respectively, for the equation in first difference, for the equation in level, or for both. By default the option is eq(both).
- collapse reduces the size of the instruments matrix and helps prevent the overestimation bias in small samples that arises when the number of instruments is close to the number of observations.
Example: gmm(x, lag(2 3)) gmm(y, lag(1 2)) means that for variable x the lagged values of two and three periods will be used as instruments, whereas for variable y the lagged values of one and two periods will be used as instruments. Example: gmm(x y, lag(2 .)) means that all the lagged values of x and y, lagged by at least two periods, will be used as instruments.

3. iv(list2, options2): list2 is the list of variables that are strictly exogenous, and options2 may take the following values: eq(diff), eq(level), eq(both), pass and mz.
- eq(diff), eq(level) and eq(both): see above.
- By default, the exogenous variables are differentiated to serve as instruments in the equations in first difference, and are used undifferentiated to serve as instruments in the equations in level. The pass option prevents the exogenous variables from being differentiated for the equations in first difference, so that a variable in level can be used as an instrument in the equation in level as well as in the equation in difference.
- The mz option replaces the missing values of the exogenous variables by zero, thus allowing observations whose data on the exogenous variables are missing to be included in the regression. These options impact the coefficients only if the variables concerned are exogenous.

4. Option two specifies the use of the GMM estimation in two steps. Although this two-step estimation is asymptotically more efficient, it leads to biased results in finite samples; to fix this issue, the xtabond2 command applies a correction of the covariance matrix for finite samples. So far, there is no test to know whether the one-step or the two-step GMM estimator should be used.

5. Option robust allows you to correct the t-tests for heteroscedasticity.

6. Option small replaces the z-statistics by t-test results.
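Putting the pieces together, a sketch of a call that follows the syntax exactly as described above (the variables y, x1 and x2, the lag limits and the panel declaration are placeholder assumptions; option names are taken from the description above and may differ slightly across versions of this user-written command):

tsset id year
xtabond2 y L.y x1 x2, gmm(L.y x1, lag(2 .)) iv(x2, eq(both)) two robust small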

Tests

In need for a causality test?

The first thing to do is to use the command summarize with the detail option (or the other descriptive commands presented in the previous tutorials) to obtain a description of the data. The results in Thurman and Fisher's (1988) Table 1 can then be easily replicated using OLS regressions and the time-series commands introduced in the previous tutorials. Whenever you run this kind of test, it is required that you show explicitly what the NULL and ALTERNATIVE hypotheses of the test are, and the regression equations you are going to run.

*Causality direction A: Do chickens Granger-cause eggs?

For example, using one lag you have:

regress egg L.egg L.chic

And you can test whether chickens Granger-cause eggs using an F-test on the lagged chicken term:

test L.chic

 ( 1)  L.chic = 0.0

**Causality direction B: Do eggs Granger-cause chickens?

This involves the same techniques, but here you need to regress chickens against the lags of chickens and the lags of eggs:

regress chic L.egg L.chic

and then test the lagged egg term:

test L.egg

 ( 1)  L.egg = 0.0
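These regressions use Stata's lag operator L., which requires the data to be declared as a time series first. A one-line reminder (the name of the time variable, year, is an assumption about this data set):

tsset year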

Do that for lags 1, 2, 3 and 4. Please provide a table in the same format as Thurman and Fisher's (1988), containing your results, plus a graphical analysis.

Causality in further lags

To test Granger causality in further lags, the procedures are the same; just remember to test the joint hypothesis of non-significance of all the "causality" terms.

Example: Do eggs Granger-cause chickens (in four lags)?

regress chic L.chic L2.chic L3.chic L4.chic L.egg L2.egg L3.egg L4.egg

and then test the joint significance of all lags of eggs:

test L.egg L2.egg L3.egg L4.egg

 ( 1)  L.egg = 0.0
 ( 2)  L2.egg = 0.0
 ( 3)  L3.egg = 0.0
 ( 4)  L4.egg = 0.0
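To build the four-lag table systematically, a small loop can run each specification in turn. This is only a sketch (the loop, the quietly prefix and the display line are my additions; swap the roles of egg and chic for the other causality direction):

forvalues k = 1/4 {
    quietly regress egg L(1/`k').egg L(1/`k').chic
    display "Do chickens Granger-cause eggs? Lags used: `k'"
    testparm L(1/`k').chic
}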

Maximum likelihood estimation

1. Probit and logit regressions

Probit and logit regressions are models designed for binary dependent variables. Because a regression with a binary dependent variable Y models the probability that Y = 1, it makes sense to adopt a nonlinear formulation that forces the predicted values to be between zero and one; both models are estimated by maximum likelihood rather than by ordinary least squares.

Probit regression

Probit regression uses the standard normal cumulative probability distribution function:

Pr(Y = 1 | X1, ..., Xk) = Φ(β0 + β1 X1 + ... + βk Xk)     (15)

where Φ is the cumulative standard normal distribution.

Logit regression

Logit regression is similar to probit regression, except that the cumulative distribution function is different: logit regression uses the logistic cumulative probability distribution function,

Pr(Y = 1 | X1, ..., Xk) = 1 / (1 + e^-(β0 + β1 X1 + ... + βk Xk))     (16)

Linear probability model

The linear probability model, by contrast, simply applies OLS to the binary dependent variable, so the predicted probabilities are not constrained to lie between zero and one.
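A minimal sketch of the corresponding Stata commands (the variable names y, x1 and x2 are placeholders; these are generic calls rather than an example taken from the text):

probit y x1 x2
logit y x1 x2
reg y x1 x2, robust     // linear probability model with heteroskedasticity-robust standard errors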

Examples

Health Care

. use http://www.ats.ucla.edu/stat/stata/modules/reg/health, clear

Let's start by checking the univariate distribution of these variables.

. summarize timedrs phyheal menheal stress, detail

The detailed output (percentiles, means, standard deviations, skewness and kurtosis, not reproduced in full here) covers timedrs, the number of visits to physical/mental health professionals; phyheal, the number of physical health problems; menheal, the number of mental health problems; and stress, measured in Life Change Units. We see that timedrs, phyheal and stress show considerable skewness. Let's graph the distributions of the variables.

. kdensity timedrs, normal

. kdensity phyheal, normal
. kdensity menheal, normal
. kdensity stress, normal

From the graphs above, timedrs and phyheal seem the most skewed, while stress is somewhat less skewed. Below we create scatterplot matrices, and they clearly show problems that need to be addressed.

. graph timedrs phyheal menheal stress, matrix symbol(.)

Even though we know there are problems with these variables, let's try running a regression and examine the diagnostics.

. regress timedrs phyheal menheal stress

The model fits 465 observations with an R-squared of about 0.22; the coefficient table and diagnostics follow below.

The rvfplot shows a real fan-spread pattern, where the variability of the residuals grows across the fitted values.

. rvfplot

The hettest command confirms there is a problem of heteroscedasticity.

. hettest
Cook-Weisberg test for heteroscedasticity using fitted values of timedrs
Ho: Constant variance
          chi2(1)     =  148.83
          Prob > chi2 =  0.0000

Let's address the problems of non-normality and heteroscedasticity. Tabachnick and Fidell recommend a log (base 10) transformation for timedrs and phyheal and a square-root transformation for stress. We make these transformations below.

. generate ltimedrs = log10(timedrs+1)
. generate lphyheal = log10(phyheal+1)
. generate sstress = sqrt(stress)

Let's examine the distributions of these new variables; the transformations have nearly completely removed the skewness.

. summarize ltimedrs lphyheal sstress, detail

The detailed output for the transformed variables (not reproduced in full here) confirms that the skewness is now small. Let's use kdensity to look at the distributions of these new variables; they look pretty good.

. kdensity ltimedrs, normal
. kdensity lphyheal, normal
. kdensity sstress, normal

The scatterplots for the transformed variables also look better.

. graph ltimedrs lphyheal menheal sstress, matrix symbol(.)

Now let's try running a regression and diagnostics with these transformed variables.

. regress ltimedrs lphyheal menheal sstress

The R-squared is now about 0.38.

The distribution of the residuals looks better.

. rvfplot

The hettest command is no longer significant, suggesting that the residuals are homoscedastic.

. hettest

We use the ovtest command to test for omitted variables from the equation; the results suggest no omitted variables.

. ovtest

We use ovtest with the rhs option to test for omitted higher-order trends (e.g., quadratic or cubic trends); the results suggest there are no omitted higher-order trends.

. ovtest, rhs

Examination of the added-variable plots below shows no dramatic problems.

. avplots

Let's create leverage, studentized residuals and Cook's D, and plot these.

. predict l, leverage
. predict rstud, rstudent
. predict d, cooksd
. graph rstud l [w=d]

These results look mostly OK. There is one observation in the middle top-right section that has a large Cook's D (a large bubble) and a fairly large residual, but not a very large leverage; there is also a residual in the top left, and there is still a flat portion in the bottom left of the plot. Below we show the same plot using the subject number as the plotting symbol, and see that observation 548 is the observation we identified in the plot above.

. graph rstud l, symbol([subjno])

The residuals look like they are OK. Let's try running the regression using robust standard errors and see if we get the same results.

. regress ltimedrs lphyheal menheal sstress, robust

Indeed, the results using robust standard errors are virtually the same as the prior results. Let's also try robust regression and again check whether the results change.

. rreg ltimedrs lphyheal menheal sstress

Again, the results are nearly identical to the original results. Since the dependent variable was a count variable, we could also have tried analyzing the data using Poisson regression. We try analyzing the original variables using Poisson regression.

. poisson timedrs phyheal menheal stress

We can check whether there is overdispersion in the Poisson regression by fitting a negative binomial model.

. nbreg timedrs phyheal menheal stress

The test of overdispersion (the likelihood-ratio test of alpha = 0 reported at the foot of the nbreg output) is clearly significant, indicating that the negative binomial model would be preferred over the Poisson model.

This module illustrated some of the diagnostic techniques and remedies that can be used in regression analysis. The main problems shown here were problems of non-normality and heteroscedasticity, which could be mended using log and square-root transformations.

Appendix 1

The Crosstabs procedure forms two-way and multiway tables and provides a variety of tests and measures of association for two-way tables. The structure of the table, and whether categories are ordered, determine what test or measure to use.

Example. Are customers from small companies more likely to be profitable in sales of services (for example, training and consulting) than those from larger companies? From a crosstabulation, you might learn that the majority of small companies (fewer than 500 employees) yield high service profits, while the majority of large companies (more than 2,500 employees) yield low service profits.

Statistics and measures of association: Pearson chi-square, likelihood-ratio chi-square, linear-by-linear association test, Fisher's exact test, Yates' corrected chi-square (continuity correction), Pearson's r, Spearman's rho, contingency coefficient, phi, Cramér's V, symmetric and asymmetric lambdas, Goodman and Kruskal's tau, uncertainty coefficient, gamma, Somers' d, Kendall's tau-b, Kendall's tau-c, eta coefficient, Cohen's kappa, relative risk estimate, odds ratio, McNemar test, and Cochran's and Mantel-Haenszel statistics.

Crosstabs' statistics and measures of association are computed for two-way tables only. If you specify a row, a column, and a layer factor (control variable), the Crosstabs procedure forms one panel of associated statistics and measures for each value of the layer factor (or for each combination of values of two or more control variables). For example, if GENDER is a layer factor for a table of MARRIED (yes, no) against LIFE (is life exciting, routine, or dull), the results for a two-way table for the females are computed separately from those for the males and printed as panels following one another.

Chi-square. For tables with two rows and two columns, select Chi-square to calculate the Pearson chi-square, the likelihood-ratio chi-square, Fisher's exact test, and Yates' corrected chi-square (continuity correction). Fisher's exact test is computed when a table that does not result from missing rows or columns in a larger table has a cell with an expected frequency of less than 5; Yates' corrected chi-square is computed for all other 2 x 2 tables. For tables with any number of rows and columns, select Chi-square to calculate the Pearson chi-square and the likelihood-ratio chi-square. When both table variables are quantitative, Chi-square yields the linear-by-linear association test.

Correlations. For tables in which both rows and columns contain ordered values, Correlations yields Spearman's correlation coefficient, rho (numeric data only); Spearman's rho is a measure of association between rank orders. When both table variables (factors) are quantitative, Correlations yields the Pearson correlation coefficient, r, a measure of linear association between the variables.

Nominal. For nominal data (no intrinsic order, such as Catholic, Protestant, and Jewish), you can select Phi (coefficient) and Cramér's V, Contingency coefficient, Lambda (symmetric and asymmetric lambdas and Goodman and Kruskal's tau), and Uncertainty coefficient.

Ordinal. For tables in which both rows and columns contain ordered values, select Gamma (zero-order for 2-way tables and conditional for 3-way to 10-way tables), Kendall's tau-b, and Kendall's tau-c. For predicting column categories from row categories, select Somers' d.

Nominal by Interval. When one variable is categorical and the other is quantitative, select Eta. The categorical variable must be coded numerically.

Kappa. For tables that have the same categories in the columns as in the rows (for example, when measuring agreement between two raters), select Cohen's kappa.

Risk. For tables with two rows and two columns, select Risk for relative risk estimates and the odds ratio.

McNemar. The McNemar test is a nonparametric test for two related dichotomous variables. It tests for changes in responses using the chi-square distribution, and it is useful for detecting changes in responses due to experimental intervention in "before and after" designs.

Cochran's and Mantel-Haenszel. Cochran's and Mantel-Haenszel statistics can be used to test for independence between a dichotomous factor variable and a dichotomous response variable, conditional upon covariate patterns defined by one or more layer (control) variables. The Mantel-Haenszel common odds ratio is also computed, along with the Breslow-Day and Tarone statistics for testing the homogeneity of the common odds ratio.
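This appendix describes the SPSS Crosstabs procedure. In Stata, a sketch of the closest built-in counterpart for a two-way table (the variable names rowvar and colvar are placeholders) is:

tabulate rowvar colvar, chi2 lrchi2 exact V gamma taub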

References
