Our Purpose
Examine these assumptions
Provide various tests for these assumptions
Theory
Sample SAS code (SAS, Version 8.2)
Normality
Why normal?
ANOVA is an Analysis of Variance: more specifically, an analysis of the ratio of two variances
Statistical inference is based on the F distribution, which is given by the ratio of two independent chi-squared distributions, each divided by its degrees of freedom
It is no surprise, then, that each variance in the ANOVA ratio must come from a parent normal distribution
The calculations can always be carried out no matter what the distribution is: partitioning the sums of squares is a purely algebraic property. Normality is needed only for statistical inference.
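The chi-squared construction of the F ratio can be illustrated numerically. The following Python sketch (scipy assumed available; the degrees of freedom d1 and d2 are illustrative, not from the slides) builds F draws directly from two chi-squared samples and checks that they match the corresponding F distribution.

```python
# Illustrative sketch: an F(d1, d2) variable is (X1/d1) / (X2/d2)
# for independent chi-squared variables X1, X2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d1, d2 = 4, 20  # numerator / denominator degrees of freedom (arbitrary)

x1 = rng.chisquare(d1, size=20000)
x2 = rng.chisquare(d2, size=20000)
f_draws = (x1 / d1) / (x2 / d2)

# The empirical draws should be indistinguishable from F(d1, d2)
ks_stat, p = stats.kstest(f_draws, stats.f(d1, d2).cdf)
```

A Kolmogorov-Smirnov test against the theoretical F(d1, d2) distribution should not reject, and the sample mean should sit near the theoretical mean d2/(d2 - 2).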
Normality Tests
There is a wide variety of tests for whether data follow a normal distribution. Mardia (1980) provides an extensive list for both the univariate and multivariate cases, categorizing them into two types:
Tests based on properties of the normal distribution, more specifically its first four moments
Shapiro-Wilk W (compares an estimate of the standard deviation built from the ordered sample to the usual sample standard deviation; the ratio should be close to one under normality)
Goodness-of-fit tests
Kolmogorov-Smirnov D
Cramer-von Mises W²
Anderson-Darling A²
Normality Tests
proc univariate data=temp normal plot;
 var expvar;
run;
Tests for Normality (non-normal example, variable expvar)

Test                  p Value
Shapiro-Wilk          Pr < W       <0.0001
Kolmogorov-Smirnov    Pr > D       <0.0100
Cramer-von Mises      Pr > W-Sq    <0.0050
Anderson-Darling      Pr > A-Sq    <0.0050

[Normal probability plot, stem-and-leaf plot, and boxplot omitted: the points bend sharply away from the reference line, consistent with a long right tail]

Tests for Normality (approximately normal example)

Test                  p Value
Shapiro-Wilk          Pr < W       0.6521
Kolmogorov-Smirnov    Pr > D       >0.1500
Cramer-von Mises      Pr > W-Sq    >0.2500
Anderson-Darling      Pr > A-Sq    >0.2500

[Normal probability plot, stem-and-leaf plot, and boxplot omitted: the points fall along the reference line]
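The same battery of tests can be run outside SAS. This Python sketch (scipy assumed available; the data and variable names are illustrative) applies Shapiro-Wilk and Anderson-Darling to one clearly non-normal sample and one normal sample.

```python
# Rough Python analogue of PROC UNIVARIATE's normality tests (scipy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
expvar = rng.exponential(scale=2.0, size=200)    # clearly non-normal
normvar = rng.normal(loc=0.0, scale=1.0, size=200)

# Shapiro-Wilk: a small p-value rejects normality
w_exp, p_exp = stats.shapiro(expvar)
w_norm, p_norm = stats.shapiro(normvar)

# Anderson-Darling: compare the statistic to tabulated critical values
ad_exp = stats.anderson(expvar, dist='norm')
```

As on the slides, the skewed variable should be rejected decisively while the normal variable is not.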
Consequences of Non-Normality
The F-test is very robust against non-normal data, especially in a fixed-effects model
A large sample size will approximate normality by the Central Limit Theorem (recommended sample size > 50)
Simulations have shown that unequal sample sizes between treatment groups magnify any departure from normality
A large deviation from normality leads to hypothesis-test conclusions that are too liberal, and to a decrease in power and efficiency
If normality fails, some remedies:
Randomization test on the F-ratio
Other non-parametric tests if the distribution is unknown
Making up our own test using a likelihood ratio if the distribution is known
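The randomization test on the F-ratio can be sketched as follows (Python with scipy; group sizes, effect size, and the number of permutations are all illustrative choices): shuffle the group labels, recompute F each time, and take the proportion of shuffled F values at least as large as the observed one as the p-value.

```python
# Hedged sketch of a randomization (permutation) test on the F-ratio.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 30)
b = rng.normal(1.0, 1.0, 30)     # true mean shift of 1

def f_ratio(x, y):
    """One-way ANOVA F statistic for two groups."""
    return stats.f_oneway(x, y).statistic

obs = f_ratio(a, b)
pooled = np.concatenate([a, b])
n_perm = 2000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)          # relabel groups at random
    if f_ratio(pooled[:30], pooled[30:]) >= obs:
        count += 1
p_value = (count + 1) / (n_perm + 1)   # add-one to avoid a zero p-value
```

No distributional assumption on the errors is needed; the reference distribution comes entirely from the relabelings.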
Independence
Independent observations
No correlation between error terms
No correlation between independent variables and error terms
Independence Tests
If we have some notion of how the data were collected, we can check whether any autocorrelation exists. The Durbin-Watson statistic looks at the correlation between each value and the value before it.
The data must be sorted in the correct order for the results to be meaningful. For example, samples collected over time would be ordered by time if we suspect the results could depend on time.
Independence Tests
proc glm data=temp;
 class trt;
 model y = trt / p;
 output out=out_ds r=resid_var;
run; quit;

data out_ds;
 set out_ds;
 time = _n_;
run;

proc gplot data=out_ds;
 plot resid_var * time;
run; quit;
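The Durbin-Watson statistic itself is simple enough to compute by hand. This numpy-only sketch (the AR(1) example data are illustrative) shows the behavior: values near 2 suggest uncorrelated residuals, values near 0 positive autocorrelation.

```python
# Minimal hand-rolled Durbin-Watson statistic (numpy only):
# sum of squared successive differences over the sum of squares.
import numpy as np

def durbin_watson(resid):
    resid = np.asarray(resid)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(7)
indep = rng.normal(size=200)         # independent "residuals"
dw_indep = durbin_watson(indep)      # should land near 2

# AR(1) residuals with coefficient 0.9: strong positive autocorrelation
ar = np.empty(200)
ar[0] = rng.normal()
for t in range(1, 200):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()
dw_auto = durbin_watson(ar)          # pushed toward 0
```

As with the SAS version, the residuals must be in the suspected time order before the statistic means anything.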
Homogeneity of Variances
Eisenhart (1947) describes the problem of unequal variances as follows:
The ANOVA model is based on the ratio of the mean squares of the factors to the residual mean square
The residual mean square is the unbiased estimator of σ², the variance of a single observation
The between-treatment mean square takes into account not only the differences between observations, σ², just like the residual mean square, but also the variance between treatments
If there is non-constant variance among treatments, we must replace the residual mean square with some overall variance, σa², and a treatment variance, σt², which is some weighted version of σa²
The neatness of ANOVA is lost
Homogeneity of Variances
The omnibus (overall) F-test is very robust against heterogeneity of variances, especially with fixed effects and equal sample sizes. Tests for treatment differences like t-tests and contrasts are severely affected, resulting in inferences that may be too liberal or conservative.
Brown-Forsythe Test
A slight modification of Levene's test in which the median is substituted for the mean (Kuehl (2000) refers to it as the Levene (med) test)
Fmax Test
Takes the ratio of the largest treatment-group variance to the smallest and compares it to a table of critical values
Tabachnick and Fidell (2001) use the Fmax ratio as a rule of thumb rather than consulting a table of critical values; variances are treated as acceptably homogeneous when:
The Fmax ratio is no greater than 10
The sample sizes of the groups are approximately equal (ratio of largest to smallest no greater than 4)
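The rule of thumb is easy to sketch in code. In this Python example (numpy only; the three groups and their spreads are invented for illustration), the third group's inflated spread drives the ratio well past 1.

```python
# Rule-of-thumb Fmax check: largest group variance over smallest,
# flagged when it exceeds 10 (given roughly equal group sizes).
import numpy as np

rng = np.random.default_rng(3)
groups = [rng.normal(0, 1.0, 40),
          rng.normal(0, 1.2, 40),
          rng.normal(0, 5.0, 40)]   # deliberately inflated spread

variances = [np.var(g, ddof=1) for g in groups]
fmax = max(variances) / min(variances)

sizes = [len(g) for g in groups]
size_ratio = max(sizes) / min(sizes)   # should be <= 4 for the rule to apply

suspect = fmax > 10 and size_ratio <= 4
```

With equal group sizes the only number to inspect is the variance ratio itself.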
Homogeneous Variances
The GLM Procedure

Levene's Test for Homogeneity of Y Variance
ANOVA of Squared Deviations from Group Means

Source   DF   Sum of Squares   Mean Square   F Value   Pr > F
trt       1         10.2533        10.2533      0.60    0.4389
Error    98       1663.5           16.9747

Brown and Forsythe's Test for Homogeneity of Y Variance
ANOVA of Absolute Deviations from Group Medians

Source   DF   Sum of Squares   Mean Square   F Value   Pr > F
trt       1          0.7087         0.7087      0.56    0.4570
Error    98        124.6            1.2710

Heterogeneous Variances
The GLM Procedure

Levene's Test for Homogeneity of y Variance
ANOVA of Squared Deviations from Group Means

Source   DF   Sum of Squares   Mean Square   F Value   Pr > F
trt       1      10459.1         10459.1       36.71    <.0001
Error    98      27921.5           284.9

Brown and Forsythe's Test for Homogeneity of y Variance
ANOVA of Absolute Deviations from Group Medians

Source   DF   Sum of Squares   Mean Square   F Value   Pr > F
trt       1        318.3           318.3       93.45    <.0001
Error    98        333.8             3.4065
Tests for Homogeneity of Variances (Randomized Complete Block Design and/or Factorial Design)
In a CRD, the variance of each treatment group is checked for homogeneity
In a factorial/RCBD, each cell's variance should be checked
H0: σij² = σi′j′² for all (i, j) ≠ (i′, j′)
Tests for Homogeneity of Variances (Randomized Complete Block Design and/or Factorial Design)
Approach 1
Combine each block-by-treatment cell into a single grouping variable, then request the homogeneity tests directly:

data newgroup;
 set oldgroup;
 if block = 1 and treat = 1 then newgroup = 1;
 if block = 1 and treat = 2 then newgroup = 2;
 if block = 2 and treat = 1 then newgroup = 3;
 if block = 2 and treat = 2 then newgroup = 4;
 if block = 3 and treat = 1 then newgroup = 5;
 if block = 3 and treat = 2 then newgroup = 6;
run;

proc glm data=newgroup;
 class newgroup;
 model y = newgroup;
 means newgroup / hovtest=levene hovtest=bf;
run; quit;

Approach 2
Recall that Levene's test and the Brown-Forsythe test are ANOVAs based on residuals: find the residual for each observation, then run an ANOVA on the residuals.

proc sort data=oldgroup;
 by treat block;
run;

proc means data=oldgroup noprint;
 by treat block;
 var y;
 output out=stats mean=mean median=median;
run;

data newgroup;
 merge oldgroup stats;
 by treat block;
 resid = abs(y - mean);  /* use median instead of mean for Brown-Forsythe */
 if block = 1 and treat = 1 then newgroup = 1;
 /* ... assign newgroup = 2 through 6 for the remaining cells as in Approach 1 ... */
run;

proc glm data=newgroup;
 class newgroup;
 model resid = newgroup;
run; quit;
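For reference, both approaches collapse to a single call outside SAS: scipy's levene accepts the cell samples directly, with center='mean' giving Levene's test and center='median' giving Brown-Forsythe. The 2x2 cell data below are illustrative.

```python
# Levene (center='mean') and Brown-Forsythe (center='median') on the
# cells of a 2x2 layout; two cells have deliberately inflated spread.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
cells = [rng.normal(0, s, 25) for s in (1.0, 1.0, 3.0, 3.0)]

w_lev, p_lev = stats.levene(*cells, center='mean')    # Levene
w_bf, p_bf = stats.levene(*cells, center='median')    # Brown-Forsythe
```

Both tests should flag the heterogeneity here; the median-centered version is the more robust of the two when cells are skewed.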
If there are repetitions, homogeneity should be shown within each cell, as in an RCBD
If there are repeated measures, follow the guidelines for sphericity, compound symmetry, and additivity as well
Only do specific comparisons (sphericity does not apply, since it concerns more than two groups and each comparison involves only two)
MANOVA
Use an MLE procedure to specify the variance-covariance matrix
Other Concerns
Outliers and influential points
Data should always be checked for influential points that might bias statistical inference
Use scatterplots of residuals Statistical tests using regression to detect outliers
DFBETAS
Cook's D
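Cook's D can be computed directly from the hat matrix and residuals. This numpy-only sketch (the simple regression and the planted outlier are invented for illustration) shows the diagnostic picking out a single gross outlier.

```python
# Hand-rolled Cook's distance for a simple linear regression.
import numpy as np

rng = np.random.default_rng(9)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 30)
y[15] += 5.0                                 # plant one gross outlier

X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix
h = np.diag(H)                               # leverages
p = X.shape[1]                               # number of parameters
s2 = resid @ resid / (len(y) - p)            # residual variance estimate

# Cook's D: squared residual scaled by leverage and residual variance
cooks_d = resid**2 * h / (p * s2 * (1 - h)**2)
```

The planted observation dominates the Cook's D values; in practice one would plot cooks_d against the observation index, exactly as the residual scatterplots above suggest.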
References
Casella, G. and Berger, R. (2002). Statistical Inference. United States: Duxbury.
Cochran, W. G. (1947). Some Consequences When the Assumptions for the Analysis of Variance are not Satisfied. Biometrics, Vol. 3, 22-38.
Eisenhart, C. (1947). The Assumptions Underlying the Analysis of Variance. Biometrics, Vol. 3, 1-21.
Ito, P. K. (1980). Robustness of ANOVA and MANOVA Test Procedures. Handbook of Statistics 1: Analysis of Variance (P. R. Krishnaiah, ed.), 199-236. Amsterdam: North-Holland.
Kaskey, G., et al. (1980). Transformations to Normality. Handbook of Statistics 1: Analysis of Variance (P. R. Krishnaiah, ed.), 321-341. Amsterdam: North-Holland.
Kuehl, R. (2000). Design of Experiments: Statistical Principles of Research Design and Analysis, 2nd edition. United States: Duxbury.
Kutner, M. H., et al. (2005). Applied Linear Statistical Models, 5th edition. New York: McGraw-Hill.
Mardia, K. V. (1980). Tests of Univariate and Multivariate Normality. Handbook of Statistics 1: Analysis of Variance (P. R. Krishnaiah, ed.), 279-320. Amsterdam: North-Holland.
Tabachnick, B. and Fidell, L. (2001). Computer-Assisted Research Design and Analysis. Boston: Allyn & Bacon.