MAED MATH 1B
A post hoc test is used only after we find a statistically significant result and need
to determine where our differences came from. The term "post hoc" is Latin for
"after this." Many post hoc tests have been devised, and most of them will give us
very similar results. The most commonly used ones are the following:
1. Bonferroni Test
2. Bonferroni Procedure
3. Benjamini-Hochberg (BH) procedure
4. Duncan’s new multiple range test (MRT)
5. Dunn’s Multiple Comparison Test
6. Dunnett’s correction
7. Fisher’s Least Significant Difference (LSD)
8. Holm-Bonferroni Procedure
9. Newman-Keuls
10. Rodger’s Method
11. Scheffé’s Method
12. Tukey’s Honest Significant Difference
1. Bonferroni Test
A Bonferroni test is the simplest post hoc analysis: a series of t-tests
performed on each pair of groups. The number of comparisons grows quickly as the
number of groups grows, which inflates the Type I error rate. The Bonferroni test
divides the significance level by the number of comparisons, restoring the original
familywise Type I error rate once all the tests are performed. After we have our new
significance level, we simply run independent-samples t-tests to look for a difference
between each pair of groups. This adjustment is sometimes called a Bonferroni
Correction, and it is easy to do by hand if we want to compare obtained p-values to
our new corrected α level, but it is more difficult to do when using critical values
as we do for our analyses, so we will leave the discussion of it at that.
The Bonferroni confidence interval for the difference between two treatment means
can be written as

    (ȳi − ȳj) ± t(γ, α/2k) · sp · √(1/ni + 1/nj)

where t is the value from the t distribution for γ degrees of freedom and α/2k
confidence, sp is the pooled standard deviation, ȳ is the mean, and n is the sample size.
The subscripts i and j represent two different treatments. As long as the confidence
interval does not contain 0, there is a significant difference in the two means.
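As a quick illustration of the correction itself, here is a minimal Python sketch (the function names are my own, not from the text): the significance level α is divided by the number of pairwise comparisons k, and each pairwise p-value is then compared to that corrected level.

```python
from math import comb

def bonferroni_alpha(alpha, n_groups):
    # k = C(n_groups, 2) pairwise comparisons among the groups
    k = comb(n_groups, 2)
    return alpha / k

def bonferroni_reject(p_values, alpha=0.05):
    # compare each pairwise p-value against the corrected level alpha / k
    cutoff = alpha / len(p_values)
    return [p <= cutoff for p in p_values]
```

For example, with 4 groups there are 6 pairwise comparisons, so α = 0.05 becomes 0.05/6 ≈ 0.0083 per test.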
6. Dunnett’s correction
Dunnett's test compares each experimental, or treatment, group to a single
control group by generating a Student's t-statistic for each comparison. Since each
comparison has the same control group in common, the procedure incorporates the
dependencies between these comparisons. Following the ANOVA, Dunnett's test may
be performed to determine which treatment groups differ significantly from the control.
Because all other samples are compared to one fixed "control" group, it should only
be used when such a group is available. If you don't have a control group, you can use
Tukey's test instead. Dunnett's test is similar to Tukey's, but it compares each
treatment mean only against the control mean rather than comparing every pair of means.
8. Holm-Bonferroni Method
The usual Bonferroni method is sometimes criticized for being too
conservative. Holm's sequential Bonferroni post hoc test is a less rigorous correction
for multiple comparisons. The Holm-Bonferroni Method (also known as Holm's
Sequential Bonferroni Procedure) is a procedure for dealing with the familywise error
rates of multiple hypothesis tests (FWER). It's a Bonferroni correction that has been
adjusted. It's just as simple to calculate as the single-step Bonferroni approach, but it's
more powerful.
Formula: at each step, the ordered p-values are tested against

    HB = α / (m − rank + 1)

where α is the target familywise significance level, m is the total number of tests,
and rank is the position of a p-value when the p-values are ordered from smallest to
largest. Testing starts with the smallest p-value and stops at the first one that
exceeds its cutoff.
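A minimal sketch of the sequential procedure (the function name is my own, not from the text):

```python
def holm_bonferroni(p_values, alpha=0.05):
    # test p-values from smallest to largest against alpha / (m - rank + 1)
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha / (m - rank + 1):
            reject[idx] = True
        else:
            break  # all remaining (larger) p-values also fail
    return reject
```

Because the cutoff loosens at each step (α/m, then α/(m−1), and so on), Holm's procedure rejects at least as many hypotheses as the plain Bonferroni correction while still controlling the familywise error rate.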
9. Newman-Keuls
Newman-Keuls (sometimes called Student–Newman–Keuls or SNK) is a post
hoc test for differences in means. Once an ANOVA has given a statistically
significant result, you can run a Newman-Keuls test to see which specific pairs of
means differ. The test is based on the studentized range distribution. Like Tukey's,
this post hoc test identifies sample means that differ from one another.
For comparing pairs of means, Newman-Keuls uses different critical values
depending on how many means separate the pair being compared. As a result,
significant differences are more likely to be discovered. Duncan's multiple
range test (MRT) is a variation of the Student–Newman–Keuls test that uses
increasing alpha levels to compute the critical values at each step, which makes it
even more likely to detect differences, at the cost of an inflated Type I error rate.
Ho: mean A = mean B,
Ha: mean A ≠ mean B.
As we can see, these are somewhat broader than the intervals we got with Tukey's
HSD. All else being equal, this means they are more likely to contain zero. However,
the findings are the same in our case, and we again conclude that the three groups
are different.
Table 2: Differences between the group means and the Tukey's HSD confidence
intervals

    Comparison              Difference   Tukey's HSD CI
    None vs Relevant        40.60        (28.87, 52.33)
    None vs Unrelated       19.50        (7.77, 31.23)
    Relevant vs Unrelated   21.10        (9.37, 32.83)
As we can see, none of these intervals contain 0.00, so we can conclude that all three
groups are different from one another.
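The "does the interval contain zero" check from Table 2 can be written out directly; the interval values below are the ones from the table, and the helper name is my own.

```python
# Tukey's HSD confidence intervals from Table 2
intervals = {
    "None vs Relevant":      (28.87, 52.33),
    "None vs Unrelated":     (7.77, 31.23),
    "Relevant vs Unrelated": (9.37, 32.83),
}

def contains_zero(lo, hi):
    # a difference is non-significant when its CI spans zero
    return lo <= 0.0 <= hi

# every comparison whose interval excludes zero is a significant difference
significant = {name: not contains_zero(lo, hi)
               for name, (lo, hi) in intervals.items()}
```

Here every interval lies entirely above zero, so all three comparisons come out significant, matching the conclusion above.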
There are many more post hoc tests, and they all approach the task in different
ways, some being more conservative and others more powerful. In general, though,
they will give highly similar answers. What matters here is being able to interpret a
post hoc analysis. If you are given confidence intervals for a post hoc analysis, read
them the same way you read confidence intervals in chapter 10: if they include zero,
there is no difference; if they do not include zero, there is.
References:
Foster et al. (2021, May 2). "Post Hoc Tests." University of Missouri-St. Louis,
    Rice University, & University of Houston, Downtown Campus. Retrieved
    May 17, 2021, from https://stats.libretexts.org/@go/page/7154
Vidhi, J. "Duncan's Multiple Range Test (With Diagram) | Statistics." Retrieved May
    20, 2021, from https://www.biologydiscussion.com/vegetable-breeding/duncans-
    multiple-range-test-with-diagram-statistics/68180