
ONE-WAY ANOVA

ANOVA ("analysis of variance")


● is a statistical technique that assesses potential differences in a scale-level (interval or ratio) dependent variable across the levels of a nominal-level variable having two or more categories.

Ronald Fisher (1918)


● Also known as Fisher's analysis of variance, after its developer Ronald Fisher
● Extends the t and z tests, which only allow the nominal-level variable to have two categories

ANOVA ASSUMPTIONS
● The data are normally distributed
● Homogeneity of variance (equal variances across groups)
● Observations are independent of each other

One-Way ANOVA
● One-Way ANOVA ("analysis of variance") is a parametric test.
● Direct extension of the two-sample t-test
● It is used to compare the means of two or more independent groups in order to determine
whether there is statistical evidence that the associated population means are significantly
different.

This test is also known as:


o One-Factor ANOVA
o One-Way Analysis of Variance
o Between Subjects ANOVA
o Completely randomized design

The variables used in this test are known as:


● Dependent variable
● Independent variable (also known as the grouping variable, or factor)
- This variable divides cases into two or more mutually exclusive levels, or groups

Common Uses
The One-Way ANOVA is often used to analyze data from the following types of studies:
● Field studies
● Experiments
● Quasi-experiments

The One-Way ANOVA is commonly used to test statistical differences among the means of two or more:
● Groups
● Interventions
● Change scores

Data Requirements:
Your data must meet the following requirements (a sketch of how the normality and homogeneity requirements can be checked follows this list):
● Dependent variable that is continuous (interval or ratio level)
● Independent variable that is categorical (nominal or ordinal)
● Independent samples/groups (independence of observations)
● Random sample of data from the population
● Normal distribution of the dependent variable for each group
● Homogeneity of variances
● No outliers
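
A minimal sketch of how the normality and homogeneity-of-variance requirements might be checked in Python with SciPy; the group labels and scores below are hypothetical, invented only for illustration:

```python
# Hypothetical example: depression scores for three levels of social media use
from scipy import stats

low = [10, 12, 9, 11, 13]
moderate = [14, 15, 13, 16, 14]
high = [18, 20, 17, 19, 21]

# Normality of the dependent variable within each group (Shapiro-Wilk test);
# p > .05 means no significant departure from normality was detected
for name, scores in [("low", low), ("moderate", moderate), ("high", high)]:
    w, p = stats.shapiro(scores)
    print(f"Shapiro-Wilk ({name}): W = {w:.3f}, p = {p:.3f}")

# Homogeneity of variances across groups (Levene's test);
# p > .05 means no significant difference among the group variances was detected
stat, p = stats.levene(low, moderate, high)
print(f"Levene: statistic = {stat:.3f}, p = {p:.3f}")
```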

Formula
The calculation for the ANOVA follows the structure and sequence described by Gravetter and Wallnau (2015): total variability is partitioned into a between-treatments component and a within-treatments component, and these are combined into the F-ratio.
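
As a sketch of that structure, written in the usual notation (X = an individual score, T = a treatment total, G = the grand total, n = scores per treatment, N = total number of scores, k = number of treatments):

```latex
\begin{align*}
SS_{\text{total}}   &= \sum X^{2} - \frac{G^{2}}{N} \\
SS_{\text{between}} &= \sum \frac{T^{2}}{n} - \frac{G^{2}}{N} \\
SS_{\text{within}}  &= SS_{\text{total}} - SS_{\text{between}} \\
df_{\text{between}} &= k - 1, \qquad df_{\text{within}} = N - k \\
MS_{\text{between}} &= \frac{SS_{\text{between}}}{df_{\text{between}}}, \qquad
MS_{\text{within}} = \frac{SS_{\text{within}}}{df_{\text{within}}} \\
F &= \frac{MS_{\text{between}}}{MS_{\text{within}}}
\end{align*}
```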

Sample Manual Computation
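
A small illustrative computation, using the same hypothetical scores as the sketch above (low: 10, 12, 9, 11, 13; moderate: 14, 15, 13, 16, 14; high: 18, 20, 17, 19, 21), so that k = 3, n = 5, N = 15, G = 222, and ΣX² = 3472:

```latex
\begin{align*}
SS_{\text{total}}   &= 3472 - \frac{222^{2}}{15} = 3472 - 3285.6 = 186.4 \\
SS_{\text{between}} &= \frac{55^{2} + 72^{2} + 95^{2}}{5} - 3285.6 = 3446.8 - 3285.6 = 161.2 \\
SS_{\text{within}}  &= 186.4 - 161.2 = 25.2 \\
df_{\text{between}} &= 2, \qquad df_{\text{within}} = 12 \\
MS_{\text{between}} &= \frac{161.2}{2} = 80.6, \qquad MS_{\text{within}} = \frac{25.2}{12} = 2.1 \\
F &= \frac{80.6}{2.1} \approx 38.4
\end{align*}
```

With df = (2, 12), the critical value of F at α = .05 is about 3.89, so in this made-up example the null hypothesis would be rejected.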


Sample Study
Level of social media use among students and scores on the Beck Depression Inventory.

● Null Hypothesis (Ho): There is no significant difference among the group means. Therefore,
μ1 = μ2 = μ3.
● Alternative Hypothesis (Ha): There is a significant difference in at least one group's mean.
Note: If the computed value of F is greater than the critical value of F, we reject the null hypothesis.
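
A sketch of this decision rule in Python, using SciPy's f_oneway on the same hypothetical scores as above (the numbers are illustrative only):

```python
# Hypothetical example: one-way ANOVA on depression scores by social media use
from scipy import stats

low = [10, 12, 9, 11, 13]
moderate = [14, 15, 13, 16, 14]
high = [18, 20, 17, 19, 21]

f_stat, p_value = stats.f_oneway(low, moderate, high)

# Critical F at alpha = .05 with df_between = k - 1 and df_within = N - k
k, N = 3, 15
f_crit = stats.f.ppf(0.95, dfn=k - 1, dfd=N - k)

print(f"F = {f_stat:.2f}, p = {p_value:.4f}, critical F(2, 12) = {f_crit:.2f}")
# Reject the null hypothesis when the computed F exceeds the critical F
# (equivalently, when p < .05)
```

Here the computed F (about 38.4) is far larger than the critical value (about 3.89), so the null hypothesis would be rejected and a post hoc test would be needed to see which groups differ.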

*Post Hoc Test → to determine which among the mean differences are
considered significant.
Post Hoc Tests

Post hoc tests (or post tests) are additional hypothesis tests that are done after an ANOVA to
determine exactly which mean differences are significant and which are not.

More specifically, these tests are done after ANOVA when:


1. You reject H0, and
2. There are three or more treatments (k ≥ 3)

Rejecting H0 indicates that at least one difference exists among the treatments. If there are only
two treatments, then there is no question about which means are different and, therefore, no need
for posttests. However, with three or more treatments (k ≥ 3), the problem is to determine exactly
which means are significantly different.

Deciding which test to run largely depends on what comparisons you're interested in:
● If you only want to make pairwise comparisons, run the Tukey procedure, because it will have
narrower confidence intervals for those pairwise comparisons.
● If you want to compare all possible simple and complex pairs of means, run the Scheffé test, as it
will have narrower confidence intervals for complex comparisons.

Only run this test if you have rejected the null hypothesis in an ANOVA test, indicating that the means are
not the same. Otherwise, the means are equal and so there is no point in running this test.

● The null hypothesis for the test is that the two means being compared are the same:
H0: μi = μj
● The alternative hypothesis is that they are not the same:
Ha: μi ≠ μj

Tukey's HSD test

● It is a commonly used test in psychological research.


● It allows you to compute a single value that determines the minimum difference between
treatment means that is necessary for significance.
● This value, called the honestly significant difference, or HSD, is then used to compare any two
treatment conditions.

If the mean difference exceeds Tukey's HSD, then you conclude that there is a significant difference
between the treatments. Otherwise, you cannot conclude that the treatments are significantly
different.
The formula for Tukey's HSD test is

HSD = q √(MSwithin / n)

Where:
● MSwithin is the within-treatments variance from the ANOVA
● n is the number of scores in each treatment
● q is the Studentized range statistic; to locate the appropriate value of q, you must know the number of treatments in the overall experiment (k) and the degrees of freedom for MSwithin, and you must select an alpha level

Tukey's test requires that the sample size (n) be the same for all treatments.
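
A minimal sketch of running Tukey's HSD in Python, using statsmodels' pairwise_tukeyhsd on the same hypothetical scores as above (n = 5 in every group, as the test requires):

```python
# Hypothetical example: pairwise Tukey HSD comparisons after a significant ANOVA
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = np.array([10, 12, 9, 11, 13,    # low
                   14, 15, 13, 16, 14,   # moderate
                   18, 20, 17, 19, 21])  # high
groups = np.repeat(["low", "moderate", "high"], 5)

# Compares every pair of group means; reject = True marks a significant difference
result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())
```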

Scheffé Test

● It has the distinction of being one of the safest of all possible post hoc tests (smallest risk of a Type
I error).
● The Scheffé test uses an F-ratio to evaluate the significance of the difference between any two
treatment conditions, as sketched below.
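
As a sketch of how that F-ratio is formed (same notation as above; the key point is that the degrees of freedom and critical value from the overall ANOVA are reused, which is what makes the test so conservative):

```latex
F_{A\ \text{vs}\ B} = \frac{MS_{\text{between}(A,B)}}{MS_{\text{within}}},
\qquad
MS_{\text{between}(A,B)} = \frac{SS_{\text{between}(A,B)}}{k - 1}
```

where SS_between(A,B) is computed from only the two treatments being compared, while k, MSwithin, and the critical value of F all come from the overall ANOVA.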

END.
