
# Analysis of Variance (ANOVA): an Application of the F-test to Compare Variances

The F-test is based on the F-distribution and is generally used to compare the variances of two sets of observations. ANOVA rests on the assumption that the sample means being compared were obtained from normally distributed populations having the same variance (and hence the same standard deviation). The technique involves classifying and cross-classifying the data and then testing whether the means of the specified classifications differ significantly. ANOVA was initially used in agricultural research and is now widely applied in research based on experimental design, in the natural sciences as well as the social sciences. ANOVA techniques are discussed under the following heads:

1. One-way ANOVA, in which the data are classified according to one factor only.
2. Two-way ANOVA, which studies the effect of more than one factor simultaneously and allows the researcher to examine the interaction between the two factors.

Analysis of variance (ANOVA) is a commonly used statistical technique for investigating data by comparing the means of subsets of the data. The base case is one-way ANOVA, an extension of the two-sample t-test for independent groups to situations where more than two groups are being compared.
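The connection with the two-sample t-test can be checked directly. The sketch below runs both tests on the same two groups of made-up measurements; with exactly two groups, one-way ANOVA and the equal-variance t-test give the same answer (F equals t squared, and the p-values agree).

```python
# Hypothetical measurements from two analysts (invented data for illustration).
from scipy import stats

a = [9.8, 10.1, 10.0, 9.9, 10.2]
b = [10.4, 10.6, 10.3, 10.7, 10.5]

# Two-sample t-test (equal variances assumed) and one-way ANOVA on the same data.
t_stat, t_p = stats.ttest_ind(a, b)
f_stat, f_p = stats.f_oneway(a, b)

# With exactly two groups the two tests agree: F = t^2 and the p-values match.
print(round(f_stat, 6), round(t_stat ** 2, 6))
```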

In one-way ANOVA the data are subdivided into groups based on a single classification factor. The standard term for a level of this factor is a treatment, even though the word may not always be meaningful for a particular application. There is variation in the measurements taken on the individual members of the data set, and ANOVA investigates whether this variation can be explained by the grouping introduced by the classification factor.

Why not use multiple t-tests instead of ANOVA? Why should we use ANOVA in preference to carrying out a series of t-tests? This is best explained with an example. Suppose we want to compare the results from 12 analysts taking part in a training exercise. If we were to use t-tests, we would need to calculate 66 t-values, one for each possible pair of analysts. Not only is this a lot of work, but the chance of reaching a wrong conclusion increases with every additional test. The correct way to analyse this sort of data is one-way ANOVA.
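The two problems with the multiple t-test approach can be put into numbers. The short sketch below counts the pairwise comparisons for 12 analysts and estimates how the chance of at least one false positive grows if each test is run at the usual 5% level (assuming, for simplicity, that the tests are independent).

```python
# Why 66 t-tests for 12 analysts, and why that inflates the error rate.
from math import comb

n_analysts = 12
n_tests = comb(n_analysts, 2)   # pairwise comparisons: C(12, 2) = 66

# If each test uses alpha = 0.05 and the tests were independent, the chance
# of at least one false positive somewhere among the 66 tests would be:
alpha = 0.05
familywise_error = 1 - (1 - alpha) ** n_tests

print(n_tests)
print(round(familywise_error, 3))   # well above the nominal 5% level
```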

• ANOVA is a powerful tool for determining whether there is a statistically significant difference between two or more sets of data.
• One-way ANOVA should be used when we are comparing several sets of observations classified by a single factor.
• Two-way ANOVA is the method used when there are two separate factors that may be influencing a result.
• Except for the smallest of data sets, ANOVA is best carried out using a spreadsheet or statistical software package.
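The two-way case in the list above can be sketched by hand for a balanced design with one observation per cell. All numbers below are invented: rows stand for three treatments and columns for four analysts, and the total variation is split into row, column, and residual parts.

```python
# Sketch of a two-way ANOVA without replication (one measurement per cell).
import numpy as np

y = np.array([[10.1, 10.3, 10.0, 10.2],    # treatment 1
              [10.8, 11.0, 10.7, 10.9],    # treatment 2
              [10.2, 10.1, 10.4, 10.3]])   # treatment 3

r, c = y.shape
grand = y.mean()

# Sum of squares for each factor: spread of the row/column means about the grand mean.
ss_rows = c * ((y.mean(axis=1) - grand) ** 2).sum()   # factor 1 (treatments)
ss_cols = r * ((y.mean(axis=0) - grand) ** 2).sum()   # factor 2 (analysts)
ss_total = ((y - grand) ** 2).sum()
ss_error = ss_total - ss_rows - ss_cols               # residual variation

# One F-ratio per factor, each against the residual mean square.
ms_error = ss_error / ((r - 1) * (c - 1))
f_rows = (ss_rows / (r - 1)) / ms_error
f_cols = (ss_cols / (c - 1)) / ms_error
print(round(f_rows, 2), round(f_cols, 2))
```

Each F-ratio is then compared with the table value for its own degrees of freedom, exactly as in the one-way case.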

Compared with using multiple t-tests, one-way and two-way ANOVA require fewer measurements to discover significant effects (i.e., the tests are said to have more power). This is one reason why ANOVA is used frequently when analysing data from statistically designed experiments. Other ANOVA and multivariate ANOVA (MANOVA) methods exist for more complex experimental situations.

The basic principle of ANOVA is to test for differences among the means of the populations
by examining the amount of variation within each of these samples, relative to the amount
of variation between the samples.
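This principle can be shown directly. The sketch below, on three invented samples, computes the between-samples and within-samples mean squares from their definitions, forms their ratio, and checks the result against SciPy's one-way ANOVA routine.

```python
# Computing the F-ratio from first principles: variation between sample
# means relative to variation within the samples (invented data).
import numpy as np
from scipy import stats

groups = [np.array([10.2, 9.9, 10.1, 10.0]),
          np.array([10.8, 11.0, 10.7, 10.9]),
          np.array([10.1, 10.3, 10.0, 10.2])]

k = len(groups)                      # number of samples
n = sum(len(g) for g in groups)      # total number of observations
grand_mean = np.concatenate(groups).mean()

# Between-samples mean square: spread of the sample means about the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-samples mean square: spread of observations about their own sample mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n - k)

f_ratio = ms_between / ms_within

# The hand computation agrees with the library routine.
f_check, _ = stats.f_oneway(*groups)
print(round(f_ratio, 4), round(f_check, 4))
```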
# Analysis of Variance in Research Methodology

Analysis of variance (ANOVA) is a collection of statistical models and their associated procedures used to test whether the means of several groups differ.

One-way (or single-factor) ANOVA: under one-way ANOVA we consider only one factor; that factor is important because several possible types of samples can occur within it, and we then determine whether there are significant differences within that factor. The technique involves the following steps:

1. Compute the mean of each sample and the grand mean of all observations taken together.
2. Work out the sum of squares between samples and divide it by its degrees of freedom (k − 1, for k samples) to obtain the mean square between samples.
3. Work out the sum of squares within samples and divide it by its degrees of freedom (n − k, for n observations in all) to obtain the mean square within samples.
4. Form the F-ratio: F = (mean square between samples) / (mean square within samples).
This ratio is used to judge whether the difference among several sample means is significant or merely a matter of sampling fluctuation. For this purpose we consult the F table, which gives the values of F for given degrees of freedom at different levels of significance. If the worked-out value of F is less than the table value, the difference is taken as insignificant, i.e., due to chance, and the null hypothesis of no difference between the sample means stands. If the calculated value of F is equal to or greater than its table value, the difference is considered significant (which means the samples could not all have come from the same universe) and the conclusion is drawn accordingly. The higher the calculated value of F is above the table value, the more definite and sure one can be about the conclusion.
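In practice the table value is usually obtained from software rather than a printed table. The sketch below, with an assumed worked-out F value and degrees of freedom, looks up the 5% critical value from the F-distribution and applies the decision rule described above.

```python
# Comparing a calculated F value with its table (critical) value.
from scipy import stats

f_calculated = 5.20              # hypothetical value worked out from the samples
df_between, df_within = 2, 12    # k - 1 and n - k for k = 3 samples, n = 15
alpha = 0.05                     # 5% level of significance

# Upper 5% point of the F-distribution: the "table value" of F.
f_table = stats.f.ppf(1 - alpha, df_between, df_within)
print(round(f_table, 2))

if f_calculated >= f_table:
    print("difference significant: reject the null hypothesis")
else:
    print("difference insignificant: attribute it to chance")
```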