
# PARAMETRIC AND NON-PARAMETRIC TESTS

## PARAMETRIC TESTS

A parametric test is a statistical test that makes assumptions about the parameters of the population distribution(s) from which the data are drawn.
APPLICATIONS
• Used for quantitative data.
• Used when data are measured on approximately interval or ratio scales of measurement.
• Data should follow a normal distribution.

Types of Parametric tests
1. Z-test
2. t-test
   • t-test for one sample
   • t-test for two samples
     i. Unpaired two-sample t-test
     ii. Paired two-sample t-test
3. ANOVA (Analysis of Variance)
   • One-way ANOVA
   • Two-way ANOVA
4. Pearson's r correlation
1. Z-Test:

1. The Z-test is a statistical test in which the normal distribution is applied; it is used for problems involving large samples, when the sample size is greater than or equal to 30.
2. It is used when the population standard deviation is known.
Assumptions:
• Population is normally distributed
• The sample is drawn at random

Conditions:
• Population standard deviation σ is known
• Size of the sample is large (say n > 30)
Let X₁, X₂, …, Xₙ be a random sample of size n from a normal population with mean µ and variance σ².
Let x̄ be the mean of the sample of size n.

Null Hypothesis:
The population mean (µ) is equal to a specified value µ₀:
H₀: µ = µ₀
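The z statistic is z = (x̄ − µ₀) / (σ/√n). A minimal sketch in plain Python, using made-up illustrative numbers (x̄ = 52, µ₀ = 50, σ = 10, n = 100 are assumptions, not data from the text):

```python
import math

def z_test(sample_mean, mu0, sigma, n):
    """One-sample z-test: population sigma known, large sample."""
    se = sigma / math.sqrt(n)          # standard error of the mean
    z = (sample_mean - mu0) / se       # test statistic
    # Two-sided p-value from the standard normal CDF (via math.erf)
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    p = 2 * (1 - phi) if z >= 0 else 2 * phi
    return z, p

# Hypothetical example: sample mean 52 from n = 100, H0: mu = 50, sigma = 10
z, p = z_test(52, 50, 10, 100)  # z = 2.0, p ≈ 0.0455
```

Since p < 0.05 here, the null hypothesis would be rejected at the 5% level.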
2. t-Test:
Derived by W. S. Gosset in 1908.

Properties of t distribution:
i. It has mean 0
ii. It has variance greater than one
iii. It is a bell-shaped distribution, symmetrical about the mean

Assumptions for the t-test:
i. The sample must be random, with independent observations
ii. The population standard deviation is not known
iii. The population is normally distributed
One Sample t-test
Assumptions:
• Population is normally distributed
• Sample is drawn from the population and it
should be random
• We should know the population mean

Conditions:
• Population standard deviation is not known
• Size of the sample is small (<30)
• In a one-sample t-test, the population mean is known.
• We draw a random sample from the population, compare the sample mean with the population mean, and make a statistical decision as to whether or not the sample mean differs from the population mean.
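The statistic is t = (x̄ − µ) / (s/√n), with n − 1 degrees of freedom. A minimal stdlib-only sketch (the sample values and the population mean of 15 below are made up for illustration):

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """One-sample t statistic: (x_bar - mu0) / (s / sqrt(n))."""
    n = len(sample)
    x_bar = statistics.mean(sample)
    s = statistics.stdev(sample)       # sample standard deviation (n - 1 denominator)
    t = (x_bar - mu0) / (s / math.sqrt(n))
    return t, n - 1                    # statistic and degrees of freedom

# Hypothetical small sample (n < 30), H0: mu = 15
t, df = one_sample_t([12, 14, 16, 18, 20], 15)  # t ≈ 0.707, df = 4
```

The computed t is then compared against the critical value of the t distribution with the given degrees of freedom.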
Two sample t-test
• Used when two independent random samples come from normal populations having unknown but equal variances.
• We test the null hypothesis that the two population means are the same, i.e., µ1 = µ2.
Assumptions:
1. Populations are distributed normally
2. Samples are drawn independently and at random

Conditions:
1. Standard deviations in the populations are same
and not known
2. Size of the sample is small
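Under the equal-variance condition above, the two sample variances are pooled. A sketch with invented data (both groups are hypothetical):

```python
import math
import statistics

def two_sample_t(a, b):
    """Unpaired two-sample t-test with pooled variance (equal-variance assumption)."""
    n1, n2 = len(a), len(b)
    m1, m2 = statistics.mean(a), statistics.mean(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)  # sample variances (n - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)    # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical samples from two independent groups
t, df = two_sample_t([10, 12, 14], [11, 13, 15, 17])  # t ≈ -1.107, df = 5
```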
Paired t-test
Used when measurements are taken from the same subject before and after some manipulation or treatment.

Ex: To determine the significance of a difference in blood pressure before and after administration of a substance.
Assumptions & conditions:
Assumptions:
1. Populations are distributed normally
2. Samples are drawn independently and at
random
Conditions:
1. Standard deviations in the populations are
same and not known
2. Size of the sample is small
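The paired test is a one-sample t-test on the within-subject differences: t = d̄ / (s_d/√n). A sketch using the blood-pressure example above, with invented readings:

```python
import math
import statistics

def paired_t(before, after):
    """Paired t-test: one-sample t on the per-subject differences."""
    d = [b - a for b, a in zip(before, after)]   # within-subject differences
    n = len(d)
    t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1

# Hypothetical blood-pressure readings before/after a treatment
before = [140, 138, 150, 148, 135]
after  = [132, 135, 141, 143, 130]
t, df = paired_t(before, after)  # t ≈ 5.477, df = 4
```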
3. Pearson's 'r' Correlation
• Correlation is a technique for investigating the relationship between two quantitative, continuous variables.
• Pearson's correlation coefficient (r) is a measure of the strength of the association between the two variables.
Types of correlation

| Type of correlation | Correlation coefficient |
| --- | --- |
| Perfect positive correlation | r = +1 |
| Partial positive correlation | 0 < r < +1 |
| No correlation | r = 0 |
| Partial negative correlation | −1 < r < 0 |
| Perfect negative correlation | r = −1 |
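Pearson's r can be computed as r = Σ(x − x̄)(y − ȳ) / √(Σ(x − x̄)² · Σ(y − ȳ)²). A sketch with made-up data chosen so that y is exactly 2x, giving a perfect positive correlation (the first row of the table above):

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient r = Sxy / sqrt(Sxx * Syy)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical data: y = 2x exactly, so r = +1
r = pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])  # r = 1.0
```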
4. ANOVA (Analysis of Variance)
• Analysis of Variance (ANOVA) is a collection of statistical models used to analyse the differences between group means or variances.
• Compares multiple groups at one time.
• Developed by R. A. Fisher.

One-way ANOVA

Compares two or more unmatched groups when data are categorized by one factor.

Ex:
1. Comparing a control group with three different doses of aspirin
2. Comparing the productivity of three or more employees based on working hours in a company
Two-way ANOVA
• Used to determine the effect of two nominal
predictor variables on a continuous outcome
variable.
• It analyses the effect of the independent
variables on the expected outcome along with
their relationship to the outcome itself.
Ex: Comparing the employee productivity based
on the working hours and working conditions
Assumptions of ANOVA:
• The samples are independent and selected randomly.
• The parent population from which the samples are taken is normally distributed.
• Treatment and environmental effects are additive.
• The experimental errors are normally distributed with mean zero and variance σ².
• ANOVA compares variances by means of the F-ratio:

F = variance between samples / variance within samples
• The exact computation depends on the experimental design.
Null hypothesis:
H₀: all population means are equal
• If the computed F value is greater than the F critical value, the null hypothesis is rejected.
• If the computed F value is less than the F critical value, the null hypothesis is not rejected.
ANOVA TABLE

| Source of variation | Sum of squares (SS) | Degrees of freedom (d.f.) | Mean square (MS = SS / d.f.) | F-ratio |
| --- | --- | --- | --- | --- |
| Between samples or groups (treatments) | Treatment sum of squares (TrSS) | k − 1 | TrMS = TrSS / (k − 1) | F = TrMS / EMS |
| Within samples or groups (errors) | Error sum of squares (ESS) | n − k | EMS = ESS / (n − k) | |
| Total | Total sum of squares (TSS) | n − 1 | | |
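The table's quantities can be sketched directly in plain Python; the three small groups below are invented numbers, not data from the text:

```python
def one_way_anova(groups):
    """One-way ANOVA: returns (F, df_between, df_within) per the table above."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total number of observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    # Between-groups (treatment) sum of squares: TrSS
    trss = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-groups (error) sum of squares: ESS
    ess = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    trms = trss / (k - 1)                    # treatment mean square
    ems = ess / (n - k)                      # error mean square
    return trms / ems, k - 1, n - k

# Hypothetical data for three groups
F, df1, df2 = one_way_anova([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # F = 3.0
```

The resulting F is compared with the critical F value for (k − 1, n − k) degrees of freedom, as described above.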
| S.No. | Type of group comparison | Parametric test |
| --- | --- | --- |
| 1 | Comparison of two paired groups | Paired t-test |
| 2 | Comparison of two unpaired groups | Unpaired two-sample t-test |
| 3 | Comparison of a population and a sample drawn from the same population | One-sample t-test |
| 4 | Comparison of three or more matched groups, varied in two factors | Two-way ANOVA |
| 5 | Comparison of three or more matched groups, varied in one factor | One-way ANOVA |
| 6 | Correlation between two variables | Pearson correlation |
Nonparametric Tests
• Techniques that do not rely on data belonging to any particular distribution.
• Non-parametric statistics do not assume any underlying distribution of the parameter.
• Non-parametric does not mean that the model lacks parameters, but that the number and nature of the parameters are flexible.
Why Nonparametric Tests?
• When the population distribution is not normal, or its form is unknown.
USAGE
• Decision making / forecasting.
• Studying populations that take on a ranked order (such as movie reviews receiving one to four stars).
• Simple analysis.
Parametric vs Non-parametric
• Parametric tests => we have information about the population, or can make certain assumptions:
– Assume a normal distribution of the population.
– Data are distributed normally.
– Population variances are the same.
• Non-parametric tests => used when these assumptions cannot be made:
– Also known as distribution-free tests.
– No particular form is assumed for the population distribution.
Types of Non-parametric tests
1. One-sample tests
• Chi-square test
• One-sample sign test
2. Two-sample tests
• Median test
• Two-sample sign test
3. K-sample tests
• Median test
• Kruskal-Wallis test
Types of Non-parametric tests
• Chi-square test (χ²):
– Used to compare observed and expected data. It takes three forms:
1. Test of goodness of fit
2. Test of independence
3. Test of homogeneity
• Kruskal-Wallis test:
– Tests whether samples originate from the same distribution.
– Used for comparing more than two samples that are independent, or not related.
– A non-parametric alternative to ANOVA.
• Wilcoxon signed-rank test:
– Used when comparing two related samples, or repeated measurements on a single sample, to assess whether their population mean ranks differ.
• Median test:
– Used to test the null hypothesis that the medians of the populations from which two samples are drawn are identical.
– The data in the samples are assigned to two groups: one consisting of values above the median of the two groups combined, and the other consisting of values at or below that median.
• Sign test:
– Can be used to test the hypothesis that there is "no difference in medians" between the continuous distributions of two random variables X and Y.
• Fisher's exact test:
– A test used in the analysis of contingency tables where sample sizes are small.
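As an illustration of the first chi-square form (goodness of fit), the statistic is χ² = Σ(O − E)²/E over all categories. A sketch with invented die-roll counts (120 rolls, so 20 expected per face under a fair-die hypothesis):

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical goodness-of-fit example: 120 rolls of a die
observed = [18, 22, 20, 20, 20, 20]
expected = [20] * 6
chi2 = chi_square(observed, expected)  # chi2 = 0.4, with 6 - 1 = 5 degrees of freedom
```

The statistic is compared against the χ² critical value for 5 degrees of freedom; such a small value would not reject the fair-die hypothesis.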
Thank you!!!