
SUBJECT

MEASUREMENT AND EVALUATION

TOPIC

DEGREES OF FREEDOM, TYPE I AND TYPE II ERRORS,
AND LEVEL OF SIGNIFICANCE

PRESENTED BY
KHULA GUL
M.PHIL 2ND SEMESTER
INSTITUTE OF EDUCATION AND
RESEARCH, UNIVERSITY OF PESHAWAR
OUTLINE OF THE PRESENTATION

 LEVEL OF SIGNIFICANCE

 TYPE I AND TYPE II ERRORS

 DEGREES OF FREEDOM
NULL HYPOTHESIS (H0)

The null hypothesis conveys that there exists no
difference between the different samples.

Null hypothesis here: the mean pulse rates of the two
groups are the same, i.e., there is no significant
difference between their pulse rates.

 By using various tests of significance we either:
 reject the null hypothesis
or
 accept the null hypothesis

 Rejecting the null hypothesis → the difference is
significant.
 Accepting the null hypothesis → the difference is not
significant.
LEVEL OF SIGNIFICANCE

• The level of significance is judged through the "P" value.
• The P-value is a function of the observed sample results
(a statistic) that is used for testing a statistical
hypothesis.
• It is the probability of obtaining a result at least as
extreme as the one observed, assuming the null
hypothesis is true.
• A test of significance tests the research hypothesis
against the null hypothesis; we accept or reject the null
hypothesis based on the P value.
• Practically, p < 0.05 (5%) is considered significant.
• P = 0.05 implies that we may go wrong 5 times out of
100 by rejecting the null hypothesis,
or
• we can attribute significance with 95% confidence.
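To make this concrete, here is a minimal Python sketch of the pulse-rate example above. The sample values are made up for illustration; the test itself is SciPy's independent-samples t-test.

    # A minimal sketch: independent-samples t-test on two hypothetical
    # groups of pulse rates (all numbers are made up for illustration).
    from scipy import stats

    group_a = [72, 75, 71, 78, 74, 70, 73, 76]
    group_b = [74, 77, 73, 79, 76, 72, 75, 78]

    # Test H0: the two group means are equal.
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.3f}, P = {p_value:.3f}")

    alpha = 0.05  # 5% level of significance
    if p_value < alpha:
        print("Reject H0: the difference is significant.")
    else:
        print("Accept H0: the difference is not significant.")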
LEVEL OF CONFIDENCE

• It is expressed as a percentage and represents how
often the true percentage of the population who
would pick an answer lies within the confidence
interval.
• The 95% confidence level means you can be 95%
certain; the 99% confidence level means you can be
99% certain. Most researchers use the 95%
confidence level.
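As an illustration, the sketch below computes a t-based 95% confidence interval for a sample mean with SciPy; the sample values are hypothetical.

    # A sketch of a 95% confidence interval for a mean, using the
    # t-distribution; the sample values are hypothetical.
    from scipy import stats

    sample = [72, 75, 71, 78, 74, 70, 73, 76]

    mean = sum(sample) / len(sample)
    sem = stats.sem(sample)        # standard error of the mean
    df = len(sample) - 1           # degrees of freedom

    low, high = stats.t.interval(0.95, df, loc=mean, scale=sem)
    print(f"95% CI for the mean: ({low:.2f}, {high:.2f})")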
DIFFERENCE BETWEEN THE TWO

• Level of significance ……… Level of confidence
• .01 … 1% ______________ 99%
• .05 … 5% ______________ 95%
• .10 … 10% _____________ 90%

• The probability with which we reject a null hypothesis
when it is true is the level of significance, α.
• The probability with which we will accept a null
hypothesis when it is true is the confidence level,
1 - α.
TWO-TAIL TEST
[Figure: two-tailed test]
TYPES OF ERRORS

In the context of testing hypotheses there are basically
two types of errors:

 TYPE I ERROR

 TYPE II ERROR
TYPE I ERROR

• A Type I error, also known as an error of the first kind, occurs
when the null hypothesis (H0) is true but is rejected.
• A Type I error may be compared with a so-called false
positive.
• A Type I error occurs when we believe a falsehood.
• The rate of the Type I error is called the size of the test and is
denoted by the Greek letter α (alpha).
• It usually equals the significance level of a test.
• If the Type I error is fixed at 5%, it means that there are about
5 chances in 100 that we will reject H0 when H0 is true.
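A quick way to see α in action is to simulate many tests in which H0 is really true and count how often it is rejected. The sketch below does this with SciPy's t-test; all parameters are illustrative.

    # Simulating the Type I error rate: H0 is true by construction
    # (both samples come from the same population), so every
    # rejection is a false positive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n_tests = 0.05, 10_000

    false_positives = 0
    for _ in range(n_tests):
        a = rng.normal(loc=75, scale=5, size=30)
        b = rng.normal(loc=75, scale=5, size=30)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            false_positives += 1

    # The observed rate should be close to alpha (about 5 in 100).
    print(f"Type I error rate: {false_positives / n_tests:.3f}")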
TYPE II ERROR

 A Type II error, also known as an error of the second kind,
occurs when the null hypothesis is false but erroneously fails
to be rejected.
• A Type II error means accepting a hypothesis which should
have been rejected.
• A Type II error may be compared with a so-called false
negative.
• A Type II error is committed when we fail to believe a truth.
• The rate of the Type II error is denoted by the Greek letter β
(beta) and is related to the power of a test (which equals 1 - β).
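Similarly, β can be estimated by simulating tests in which H0 is really false and counting how often we fail to reject it; a sketch under the same illustrative assumptions as before:

    # Simulating the Type II error rate: here H0 is false by
    # construction (the population means differ), so every failure
    # to reject is a false negative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha, n_tests = 0.05, 10_000

    misses = 0
    for _ in range(n_tests):
        a = rng.normal(loc=75, scale=5, size=30)
        b = rng.normal(loc=78, scale=5, size=30)
        _, p = stats.ttest_ind(a, b)
        if p >= alpha:
            misses += 1

    beta = misses / n_tests
    print(f"Type II error rate (beta): {beta:.3f}")
    print(f"Power (1 - beta): {1 - beta:.3f}")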
REDUCING TYPE I ERRORS

• Prescriptive testing is used to increase the level of
confidence, which in turn reduces Type I errors.
• The chances of making a Type I error are reduced by
increasing the level of confidence.
REDUCING TYPE II ERRORS

• Descriptive testing is used to better describe the test condition
and acceptance criteria, which in turn reduces Type II errors.
• This increases the number of times we reject the null
hypothesis, with a resulting increase in the number of Type I
errors (rejecting H0 when it was really true and should not
have been rejected).
• Therefore, reducing one type of error comes at the expense of
increasing the other type of error.
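The trade-off can be seen numerically. In the illustrative simulation below, H0 is false in every test, so tightening α (fewer Type I errors by design) raises the Type II rate β.

    # Illustrating the trade-off: H0 is false in every simulated test,
    # so tightening alpha (fewer Type I errors by design) raises the
    # Type II rate beta. All parameters are illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_tests = 5_000

    for alpha in (0.10, 0.05, 0.01):
        misses = 0
        for _ in range(n_tests):
            a = rng.normal(loc=75, scale=5, size=30)
            b = rng.normal(loc=78, scale=5, size=30)
            _, p = stats.ttest_ind(a, b)
            if p >= alpha:
                misses += 1
        print(f"alpha = {alpha:.2f} -> beta = {misses / n_tests:.3f}")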
DEGREES OF FREEDOM

• The degrees of freedom in a statistical calculation
represent how many values involved in the calculation
have the freedom to vary.
• Degrees of freedom can be calculated to help ensure
the statistical validity of chi-square tests, t-tests and
even the more advanced F-tests.
• Degrees of freedom calculations identify how many
values in the final calculation are allowed to vary, so
they contribute to the validity of an outcome.
• These calculations depend on the sample size, or
number of observations, and the parameters to be
estimated: generally, in statistics, the degrees of
freedom equal the number of observations minus the
number of parameters estimated.
• This means there are more degrees of freedom with a
larger sample size.
FORMULA FOR DEGREES OF FREEDOM

• The statistical formula to determine degrees of
freedom is quite simple. It states that degrees of
freedom equal the number of values in a data set
minus 1:
• d.f. = N - 1
• where N is the number of values in the data set
(sample size).
• Take a look at a sample computation: suppose there is
a data set of 4 values (N = 4).
• Using the formula, the degrees of freedom are
calculated as d.f. = N - 1:
• In this example, d.f. = 4 - 1 = 3.
• The degrees of freedom for the chi-square test are
calculated using the following formula:
d.f. = (r - 1)(c - 1), where r is the number of rows and
c is the number of columns.
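Both formulas can be checked in Python; the sample and the contingency table below are made up for illustration, and the chi-square degrees of freedom come straight from SciPy's chi2_contingency.

    # Checking both degrees-of-freedom formulas on made-up data.
    import numpy as np
    from scipy import stats

    # One-sample case: d.f. = N - 1.
    sample = [4, 8, 15, 16]                  # N = 4
    print("d.f. =", len(sample) - 1)         # 4 - 1 = 3

    # Chi-square contingency table: d.f. = (r - 1)(c - 1).
    table = np.array([[10, 20, 30],
                      [15, 25, 35]])         # r = 2 rows, c = 3 columns
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print("chi-square d.f. =", dof)          # (2 - 1) * (3 - 1) = 2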
