3. P-value:
• The p-value quantifies the amount of evidence, if any, against the
null hypothesis. More formally, the p-value is the probability of
observing the test statistic, or something more extreme, assuming the
null hypothesis is true.
• Put simply, the more extreme the test statistic, the smaller the p-
value. The smaller the p-value, the greater the amount of statistical
evidence against the assumed truth of H0.
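As a sketch of this definition, the p-value for a simple one-sample z-test can be computed directly from the standard normal CDF. All of the numbers below (sample mean 52, null mean 50, σ = 6, n = 36) are hypothetical, chosen only for illustration:

```python
import math

def z_test_p_value(sample_mean, mu0, sigma, n):
    """Two-sided p-value for a one-sample z-test.

    The test statistic is z = (x_bar - mu0) / (sigma / sqrt(n)); the
    p-value is the probability of a result at least this extreme
    under H0, i.e. P(|Z| >= |z|) for a standard normal Z.
    """
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF at |z|
    return 2 * (1 - phi)

# Hypothetical example: n = 36 observations with mean 52, tested
# against H0: mu = 50 with known sigma = 6, giving z = 2.
print(round(z_test_p_value(52, 50, 6, 36), 4))  # 0.0455
```

Note how a more extreme sample mean (a larger |z|) pushes the p-value down, exactly as described above.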
4. Significance Level:
For every hypothesis test, a significance level, denoted α, is chosen in
advance. It is used to interpret the result of the test. The significance
level defines a cutoff point at which you decide whether there is
sufficient evidence to view H0 as incorrect and favour HA instead.
• If the p-value is greater than or equal to α, you conclude there is
insufficient evidence against the null hypothesis, and therefore you
retain H0 over HA.
• If the p-value is less than α, the result of the test is statistically
significant. This implies there is sufficient evidence against the null
hypothesis, and therefore you reject H0 in favour of HA.
5. Decision rule: Based on the p-value and the predetermined level of
significance, the decision rule determines whether to reject the null
hypothesis. If the p-value is smaller than the alpha level, the null
hypothesis is rejected in favor of the alternative hypothesis.
6. Conclusion: Based on the decision rule and analysis, a conclusion is
drawn regarding the null hypothesis. It states whether there is enough
evidence to support the alternative hypothesis or if there is insufficient
evidence to reject the null hypothesis.
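Steps 4–6 above can be sketched as one small helper function. The default α of 0.05 is just a common convention, not something fixed by the source:

```python
def decide(p_value, alpha=0.05):
    """Apply the decision rule: reject H0 if and only if p-value < alpha.

    Returns a short conclusion string in the language of steps 4-6.
    """
    if p_value < alpha:
        return "reject H0 in favour of HA (statistically significant)"
    return "retain H0 (insufficient evidence against H0)"

print(decide(0.012))  # p < 0.05, so H0 is rejected
print(decide(0.270))  # p >= 0.05, so H0 is retained
```

Note the boundary case: a p-value exactly equal to α retains H0, matching the "greater than or equal to" wording in step 4.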
1. Type I Error (False Positive):
• A Type I error occurs when we reject the null hypothesis even though
the null hypothesis is actually true. This case is also known as a false
positive. If your p-value is less than α, you reject the null hypothesis.
If the null hypothesis is really true, though, α directly defines the
probability that you incorrectly reject it. This is referred to as a Type I
error.
• The probability of making a Type I error is denoted α (alpha), the
significance level. It is set before conducting the test and represents
the maximum allowable probability of rejecting the null hypothesis
when it is true.
• Example: Suppose a pharmaceutical company is testing a new drug's
effectiveness. The null hypothesis (H0) states that the drug has no
effect. A Type I error would occur if the company concludes that the
drug is effective (rejects the null hypothesis) when, in reality, it has no
effect.
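A quick simulation illustrates the point that α is the Type I error rate: when H0 really is true, the test rejects it in roughly an α fraction of repeated experiments. This is a minimal sketch; the population mean 50 and σ = 6 are invented values:

```python
import math
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

def two_sided_p(sample, mu0, sigma):
    """Two-sided z-test p-value for a sample with known sigma."""
    z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(len(sample)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha, trials = 0.05, 2000
false_positives = 0
for _ in range(trials):
    # H0 is true here: the data really do come from a mean-50 population.
    sample = [random.gauss(50, 6) for _ in range(30)]
    if two_sided_p(sample, 50, 6) < alpha:
        false_positives += 1  # we rejected a true H0: a Type I error

print(false_positives / trials)  # close to alpha = 0.05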
2. Type II Error (False Negative):
• A Type II error occurs when we fail to reject the null hypothesis even
though the null hypothesis is incorrect, i.e. the alternative hypothesis
is correct. This case is also known as a false negative: you conclude
that there is no significant effect or difference when, in reality, there
is one.
H0 = null hypothesis, HA = alternative hypothesis
• Mathematical definition of a Type II error: the probability of failing
to reject H0 given that H0 is false, i.e.
β = P(fail to reject H0 | H0 is false)
• The probability of making a Type II error is denoted β (beta). The
power of a statistical test is equal to 1 − β and measures the test's
ability to detect an effect when it exists.
• Example: Continuing with the drug example, let's say the drug
actually has a beneficial effect. A Type II error would occur if the
pharmaceutical company fails to conclude that the drug is effective
(fails to reject the null hypothesis) when, in fact, it does have a positive
effect.
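Continuing the simulation idea for Type II errors: if the drug really shifts the mean, the fraction of experiments that fail to reject H0 estimates β, and 1 − β estimates the power. The true mean of 53 (a shift of 3 units over the null value 50) and σ = 6 below are invented for illustration:

```python
import math
import random
import statistics

random.seed(7)  # fixed seed so the simulation is reproducible

def two_sided_p(sample, mu0, sigma):
    """Two-sided z-test p-value for a sample with known sigma."""
    z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(len(sample)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha, trials = 0.05, 2000
misses = 0
for _ in range(trials):
    # HA is true here: the real mean is 53, not the null value 50.
    sample = [random.gauss(53, 6) for _ in range(30)]
    if two_sided_p(sample, 50, 6) >= alpha:
        misses += 1  # failed to reject a false H0: a Type II error

beta = misses / trials
print("estimated beta:", beta)       # probability of a Type II error
print("estimated power:", 1 - beta)  # ability to detect the real effect
```

Increasing the sample size or the true effect size in this sketch drives β down and the power up, which is why power analysis is done before collecting data.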