GAGE UNIVERSITY COLLEGE

DEPARTMENT OF MBA

BUSINESS RESEARCH METHODS


GROUP ASSIGNMENT-1

Prepared By:
Eyerusalem Abebe ------------------ MMBA/227/14
Ahmedin Sherefa ------------------ MMBA/378/14
Derbew Tafere ----------------------MMBA/264/14
Dereje Gemechu -------------------- MMBA/261/14
Habtamu Siyoum -------------------- MMBA/260/14
Selam Tsegaye -----------------------MMBA/248/14

Submitted to: Dr. Fetene B.

April, 2022
Hypothesis Testing

Hypothesis testing is the formal procedure statisticians use to decide whether the data support a given hypothesis. It is a four-step process: state the hypotheses, create an analysis plan, analyze the data, and then interpret the results. These tests are useful because they let you back your claims with evidence. If a test supports your hypothesis, you can publish that information to let people know what you have found.
For example, a cleaning company can advertise that its product kills 99% of all germs if a hypothesis test of its data supports that claim.
While these tests can be very helpful, there is a danger when it comes to interpreting the results: it is possible to make two different kinds of errors.

Type I Errors

The first type is called a type I error. This type of error happens when you conclude that the null hypothesis is false when it is actually true. The null hypothesis is the default claim that the test starts from. If our null hypothesis is that dogs live longer than cats, a type I error would be like saying dogs don't live longer than cats when, in fact, they do. To help you remember a type I error, think of it as having just one wrong: you are wrongly concluding that the null hypothesis is false. In statistics, we label the probability of making this kind of error with this symbol:

α
It is called alpha, and it is a value that you decide on before running the test. Usually it is 0.05, which means that you accept a 5% chance of making a type I error. The lower the alpha, the lower the risk of making such an error. The tricky part with setting alpha is that if you set it too low, the test may fail to catch the really small differences that may be there.
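To make that 5% chance concrete, here is a minimal Python simulation (an illustration we have added, not part of the original lesson; the function names are our own). We repeatedly flip a fair coin 100 times, test the null hypothesis that the coin is fair, and count how often the test wrongly rejects that true null at alpha = 0.05:

```python
import math
import random

def p_value_fair_coin(flips):
    """Two-sided z-test p-value for H0: the coin is fair (p = 0.5)."""
    n = len(flips)
    heads = sum(flips)
    se = math.sqrt(0.25 * n)  # standard error of the head count under H0
    z = (heads - 0.5 * n) / se
    # Two-sided p-value from the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
ALPHA = 0.05
trials = 2000
false_positives = 0
for _ in range(trials):
    # The coin really is fair, so the null hypothesis is true
    flips = [random.random() < 0.5 for _ in range(100)]
    if p_value_fair_coin(flips) < ALPHA:
        false_positives += 1  # type I error: rejecting a true null

rate = false_positives / trials
print(f"Observed type I error rate: {rate:.3f}")  # close to alpha = 0.05
```

The observed rate hovers near 0.05, which is exactly what alpha promises: even when the null hypothesis is true, about 5% of tests will reject it by chance.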

Type II Errors

The other type of error is called a type II error. This type of error happens when you conclude that the null hypothesis is true when it is actually false. For our null hypothesis that dogs live longer than cats, it would be like saying that dogs do live longer than cats when, in fact, they don't. To help you remember a type II error, think of two wrongs: the null hypothesis is wrong, and you are wrongly accepting it. The probability of making a type II error is labeled with a beta symbol like this:

β
This type of error can be reduced by making sure that your sample size, the number of test subjects you have, is large enough for real differences to be spotted. So for the dogs and cats, this means you need to gather data about enough dogs and cats to see a real difference between them. If you have information about just one dog and one cat, you can't say whether the statement that dogs live longer than cats is true or not. If that one dog happens to outlive that one cat, you might mistakenly conclude that dogs do live longer than cats even when the opposite is true, because your sample size isn't large enough for you to see the real difference.

If you take this beta value and subtract it from 1 (1 - beta), you get what is called the power of your test. The higher the power of your test, the less likely you are to make a type II error.
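The link between sample size, beta, and power can be sketched in Python (again an illustration we have added; the bias of 0.6 and the function names are our own choices). We simulate a coin whose true bias is 0.6, so the null hypothesis "the coin is fair" is false. The fraction of tests that fail to reject estimates beta, and 1 - beta is the power, which grows with the sample size:

```python
import math
import random

def p_value_fair_coin(flips):
    """Two-sided z-test p-value for H0: the coin is fair (p = 0.5)."""
    n = len(flips)
    z = (sum(flips) - 0.5 * n) / math.sqrt(0.25 * n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def estimated_power(n, true_p=0.6, alpha=0.05, trials=1000):
    """Estimate power = 1 - beta for a coin whose true bias is true_p."""
    misses = 0
    for _ in range(trials):
        flips = [random.random() < true_p for _ in range(n)]
        if p_value_fair_coin(flips) >= alpha:
            misses += 1  # type II error: failing to reject a false null
    return 1 - misses / trials  # power = 1 - beta

random.seed(7)
for n in (20, 100, 400):
    print(f"n = {n:3d}: estimated power = {estimated_power(n):.2f}")
```

With only 20 flips the test usually misses the bias (low power, high beta); with 400 flips it almost always detects it. This is the sample-size effect described above: more data means real differences are much harder to miss.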
