
Module 4: Parametric Statistics

Learning outcomes:
At the end of the lesson, you are expected to:
 Discuss the statistical hypothesis;
 Explain the types of error in statistical analysis;
 Categorize one-tailed and two-tailed tests of hypothesis; and
 Familiarize yourself with the six basic steps in hypothesis testing.
Parametric Statistics is a statistical procedure for testing hypotheses or estimating parameters based on population parameters. It is a branch of Statistics that requires stringent assumptions (e.g., normality, homogeneity of variances) concerning the distribution of the population about which inferences will be drawn. A test of this kind is based on specific assumptions about the probability distribution of the population values or the sizes of the population parameters. Generally, parametric statistics are more powerful than nonparametric statistics.
Parametric statistical tests refer to a large category of tests that assume the normality of the underlying populations from which the samples were randomly selected. These tests require that the data sets be normally distributed and that their variances be equal. In addition, they require an interval or ratio level of measurement. They specify certain conditions about the distribution of responses in the population from which the research sample was drawn. A parametric statistical test involves the estimation of the value of at least one population parameter.
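The assumption checks named above (normality and homogeneity of variances) can be carried out in software before a parametric test is chosen. The short Python sketch below is only an illustration and is not part of the module; the data values and group names are made up.

```python
# A minimal sketch of checking the two assumptions named above -- normality and
# homogeneity of variances -- before choosing a parametric test. Data are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=50, scale=5, size=30)   # hypothetical interval-level data
group_b = rng.normal(loc=53, scale=5, size=30)

# Shapiro-Wilk test: H0 is that the sample comes from a normal distribution.
print("Shapiro-Wilk, group A:", stats.shapiro(group_a))
print("Shapiro-Wilk, group B:", stats.shapiro(group_b))

# Levene's test: H0 is that the groups have equal variances.
print("Levene's test:", stats.levene(group_a, group_b))
# If neither check rejects its H0, a parametric test (e.g., a t-test) is reasonable.
```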

4.1 Statistical Hypothesis


A statistical hypothesis is a preconceived idea about the value of a population parameter which can be validated or verified through statistical procedures or tests. It is an assertion, presumption, or tentative theory which aims to explain facts about the real world. In attempting to reach decisions, it is advantageous to make assumptions (or educated guesses) about the target populations. Such assumptions, which may or may not be correct, are called statistical hypotheses. They are generally statements about the probability distributions of the populations. These hypotheses are then subjected to statistical testing.
In hypothesis testing, we start with an assumed or hypothesized value of a population parameter. After a random, representative sample is collected, we compare a sample statistic, such as the sample mean (x̄), with the hypothesized parameter, such as the population mean (μ). Then, we either accept or reject the hypothesized value as being correct. The hypothesized value is rejected only if the sample result is clearly unlikely to occur when the hypothesis is true (i.e., the computed value is greater than or equal to the tabulated value at a certain level of significance). We should make it clear at this point that the acceptance of a statistical hypothesis is a result of insufficient evidence to reject it and does not necessarily imply that it is true.
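As an illustration of the comparison just described, the short Python sketch below compares a sample mean with a hypothesized population mean using a one-sample z-test. All numbers (hypothesized mean, standard deviation, sample mean, sample size) are assumed values for illustration only.

```python
# A minimal sketch of comparing a sample mean with a hypothesized population mean
# using a one-sample z-test; all numerical values are assumed for illustration.
import math
from scipy import stats

mu_0 = 100             # hypothesized population mean (assumed)
sigma = 15             # assumed known population standard deviation
x_bar, n = 104.2, 36   # made-up sample mean and sample size
alpha = 0.05

z_computed = (x_bar - mu_0) / (sigma / math.sqrt(n))
z_critical = stats.norm.ppf(1 - alpha / 2)   # two-tailed critical (tabular) value

# Decision rule from the text: reject H0 if the computed value >= the tabular value.
if abs(z_computed) >= z_critical:
    print(f"z = {z_computed:.2f} >= {z_critical:.2f}: reject H0 at alpha = {alpha}")
else:
    print(f"z = {z_computed:.2f} < {z_critical:.2f}: do not reject H0")
```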

Types of hypothesis
1. Null Hypothesis
In many instances, we formulate a statistical hypothesis for the sole purpose of rejecting or nullifying it. For example, we may formulate the hypothesis that there is no difference between the yield of a crop applied with fertilizer and the yield of the same crop without fertilizer (i.e., any observed differences are due merely to fluctuations in sampling from the same population). Similarly, if we want to decide whether a given coin is balanced, we formulate the hypothesis that the coin is fair (i.e., the probability of heads, P = 0.5). Such hypotheses are often called null hypotheses and are commonly denoted by H0. In brief, the null hypothesis is often defined as a statement of no difference, a statement of equality, or a statement of no effect. The null hypothesis is the hypothesis that we formulate with the hope of rejecting it. Although we frequently use the terms accept and reject throughout this module, it is important to understand that the rejection of a hypothesis is to conclude that it is false, whereas the acceptance of a hypothesis implies only that we have no evidence to believe otherwise. With this terminology, a researcher or statistician should always state as the null hypothesis H0 that which he or she hopes to reject. If he or she is interested in a new drug, he or she should assume that it is no better than the drug already on the market and then set out to reject this contention. Similarly, to prove that a new teaching method is superior to the traditional method, we test the hypothesis that there is no difference between the two methods.
2. Alternative Hypothesis
Any hypothesis that differs from a given null hypothesis is called an alternative hypothesis; it is sometimes considered the researcher's working hypothesis. For example, if H0: p = 0.5, the alternative hypothesis might be p ≠ 0.5, p > 0.5, p < 0.5, or p = 0.75. A hypothesis alternative to the null hypothesis is denoted by Ha. Rejection of the null hypothesis leads to the acceptance of the alternative hypothesis.
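For the coin example above (H0: p = 0.5 versus Ha: p ≠ 0.5), a test of a proportion can be carried out as in the hypothetical Python sketch below. The count of 60 heads in 100 tosses is an assumed figure, and the sketch uses a p-value rather than the tabular-value comparison used later in this module.

```python
# A minimal sketch of testing H0: p = 0.5 against Ha: p != 0.5 for a coin,
# using an exact binomial test (scipy >= 1.7). The 60/100 count is made up.
from scipy import stats

result = stats.binomtest(k=60, n=100, p=0.5, alternative="two-sided")
print("p-value:", result.pvalue)
# A p-value below the chosen significance level would lead us to reject H0 and
# accept the alternative hypothesis that the coin is not fair.
```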
4.2 Type I and Type II Errors
In actual conditions, making a decision is sometimes difficult since we are not always sure whether we made the correct or wrong decision. Similarly, in hypothesis testing, there is also the probability of committing errors in deciding whether to accept or reject the hypothesis. There are two (2) possible types of errors that may be committed in hypothesis testing. In statistical tests, we call these errors Type I error and Type II error. If we reject a hypothesis when it should be accepted, a Type I error has been committed. On the other hand, if we accept a hypothesis when it should be rejected, a Type II error has been made. In either case, a wrong decision or error has occurred.
The probability of committing a Type I error is called the level of significance or significance level of the test and is denoted by the Greek letter α ("alpha"). This is generally specified before any samples are drawn so that the results obtained will not influence our choice. In practice, a significance level of 5% or 1% is typically used, although other values are also possible. For example, if the 1% significance level is chosen in designing a decision rule, then there is about 1 chance in 100 that we would reject the hypothesis when it should be accepted. That is, we are about 99% confident that we have made the right decision. In such a case, we say that the hypothesis has been rejected at the 1% level of significance, which means there is a 1% probability of wrongly rejecting it. The probability of a Type I error is always equal to the level of significance used in testing the null hypothesis (H0).
The probability of committing a Type II error is generally designated by the Greek letter β ("beta"). The value of β is equal to the probability of committing an error by accepting H0 when in fact it is false.
For a test of hypothesis to be good, it must be designed so as to minimize errors of decision. This is easier said than done, because for any given sample size, an attempt to lower one type of error is generally accompanied by an increase in the other type of error. In reality, one type of error may be more serious than the other; therefore, a compromise should be made in favor of limiting the more serious error. The only way to reduce both Type I and Type II errors is to increase the sample size (n), which may or may not be possible. Table 1.1 summarizes the types of decisions and the possible consequences and errors of the decisions made in hypothesis testing.

Possible decisions or          Possible states
courses of action              Null hypothesis is true           Null hypothesis is false

Accept null hypothesis         Correct decision                  Type II error
                               (Probability = 1 − α)             (Probability = β)

Reject null hypothesis         Type I error                      Correct decision
                               (Probability = α)                 (Probability = 1 − β)
Usually, 1 − α is referred to as the level of confidence, while 1 − β is called the power of the test. The first correct decision in hypothesis testing is that of accepting H0 when it is really true. The probability of making this correct decision is 1 − α. Hence, if the level of significance used in conducting a hypothesis test is 5%, then the probability of correctly accepting a true hypothesis is 95% (the level of confidence).
A second correct decision in hypothesis testing is made if H0 is rejected when it is really false. The probability of this correct decision is 1 − β. The power of a test is its ability to discriminate a true hypothesis from a false one, that is, its ability to reject a false hypothesis. In this regard, it is very important to determine the appropriate sample size (n) when doing statistical research.
In general, the four decisions of hypothesis testing, including the probability attached to each of them, can be summarized as shown below; a short simulation sketch follows the list.
1. Rejecting H0 when in fact it is true. This is a Type I error with a probability of occurrence specified by α.
2. Accepting H0 when it is true. This correct decision has a probability value of 1 − α.
3. Accepting H0 when in fact it is false. This is a Type II error with a probability value of β.
4. Rejecting H0 when in fact it is false (and should be rejected). This correct decision is the power of a test, with a probability value of 1 − β.
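The simulation sketch below (not part of the module) illustrates the four decisions listed above by repeatedly drawing samples under a true and under a false null hypothesis and estimating α, β, and the power 1 − β for a two-tailed one-sample z-test. All numerical settings are assumed for illustration.

```python
# A minimal simulation sketch that estimates alpha, beta, and power for a
# two-tailed one-sample z-test; all settings are assumed for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_0, sigma, n, alpha = 100, 15, 36, 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)   # two-tailed critical value
reps = 20_000

def rejection_rate(true_mu):
    """Fraction of simulated samples in which H0: mu = mu_0 is rejected."""
    samples = rng.normal(true_mu, sigma, size=(reps, n))
    z = (samples.mean(axis=1) - mu_0) / (sigma / np.sqrt(n))
    return np.mean(np.abs(z) >= z_crit)

type_1 = rejection_rate(true_mu=100)   # H0 true: rejecting it is a Type I error
power = rejection_rate(true_mu=105)    # H0 false: rejecting it is the correct decision
print(f"Estimated Type I error (alpha): {type_1:.3f}")   # close to 0.05
print(f"Estimated power (1 - beta):     {power:.3f}")
print(f"Estimated Type II error (beta): {1 - power:.3f}")
```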

4.3 One-tailed and Two-tailed Tests


In all statistical hypothesis tests, the equality (no-effect or no-difference) sign must be stated in the null hypothesis (H0). This is so because, when carrying out a hypothesis test, we center the sampling distribution of the test statistic over the value stated in H0. This is analogous to the legal principle of 'innocent until proven guilty'.

The requirement of always having the equal sign (=) in H0 is thus essential in all hypothesis testing.

Since the alternative hypothesis (Ha) is formulated to be different from H0, we can consider these three types of alternative hypothesis, as shown below:

a.) Ha: μ > μ0 (one-tailed/sided test to the right)
[Figure: the acceptance region (area 1 − α) lies to the left of the critical/tabular value; the rejection region (area α) is the right tail.]

b.) Ha: μ < μ0 (one-tailed/sided test to the left)
[Figure: the acceptance region (area 1 − α) lies to the right of the critical/tabular value; the rejection region (area α) is the left tail.]

c.) Ha: μ ≠ μ0 (two-tailed/sided test in both directions)
[Figure: the acceptance region (area 1 − α) lies between the two critical/tabular values; the rejection regions (area α/2 each) are the two tails.]

If Ha: μ > μ0 is used, then from the direction of the inequality we use a one-tailed/sided test to the right (unidirectional), and therefore the rejection region is located at the right tail of the distribution. Similarly, if Ha: μ < μ0, we use a one-sided test to the left, and the rejection region is located on the left side of the distribution, as shown in the illustration above. Finally, if Ha: μ ≠ μ0 (either > or <), we use a two-sided or two-tailed test, and the rejection region is located on both tails or sides of the distribution (non-directional test), in which case we divide the level of significance into two (α/2 in each tail).
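The hypothetical Python sketch below shows how the three alternative hypotheses above determine the critical (tabular) values for a z-test at α = 0.05; the resulting values agree with the usual z-table.

```python
# A minimal sketch of the critical (tabular) values implied by the three types of
# alternative hypothesis for a z-test at alpha = 0.05.
from scipy import stats

alpha = 0.05
right_tail = stats.norm.ppf(1 - alpha)     # Ha: mu > mu_0, one-tailed to the right
left_tail = stats.norm.ppf(alpha)          # Ha: mu < mu_0, one-tailed to the left
two_tail = stats.norm.ppf(1 - alpha / 2)   # Ha: mu != mu_0, two-tailed (alpha/2 per tail)

print(f"One-tailed (right): reject H0 if z >= {right_tail:.3f}")
print(f"One-tailed (left):  reject H0 if z <= {left_tail:.3f}")
print(f"Two-tailed:         reject H0 if |z| >= {two_tail:.3f}")
```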

4.4 Six Basic Steps in Hypothesis Testing


Hypothesis testing as a statistical procedure requires a systematic approach. The following are the general procedures involved in testing a statistical hypothesis.
1. State or formulate the null hypothesis (H0) and the alternative hypothesis (Ha).
2. Specify the level of significance α to be used. The level of significance is the statistical standard specified for rejecting the null hypothesis (H0). If the 5% level of significance is used, there is a probability of 0.05 of rejecting H0 when it is true. This is called a Type I error, and the probability of a Type I error is always equal to the level of significance (α) that is used. The most frequently used levels of significance in hypothesis testing are the 5% and 1% levels. A Type II error occurs if H0 is accepted when in fact it is false.
3. Select the most appropriate test statistic or statistical tool. There is a specific statistical tool or test statistic that is appropriate for each kind of statistical hypothesis. Also identify the type of statistical test as either a one-tailed test or a two-tailed test, depending on how the alternative hypothesis is expressed. If the alternative hypothesis uses the > or < sign, the test is classified as a one-sided/tailed test and the rejection region is located either at the right tail or at the left tail of the distribution. If the alternative hypothesis uses the not-equal sign (≠), the test is considered two-sided/tailed and the rejection region is located at both tails of the distribution. In this case, the level of significance should be divided by two (α/2 in each tail).
4. Compute the actual value of the test statistic from the sample data (e.g., z-test, t-test, F-test, etc.).
5. Establish the critical (rejection) region or the tabular value for the selected test statistic from the statistical table (a statistical table will be given to you), based on the degrees of freedom (for the t-test and F-test) and the level of significance (α). Take note of the type of statistical test to be used, whether one-tailed or two-tailed, as elaborated in steps 1 and 3.
6. Make the decision, conclusion, and recommendation/s. The computed or observed value of the sample statistic is compared with the tabular or critical value (or values) of the test statistic. This is the basis for whether to accept or reject the null hypothesis. Accepting H0 implies rejecting the alternative hypothesis (Ha); in like manner, rejecting H0 means accepting Ha. Given below are guidelines in making a decision for a given null hypothesis:
6.1 Reject the null hypothesis (H0) if the computed value is greater than or equal to (≥) the tabular value.
6.2 Accept the null hypothesis (H0) if the computed value is less than (<) the tabular value.

Making the conclusion and recommendation is the last part of hypothesis testing. At this point, the researcher explains his or her decision based on the result of the statistical analysis. Interpreting the outcome of the research should not end by simply saying that the null hypothesis is accepted or rejected. It is the primary obligation of the researcher to further explain the implication of the result, draw a conclusion by answering the original problem, and make the necessary recommendations. In some instances, this should be supported by a review of related literature.

Summary on Steps in Hypothesis Testing


1. State the null and the alternative hypotheses
2. Specify level of significance (α )
3. Select statistical tool
4. Computations
5. Critical region or the tabular value
6. Decision
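As a worked illustration of the six steps summarized above, the hypothetical Python sketch below tests H0: μ = 50 against Ha: μ ≠ 50 with a one-sample t-test; the sample values are made up for illustration only.

```python
# A minimal worked sketch (made-up data) that follows the six steps summarized above,
# using a one-sample t-test as the chosen statistical tool.
import numpy as np
from scipy import stats

# Step 1: H0: mu = 50 versus Ha: mu != 50 (two-tailed).
mu_0 = 50
# Step 2: specify the level of significance.
alpha = 0.05
# Step 3: select the tool -- one-sample t-test (population sigma unknown, small n).
sample = np.array([52.1, 49.8, 53.4, 51.0, 48.7, 52.9, 50.5, 54.2, 49.1, 51.8])
# Step 4: compute the test statistic.
n = len(sample)
t_computed = (sample.mean() - mu_0) / (sample.std(ddof=1) / np.sqrt(n))
# Step 5: critical (tabular) value at df = n - 1, two-tailed.
t_critical = stats.t.ppf(1 - alpha / 2, df=n - 1)
# Step 6: decision -- reject H0 if the computed value is >= the tabular value.
print(f"t computed = {t_computed:.3f}, t critical = {t_critical:.3f}")
if abs(t_computed) >= t_critical:
    print("Decision: reject H0; the sample mean differs significantly from 50.")
else:
    print("Decision: do not reject H0.")
```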
ACTIVITY 4: PARAMETRIC TEST
Direction:
General instructions: In every activity, always indicate your name and the activity number. You may use bond paper or yellow pad paper in answering the activity. Pass your activity on the set date and time. Answer what is being asked. Explain your answer in not more than 5 sentences.
1. When does a researcher accept and reject a null hypothesis?
2. Discuss and differentiate one-tailed and two-tailed test of hypothesis.
3. Differentiate parametric and non-parametric test.
4. Give all the assumptions and formulas of the following parametric tests.
Parametric test                                              Assumptions          Formula
z-test for one-sample mean
z-test: Testing the difference between two means
  (large independent samples)
t-test for one-sample mean
Independent sample t-test
Paired t-test
For 25 points.
At the end of this module, I have learned that…
___________________________________________________________________________________________________________
___________________________________________________________________________________________________________
___________________________________________________________________________________________________________
___________________________________________________________________________________________________________
____________________________________________________________________.
HONESTY is the foundation of TRUST.
