
1. Hypotheses About a Population Mean (Sigma Known) 299

>Z Table
>One of the most basic hypothesis tests is a test about a population mean.
>A business researcher might be interested in testing to determine whether an established or
accepted mean value for an industry is still true or in testing a hypothesized mean value for a
new theory or product.
>As an example, a computer products company sets up a telephone service to assist
customers by providing technical support. The average wait time during weekday hours is 37
minutes. However, a recent hiring effort added technical consultants to the system, and
management believes that the average wait time decreased, and they want to prove it. Other
business scenarios resulting in hypothesis tests of a single mean might include the following:
■ A financial investment firm wants to test to determine whether the average hourly change
in the Dow Jones Average over a 10-year period is +0.25.
■ A manufacturing company wants to test to determine whether the average thickness of a
plastic bottle is 2.4 millimeters.
■ A retail store wants to test to determine whether the average age of its customers is less
than 40 years.
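The test statistic here is z = (x̄ − μ0) / (σ/√n). As an illustrative sketch in Python, using hypothetical numbers for the wait-time scenario (a sample of 64 calls averaging 34.5 minutes, with σ assumed to be 8 minutes):

```python
import math

def z_test_mean(sample_mean, mu0, sigma, n):
    """One-sample z statistic for a hypothesized mean (sigma known)."""
    return (sample_mean - mu0) / (sigma / math.sqrt(n))

# Hypothetical numbers: 64 sampled calls averaged 34.5 minutes; sigma assumed 8.
z = z_test_mean(34.5, 37.0, 8.0, 64)
print(round(z, 2))  # -2.5
```

Since -2.5 falls below the lower-tail critical value of -1.645 at α = .05, this sketch would reject the null hypothesis, supporting the claim that the mean wait time decreased.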

2. Hypothesis Testing About a Proportion 315

>Z Table
>Proportion = probability of the outcome of interest on a single trial
>Data analysis used in business decision making often contains proportions to describe such
aspects as market share, consumer makeup, quality defects, on-time delivery rate, profitable
stocks, and others.
>Business surveys often produce information expressed in proportion form, such as .45 of all
businesses offer flexible hours to employees or .88 of all businesses have Web sites.
>Business researchers conduct hypothesis tests about such proportions to determine whether
they have changed in some way.
>As an example, suppose a company held a 26%, or .26, share of the market for several
years. Due to a massive marketing effort and improved product quality, company officials
believe that the market share increased, and they want to prove it.
>Other examples of hypothesis testing about a single population proportion might include:
■ A market researcher wants to test to determine whether the proportion of new car
purchasers who are female has increased.
■ A financial researcher wants to test to determine whether the proportion of companies that
were profitable last year in the average investment officer’s portfolio is .60.
■ A quality manager for a large manufacturing firm wants to test to determine whether the
proportion of defective items in a batch is less than .04.
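The statistic for a proportion test is z = (p̂ − p) / √(p·q/n). A minimal Python sketch of the market-share scenario, with hypothetical survey numbers (120 of 400 sampled customers buying the company's brand):

```python
import math

def z_test_proportion(x, n, p0):
    """z statistic for a hypothesized population proportion p0."""
    p_hat = x / n                                   # sample proportion
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# Hypothetical survey: 120 of 400 customers bought the company's brand (p_hat = .30).
z = z_test_proportion(120, 400, 0.26)
print(round(z, 2))  # 1.82
```

With an upper-tail critical value of 1.645 at α = .05, this hypothetical result would support the claim that market share increased.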
3. Hypotheses About a Population Mean (Sigma Unknown) 308

>T Table
> Very often when a business researcher is gathering data to test hypotheses about a single
population mean, the value of the population standard deviation is unknown and the
researcher must use the sample standard deviation as an estimate of it.
>In such cases, the z test cannot be used; the t test is used instead.
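The statistic becomes t = (x̄ − μ0) / (s/√n) with n − 1 degrees of freedom. A sketch in Python, reusing the bottle-thickness example with hypothetical readings:

```python
import math
import statistics

def t_test_mean(sample, mu0):
    """One-sample t statistic when sigma is unknown (s estimates it)."""
    n = len(sample)
    s = statistics.stdev(sample)            # sample standard deviation (n - 1 divisor)
    t = (statistics.mean(sample) - mu0) / (s / math.sqrt(n))
    return t, n - 1                         # statistic and degrees of freedom

# Hypothetical bottle-thickness readings (mm) tested against mu0 = 2.4 mm.
t, df = t_test_mean([2.5, 2.3, 2.6, 2.4, 2.5, 2.7, 2.4, 2.6], 2.4)
print(round(t, 2), df)  # 2.16 7
```

The computed t would then be compared against the t table with 7 degrees of freedom.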

4. Hypothesis Testing & Confidence Intervals About the Difference in Two Means:
Independent Samples & Population Variances Unknown (Sigma Unknown) 355

>T Table
>This technique is used whenever the population variances are unknown (and hence the
sample variances must be used) and the samples are independent (not related in any
way).
>The hypothesis test presented in this section is a test that compares the means of two
samples to determine whether there is a difference in the two population means from which
the samples come.
> An assumption underlying this technique is that the measurement or characteristic being
studied is normally distributed for both populations.
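When the two unknown population variances are also assumed equal, the sample variances are pooled. A minimal sketch of the pooled-variance t statistic in Python, with hypothetical data:

```python
import math
import statistics

def pooled_t(x, y):
    """Two-sample t statistic with pooled variance (independent samples,
    population variances unknown but assumed equal)."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * statistics.variance(x) +
           (n2 - 1) * statistics.variance(y)) / (n1 + n2 - 2)   # pooled variance
    t = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical measurements from two independent samples.
t, df = pooled_t([10, 12, 11, 13, 14], [8, 9, 10, 9, 9])
print(round(t, 2), df)  # 3.87 8
```

The statistic is compared to the t table with n1 + n2 − 2 degrees of freedom.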

5. Statistical Inferences for 2 Related Populations 365

>T Table
>A method is presented to analyze dependent samples or related samples.
>Some researchers refer to this test as the matched-pairs test.
>Others call it the t test for related measures or the correlated t test.
>Sometimes as an experimental control mechanism, the same person or object is measured
both before and after a treatment.
>Certainly, the after measurement is not independent of the before measurement because the
measurements are taken on the same person or object in both cases.
>Table 10.4 gives data from a hypothetical study in which people were asked to rate
a company before and after one week of viewing a 15-minute DVD of the company twice
a day.
>The before scores are one sample and the after scores are a second sample, but each
pair of scores is related because the two measurements apply to the same person.
>The before scores and the after scores are not likely to vary from each other as much as
scores gathered from independent samples because individuals bring their biases about
businesses and the company to the study.
>These individual biases affect both the before scores and the after scores in the same way
because each pair of scores is measured on the same person.
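The matched-pairs test works on the differences d = after − before, using t = d̄ / (s_d/√n) with n − 1 degrees of freedom. A sketch in Python with hypothetical before/after ratings for five people:

```python
import math
import statistics

def paired_t(before, after):
    """Matched-pairs t statistic on the differences d = after - before."""
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1

# Hypothetical company ratings for five people before and after viewing the DVD.
t, df = paired_t([5, 4, 7, 3, 6], [6, 6, 8, 5, 7])
print(round(t, 2), df)  # 5.72 4
```

Because each person appears in both samples, only the within-person differences enter the computation, which removes the individual biases described above.
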
6. One-Way ANOVA, or the Completely Randomized Design 406

>F Table
>ANOVA = analysis of variance
>One of the simplest experimental designs is the completely randomized design.
>In the completely randomized design, subjects are assigned randomly to treatments.
>The design contains only one independent variable, with two or more treatment levels
(the number of columns), or classifications.
>If only two treatment levels, or classifications, of the independent variable are present, the
design is the same one used to test the difference in means of two independent populations
presented in Chapter 10, which used the t test to analyze the data.
>We will focus on ANOVA with three or more classification levels.
>ANOVA will be used to analyze the data that result from the treatments.
>A completely randomized design could be structured for a tire-quality study in which tire
quality is the independent variable and the treatment levels are low, medium, and high quality.
>The dependent variable might be the number of miles driven before the tread fails state
inspection.
>A study of daily sales volumes for Wal-Mart stores could be undertaken by using an
ANOVA with demographic setting as the independent variable.
>The treatment levels, or classifications, would be inner-city stores, suburban stores, stores
in medium-sized cities, and stores in small towns.
>The dependent variable would be sales dollars.
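The F statistic compares the between-treatment mean square to the within-treatment (error) mean square. A sketch in Python for the tire-quality scenario, with hypothetical mileage data for three quality levels:

```python
import statistics

def one_way_anova(groups):
    """F statistic for a completely randomized design (one-way ANOVA)."""
    all_vals = [v for g in groups for v in g]
    grand = statistics.mean(all_vals)
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - statistics.mean(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical miles (thousands) before tread failure: low, medium, high quality.
f, df1, df2 = one_way_anova([[30, 28, 32], [40, 38, 42], [50, 52, 48]])
print(round(f, 1), df1, df2)  # 75.0 2 6
```

The computed F is compared to the F table with (df1, df2) degrees of freedom.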

7. Mann-Whitney U Test (Rank) 678

>The Mann-Whitney U test is a nonparametric counterpart of the t test used to compare
the means of two independent populations.
>This test was developed by Henry B. Mann and D. R. Whitney in 1947.
>Recall that the t test for independent samples presented in Chapter 10 can be used when
data are at least interval in measurement and the populations are normally distributed.
>However, if the assumption of a normally distributed population is invalid or if the data are
only ordinal in measurement, the t test should not be used.
>In such cases, the Mann-Whitney U test is an acceptable option for analyzing the data.
>The following assumptions underlie the use of the Mann-Whitney U test.
1. The samples are independent.
2. The level of data is at least ordinal.
>The two-tailed hypotheses being tested with the Mann-Whitney U test are as follows.
H0: The two populations are identical.
Ha: The two populations are not identical.
>Computation of the U test begins by arbitrarily designating two samples as group 1 &
group 2.
>The data from the two groups are combined into one group, with each data value retaining a
group identifier of its original group.
>The pooled values are then ranked from 1 to n, with the smallest value being assigned a
rank of 1.
>The sum of the ranks of values from group 1 is computed and designated as W1 and the
sum of the ranks of values from group 2 is designated as W2.
>The Mann-Whitney U test is implemented differently for small samples than for large
samples.
>If both n1 and n2 ≤ 10, the samples are considered small, and the U table is used.
>If either n1 or n2 > 10, the samples are considered large, and the z table is used.
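The ranking steps above can be sketched in Python for the small-sample case, with hypothetical ordinal scores (ties receive average ranks):

```python
def mann_whitney_u(group1, group2):
    """Small-sample Mann-Whitney U: pool the data, rank it, and take
    U = min(U1, U2), where U1 = n1*n2 + n1*(n1+1)/2 - W1."""
    pooled = sorted(group1 + group2)

    def rank(x):                      # average rank of x in the pooled data
        first = pooled.index(x) + 1
        return first + (pooled.count(x) - 1) / 2

    n1, n2 = len(group1), len(group2)
    w1 = sum(rank(x) for x in group1)           # rank sum of group 1
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - w1
    u2 = n1 * n2 - u1
    return min(u1, u2)

# Hypothetical scores from two independent samples (n1 = 4, n2 = 5).
print(mann_whitney_u([3, 4, 2, 6], [9, 7, 5, 10, 8]))  # 1.0
```

The smaller U is compared to the critical value in the U table.
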

8. Wilcoxon Matched-Pairs Signed Rank Test 686

>The Mann-Whitney U test presented a nonparametric alternative to the t test for two
independent samples.
>If the two samples are related or dependent, the U test is not applicable.
>A test that does handle related data is the Wilcoxon matched-pairs signed rank test,
which serves as a nonparametric alternative to the t test for two related samples.
>Developed by Frank Wilcoxon in 1945, the Wilcoxon test, like the t test for two related
samples, is used to analyze several different types of studies when the data of one group are
related to the data in the other group, including before-and-after studies, studies in which
measures are taken on the same person or object under two different conditions, and studies
of twins or other relatives.
>Two assumptions underlie the use of this technique.
1. The paired data are selected randomly.
2. The underlying distributions are symmetrical.
>Hypotheses-
For two-tailed tests: H0: Md=0, Ha: Md ≠ 0 (Md=Median)
For one-tailed tests: H0: Md=0, Ha: Md >0 or H0: Md=0, Ha: Md <0
>If the calculated T is less than or equal to the critical T, reject the null hypothesis (for
large samples, T is converted to a z statistic).
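The T statistic can be sketched in Python: rank the absolute differences (dropping zeros, with ties sharing average ranks) and take the smaller of the positive and negative rank sums. Data here are hypothetical:

```python
def wilcoxon_t(before, after):
    """Wilcoxon matched-pairs signed rank T statistic."""
    d = [b - a for b, a in zip(before, after) if b != a]   # drop zero differences
    abs_sorted = sorted(abs(x) for x in d)

    def rank(v):                      # average rank of |d| (ties share ranks)
        first = abs_sorted.index(v) + 1
        return first + (abs_sorted.count(v) - 1) / 2

    t_plus = sum(rank(abs(x)) for x in d if x > 0)
    t_minus = sum(rank(abs(x)) for x in d if x < 0)
    return min(t_plus, t_minus)

# Hypothetical before/after measurements on the same five subjects.
print(wilcoxon_t([10, 12, 9, 14, 11], [8, 11, 10, 9, 7]))  # 1.5
```

The resulting T is compared to the critical T from the Wilcoxon table.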

9. Kruskal-Wallis Test 694

>Chi-Square Table
>The nonparametric alternative to the one-way analysis of variance (ANOVA) is the Kruskal-
Wallis test, developed in 1952 by William H. Kruskal and W. Allen Wallis.
>Like the one-way ANOVA, the Kruskal-Wallis test is used to determine whether c ≥ 3
samples come from the same or different populations.
>Whereas the one-way ANOVA is based on the assumptions of normally distributed
populations, independent groups, at least interval-level data, and equal population variances,
the Kruskal-Wallis test can be used to analyze ordinal data and is not based on any
assumption about population shape.
>The Kruskal-Wallis test is based on the assumption that the c groups are independent and
that individual items are selected randomly.
>The hypotheses tested by the Kruskal-Wallis test follow.
H0: The c populations are identical.
Ha: At least one of the c populations is different.
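The K statistic, K = 12/(n(n+1)) · Σ(Wj²/nj) − 3(n+1), can be sketched in Python with hypothetical ordinal data from three independent groups:

```python
def kruskal_wallis(groups):
    """Kruskal-Wallis K: rank the pooled data (ties get average ranks), then
    K = 12 / (n(n+1)) * sum(Wj^2 / nj) - 3(n+1)."""
    pooled = sorted(v for g in groups for v in g)

    def rank(x):
        first = pooled.index(x) + 1
        return first + (pooled.count(x) - 1) / 2

    n = len(pooled)
    k = 12 / (n * (n + 1)) * sum(
        sum(rank(v) for v in g) ** 2 / len(g) for g in groups) - 3 * (n + 1)
    return k, len(groups) - 1   # K is compared to chi-square with c - 1 df

# Hypothetical data from c = 3 independent groups.
k, df = kruskal_wallis([[12, 15, 18], [22, 25, 28], [31, 34, 37]])
print(round(k, 1), df)  # 7.2 2
```

K is approximately chi-square distributed, so it is compared to the chi-square table with c − 1 degrees of freedom.
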
10. Friedman Test 699

>Chi-Square Table
>The Friedman test, developed by M. Friedman in 1937, is a nonparametric alternative to
the randomized block design.
>The randomized block design has the same assumptions as other ANOVA procedures,
including that observations are drawn from normally distributed populations.
>When this assumption cannot be met or when the researcher has ranked data, the Friedman
test provides a nonparametric alternative.
>Three assumptions underlie the Friedman test.
1. The blocks are independent.
2. No interaction is present between blocks and treatments.
3. Observations within each block can be ranked.
>The hypotheses being tested are as follows.
H0: The treatment populations are equal.
Ha: At least one treatment population yields larger values than at least one other
treatment population.
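The Friedman statistic ranks the values within each block and computes χ²r = 12/(b·c·(c+1)) · ΣRj² − 3b(c+1). A sketch in Python with hypothetical data (rows are blocks, columns are treatments):

```python
def friedman(blocks):
    """Friedman chi-square: rank values within each block (row), then
    chi2_r = 12 / (b*c*(c+1)) * sum(Rj^2) - 3*b*(c+1)."""
    b, c = len(blocks), len(blocks[0])
    rank_sums = [0.0] * c
    for row in blocks:
        srow = sorted(row)
        for j, v in enumerate(row):
            first = srow.index(v) + 1
            rank_sums[j] += first + (srow.count(v) - 1) / 2  # ties share ranks
    chi2_r = (12 / (b * c * (c + 1)) * sum(r * r for r in rank_sums)
              - 3 * b * (c + 1))
    return chi2_r, c - 1

# Hypothetical data: 4 blocks (rows), one value per treatment (columns).
chi2_r, df = friedman([[10, 20, 30], [5, 8, 9], [7, 6, 12], [3, 9, 4]])
print(chi2_r, df)  # 4.5 2
```

The statistic is compared to the chi-square table with c − 1 degrees of freedom.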

11. Chi-Square Goodness-of-Fit Test

A. For Uniform Distribution 646

>The chi-square goodness-of-fit test is used to analyze probabilities of multinomial
distribution trials along a single dimension.
>For example, if the variable being studied is economic class with three possible outcomes
of lower income class, middle income class, and upper income class, the single dimension is
economic class and the three possible outcomes are the three classes. On each trial, one and
only one of the outcomes can occur. In other words, a family unit must be classified either as
lower income class, middle income class, or upper income class and cannot be in more than
one class.
>The chi-square goodness-of-fit test compares the expected, or theoretical, frequencies
of categories from a population distribution to the observed, or actual, frequencies from a
distribution to determine whether there is a difference between what was expected and
what was observed.
>For example, airline industry officials might theorize that the ages of airline ticket
purchasers are distributed in a particular way. To validate or reject this expected distribution,
an actual sample of ticket purchaser ages can be gathered randomly, and the observed results
can be compared to the expected results with the chi-square goodness-of-fit test.
>This test also can be used to determine whether the observed arrivals at teller windows at a
bank are Poisson distributed, as might be expected.
>In the paper industry, manufacturers can use the chi-square goodness-of-fit test to
determine whether the demand for paper follows a uniform distribution throughout the year.
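The statistic is χ² = Σ(fo − fe)²/fe with k − 1 degrees of freedom. A sketch in Python for the uniform-demand example, with hypothetical quarterly frequencies:

```python
def chi_square_gof(observed, expected):
    """Chi-square goodness-of-fit: sum of (fo - fe)^2 / fe, with k - 1 df."""
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2, len(observed) - 1

# Hypothetical quarterly paper demand tested against a uniform distribution.
observed = [28, 22, 26, 24]        # observed frequencies per quarter
expected = [25, 25, 25, 25]        # uniform: total of 100 spread evenly
chi2, df = chi_square_gof(observed, expected)
print(round(chi2, 1), df)  # 0.8 3
```

A χ² this small, compared against the chi-square table with 3 degrees of freedom, would not reject the hypothesized uniform distribution.
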
B. Contingency Analysis (Chi-Square Test of Independence) 656

>The chi-square goodness-of-fit test is used to analyze the distribution of frequencies for
categories of one variable, such as age or number of bank arrivals, to determine whether the
distribution of these frequencies is the same as some hypothesized or expected distribution.
>However, the goodness-of-fit test cannot be used to analyze two variables simultaneously.
>A different chi-square test, the chi-square test of independence, can be used to analyze
the frequencies of two variables with multiple categories to determine whether the two
variables are independent.
>Many times this type of analysis is desirable.
>For example, a market researcher might want to determine whether the type of soft drink
preferred by a consumer is independent of the consumer’s age. An organizational
behaviourist might want to know whether absenteeism is independent of job classification.
Financial investors might want to determine whether type of preferred stock investment is
independent of the region where the investor resides.
>The chi-square test of independence can be used to analyze any level of data measurement,
but it is particularly useful in analyzing nominal data.
>The business researcher would tally the frequencies of responses to these two questions
into a two-way table called a contingency table.
>Because the chi-square test of independence uses a contingency table, this test is sometimes
referred to as contingency analysis.
>If the two variables are independent, they are not related.
>In a sense, the chi-square test of independence is a test of whether the variables are related.
>The null hypothesis for a chi-square test of independence is that the two variables are
independent (not related).
>If the null hypothesis is rejected, the conclusion is that the two variables are not
independent and are related.
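Each expected cell frequency is (row total × column total) / n, and the statistic has (r − 1)(c − 1) degrees of freedom. A sketch in Python using a hypothetical 2x2 contingency table:

```python
def chi_square_independence(table):
    """Chi-square test of independence on a contingency table.
    Expected cell frequency: (row total * column total) / n."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (observed - expected) ** 2 / expected
    df = (len(row_tot) - 1) * (len(col_tot) - 1)
    return chi2, df

# Hypothetical 2x2 table, e.g. soft-drink preference (columns) by age group (rows).
chi2, df = chi_square_independence([[20, 30], [30, 20]])
print(chi2, df)  # 4.0 1
```

With a critical value of 3.841 at α = .05 and 1 degree of freedom, this hypothetical table would lead to rejecting independence.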
