Chapter 1
Statistics
Collection of methods for planning experiments, obtaining data, and then organizing,
summarizing, presenting, analyzing, interpreting, and drawing conclusions.
Variable
Characteristic or attribute that can assume different values
Random Variable
A variable whose values are determined by chance.
Population
All subjects possessing a common characteristic that is being studied.
Sample
A subgroup or subset of the population.
Parameter
Characteristic or measure obtained from a population.
Statistic (not to be confused with Statistics)
Characteristic or measure obtained from a sample.
Descriptive Statistics
Collection, organization, summarization, and presentation of data.
Inferential Statistics
Generalizing from samples to populations using probabilities. Performing hypothesis testing,
determining relationships between variables, and making predictions.
Qualitative Variables
Variables which assume non-numerical values.
Quantitative Variables
Variables which assume numerical values.
Discrete Variables
Variables which assume a finite or countable number of possible values. Usually obtained by
counting.
Continuous Variables
Variables which can assume an infinite number of values between any two given values. Usually
obtained by measuring.
STAT
Statistics: Introduction
Population vs Sample
The population includes all objects of interest whereas the sample is only a portion of the
population. Parameters are associated with populations and statistics with samples. Parameters
are usually denoted using Greek letters (mu, sigma) while statistics are usually denoted using
Roman letters (x, s).
There are several reasons why we don't work with populations. They are usually large, and it is
often impossible to get data for every object we're studying. Sampling does not usually occur
without cost, and the more items surveyed, the larger the cost.
We compute statistics, and use them to estimate parameters. The computation is the first part of
the statistics course (Descriptive Statistics) and the estimation is the second part (Inferential
Statistics)
Discrete vs Continuous
Discrete variables are usually obtained by counting. There are a finite or countable number of
choices available with discrete data. You can't have 2.63 people in the room.
Continuous variables are usually obtained by measuring. Length, weight, and time are all
examples of continuous variables. Since continuous variables are real numbers, we usually round
them. This implies a boundary depending on the number of decimal places. For example: 64 is
really anything 63.5 <= x < 64.5. Likewise, if there are two decimal places, then 64.03 is really
anything 64.025 <= x < 64.035. Boundaries always have one more decimal place than the data
and end in a 5.
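The boundary rule can be sketched in Python (the function name is my own, not from the text):

```python
def class_boundaries(value, decimals=0):
    """Return (lower, upper) boundaries for a value rounded to `decimals` places.
    Boundaries have one more decimal place than the data and end in a 5."""
    half_unit = 0.5 * 10 ** (-decimals)
    return value - half_unit, value + half_unit

print(class_boundaries(64))        # (63.5, 64.5)
print(class_boundaries(64.03, 2))  # approximately (64.025, 64.035)
```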
Levels of Measurement
There are four levels of measurement: Nominal, Ordinal, Interval, and Ratio. These go from
lowest level to highest level. Data is classified according to the highest level which it fits. Each
additional level adds something the previous level didn't have.
Types of Sampling
There are five types of sampling: Random, Systematic, Convenience, Cluster, and Stratified.
Random sampling is analogous to putting everyone's name into a hat and drawing out several
names. Each element in the population has an equal chance of occurring. While this is the
preferred way of sampling, it is often difficult to do. It requires that a complete list of every
element in the population be obtained. Computer generated lists are often used with random
sampling. You can generate random numbers using the TI82 calculator.
Systematic sampling is easier to do than random sampling. In systematic sampling, the list of
elements is "counted off". That is, every kth element is taken. This is similar to lining everyone
up and numbering off "1,2,3,4; 1,2,3,4; etc". When done numbering, all people numbered 4
would be used.
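The "every kth element" idea can be sketched in a few lines of Python (the function name is mine; the text does not prescribe an implementation):

```python
import random

def systematic_sample(population, k):
    """Take every kth element, starting from a random position among the first k."""
    start = random.randrange(k)
    return population[start::k]

names = [f"person{i}" for i in range(1, 21)]
print(systematic_sample(names, 4))  # 5 of the 20 names, evenly spaced
```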
Convenience sampling is very easy to do, but it's probably the worst technique to use. In
convenience sampling, readily available data is used. That is, the first people the surveyor runs
into.
Cluster sampling is accomplished by dividing the population into groups -- usually
geographically. These groups are called clusters or blocks. The clusters are randomly selected,
and each element in the selected clusters is used.
Stratified sampling also divides the population into groups called strata. However, this time it is
by some characteristic, not geographically. For instance, the population might be separated into
males and females. A sample is taken from each of these strata using either random, systematic,
or convenience sampling.
Chapter 2
Statistics: Frequency Distributions & Graphs
Definitions
Raw Data
Data collected in original form.
Frequency
The number of times a certain value or class of values occurs.
Frequency Distribution
The organization of raw data in table form with classes and frequencies.
Categorical Frequency Distribution
A frequency distribution in which the data is only nominal or ordinal.
Histogram
A graph which displays the data by using vertical bars of various heights to represent
frequencies. The horizontal axis can be either the class boundaries, the class marks, or the class
limits.
Frequency Polygon
A line graph. The frequency is placed along the vertical axis and the class midpoints are placed
along the horizontal axis. These points are connected with lines.
Ogive
A frequency polygon of the cumulative frequency or the relative cumulative frequency. The
vertical axis is the cumulative frequency or relative cumulative frequency. The horizontal axis is
the class boundaries. The graph always starts at zero at the lowest class boundary and will end up
at the total frequency (for a cumulative frequency) or 1.00 (for a relative cumulative frequency).
Pareto Chart
A bar graph for qualitative data with the bars arranged according to frequency.
Pie Chart
Graphical depiction of data as slices of a pie. The frequency determines the size of the slice. The
number of degrees in any slice is the relative frequency times 360 degrees.
Pictograph
A graph that uses pictures to represent data.
Stem and Leaf Plot
A data plot which uses part of the data value as the stem and the rest of the data value (the leaf)
to form groups or classes. This is very useful for sorting data quickly.
Statistics: Data Description
Definitions
Statistic
Characteristic or measure obtained from a sample
Parameter
Characteristic or measure obtained from a population
Mean
Sum of all the values divided by the number of values. This can either be a population mean
(denoted by mu) or a sample mean (denoted by x bar)
Median
The midpoint of the data after being ranked (sorted in ascending order). There are as many
numbers below the median as above the median.
Mode
The most frequent number
Skewed Distribution
The majority of the values lie together on one side with a very few values (the tail) to the other
side. In a positively skewed distribution, the tail is to the right and the mean is larger than the
median. In a negatively skewed distribution, the tail is to the left and the mean is smaller than the
median.
Symmetric Distribution
The data values are evenly distributed on both sides of the mean. In a symmetric distribution, the
mean is equal to the median.
Weighted Mean
The mean when each value is multiplied by its weight and summed. This sum is divided by the
total of the weights.
Midrange
The mean of the highest and lowest values. (Max + Min) / 2
Range
The difference between the highest and lowest values. Max - Min
Population Variance
The average of the squares of the distances from the population mean. It is the sum of the
squares of the deviations from the mean divided by the population size. The units on the variance
are the units of the population squared.
Sample Variance
Unbiased estimator of a population variance. Instead of dividing by the population size, the sum
of the squares of the deviations from the sample mean is divided by one less than the sample
size. The units on the variance are the units of the population squared.
Standard Deviation
The square root of the variance. The population standard deviation is the square root of the
population variance and the sample standard deviation is the square root of the sample variance.
The sample standard deviation is not the unbiased estimator for the population standard
deviation. The units on the standard deviation is the same as the units of the population/sample.
Coefficient of Variation
Standard deviation divided by the mean, expressed as a percentage. We won't work with the
Coefficient of Variation in this course.
Chebyshev's Theorem
The proportion of the values that fall within k standard deviations of the mean is at least
1 - 1/k^2, where k > 1. Chebyshev's theorem can be applied to any distribution regardless of its
shape.
Empirical or Normal Rule
Only valid when a distribution is bell-shaped (normal). Approximately 68% of values lie within 1
standard deviation of the mean; 95% within 2 standard deviations; and 99.7% within 3 standard
deviations of the mean.
Standard Score or Z-Score
The value obtained by subtracting the mean and dividing by the standard deviation. When all
values are transformed to their standard scores, the new mean (for Z) will be zero and the
standard deviation will be one.
Percentile
The percent of the population which lies below that value. The data must be ranked to find
percentiles.
Quartile
Either the 25th, 50th, or 75th percentiles. The 50th percentile is also called the median.
Decile
Either the 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, or 90th percentiles.
Lower Hinge
The median of the lower half of the numbers (up to and including the median). The lower hinge
is the first Quartile unless the remainder when dividing the sample size by four is 3.
Upper Hinge
The median of the upper half of the numbers (including the median). The upper hinge is the 3rd
Quartile unless the remainder when dividing the sample size by four is 3.
Box and Whiskers Plot (Box Plot)
A graphical representation of the minimum value, lower hinge, median, upper hinge, and
maximum. Some textbooks, and the TI-82 calculator, define the five values as the minimum, first
Quartile, median, third Quartile, and maximum.
Average could mean one of four things: the arithmetic mean, the median, the midrange, or the
mode. For this reason, it is better to specify which average you're talking about.
Mean
Population Mean: mu = (sum of x) / N
Sample Mean: x bar = (sum of x) / n
Frequency Distribution: x bar = (sum of f * x) / (sum of f)
The mean of a frequency distribution is also the weighted mean.
Median
The data must be ranked (sorted in ascending order) first. The median is the number in the
middle.
To find the depth of the median, there are several formulas that could be used, the one that we
will use is:
Depth of median = 0.5 * (n + 1)
Raw Data
The median is the number in the "depth of the median" position. If the sample size is even, the
depth of the median will be a decimal -- you need to find the midpoint between the numbers on
either side of the depth of the median.
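The depth formula can be sketched in Python (the function name is my own):

```python
def median(data):
    """Median of raw data via the depth formula: depth = 0.5 * (n + 1)."""
    ranked = sorted(data)
    n = len(ranked)
    depth = 0.5 * (n + 1)            # 1-based position of the median
    if depth == int(depth):           # odd sample size: depth is a whole number
        return ranked[int(depth) - 1]
    # even sample size: depth is a decimal, so take the midpoint of the
    # two values on either side of the depth
    return (ranked[n // 2 - 1] + ranked[n // 2]) / 2

print(median([3, 1, 4, 1, 5]))     # depth 3, so the 3rd ranked value: 3
print(median([3, 1, 4, 1, 5, 9]))  # depth 3.5, midpoint of 3 and 4: 3.5
```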
Ungrouped Frequency Distribution
Find the cumulative frequencies for the data. The first value with a cumulative frequency greater
than the depth of the median is the median. If the depth of the median is exactly 0.5 more than the
cumulative frequency of the previous class, then the median is the midpoint between the two
classes.
Grouped Frequency Distribution
Since the data is grouped, you have lost all original information. Some textbooks have you
simply take the midpoint of the class. This is an over-simplification which isn't the true value
(but much easier to do). The correct process is to interpolate.
Find out what proportion of the distance into the median class the median lies by dividing the
sample size by 2, subtracting the cumulative frequency of the previous class, and then dividing
all that by the frequency of the median class.
Multiply this proportion by the class width and add it to the lower boundary of the median class.
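The interpolation steps can be sketched as a Python function (the function name and the sample data are my own, for illustration only):

```python
def grouped_median(classes):
    """classes: list of (lower_boundary, upper_boundary, frequency) tuples.
    Interpolates the median within the median class."""
    n = sum(f for _, _, f in classes)
    cum = 0  # cumulative frequency of the classes before the median class
    for lower, upper, f in classes:
        if cum + f >= n / 2:  # this is the median class
            proportion = (n / 2 - cum) / f
            return lower + proportion * (upper - lower)
        cum += f

# hypothetical grouped data: class boundaries and frequencies
print(grouped_median([(0.5, 5.5, 4), (5.5, 10.5, 6), (10.5, 15.5, 2)]))
```

Here n = 12, so the median class is 5.5 - 10.5; the proportion is (6 - 4)/6 = 1/3, giving 5.5 + (1/3)(5) = 7.1667 (approx).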
Mode
The mode is the most frequent data value. There may be no mode if no one value appears more
than any other. There may also be two modes (bimodal), three modes (trimodal), or more than
three modes (multi-modal).
For grouped frequency distributions, the modal class is the class with the largest frequency.
Midrange
The midrange is simply the midpoint between the highest and lowest values.
Summary
The Mean is used in computing other statistics (such as the variance) and does not exist for open
ended grouped frequency distributions (1). It is often not appropriate for skewed distributions
such as salary information.
The Median is the center number and is good for skewed distributions because it is resistant to
change.
The Mode is used to describe the most typical case. The mode can be used with nominal data
whereas the others can't. The mode may or may not exist and there may be more than one value
for the mode (2).
The Midrange is not used very often. It is a very rough estimate of the average and is greatly
affected by extreme values (even more so than the mean).
Range
The range is the simplest measure of variation to find. It is simply the highest value
minus the lowest value.
RANGE = MAXIMUM - MINIMUM
Since the range only uses the largest and smallest values, it is greatly affected by
extreme values, that is - it is not resistant to change.
Variance
"Average Deviation"
The range only involves the smallest and largest numbers, and it would be desirable to
have a statistic which involved all of the data values.
The first attempt one might make at this is something they might call the average
deviation from the mean and define it as:
( sum of (x - x bar) ) / n
The problem is that this summation is always zero. So, the average deviation will
always be zero. That is why the average deviation is never used.
Population Variance
So, to keep it from being zero, the deviation from the mean is squared and called the
"squared deviation from the mean". This "average squared deviation from the mean"
is called the variance.
sigma^2 = ( sum of (x - mu)^2 ) / N
One would expect the sample variance to simply be the population variance with the
population mean replaced by the sample mean. However, one of the major uses of
statistics is to estimate the corresponding parameter. This formula has the problem
that the estimated value isn't the same as the parameter. To counteract this, the sum of
the squares of the deviations is divided by one less than the sample size.
s^2 = ( sum of (x - x bar)^2 ) / (n - 1)
Standard Deviation
There is a problem with variances. Recall that the deviations were squared. That
means that the units were also squared. To get the units back the same as the original
data values, the square root must be taken.
The sample standard deviation is not the unbiased estimator for the population
standard deviation.
The calculator does not have a variance key on it. It does have a standard deviation
key. You will have to square the standard deviation to find the variance.
What's wrong with the first formula, you ask? Consider the following example -- the
last row contains the totals for the columns.
x    x - mean   (x - mean)^2
4    -0.6       0.36
5     0.4       0.16
3    -1.6       2.56
6     1.4       1.96
5     0.4       0.16
23    0.0       5.20
Not too bad, you think. But this can get pretty bad if the sample mean doesn't happen
to be a "nice" rational number. Think about having a mean of 19/7 =
2.714285714285... Those subtractions get nasty, and when you square them, they're
really bad. Another problem with the first formula is that it requires you to know the
mean ahead of time. For a calculator, this would mean that you have to save all of the
numbers that were entered. The TI-82 does this, but most scientific calculators don't.
Now, let's consider the shortcut formula. The only things that you need to find are the
sum of the values and the sum of the values squared. There is no subtraction and no
decimals or fractions until the end. The last row contains the sums of the columns, just
like before.
1. Record each number in the first column and the square of each number in the
second column.
2. Total the first column: 23
3. Total the second column: 111
4. Compute the sum of squares: 111 - 23*23/5 = 111 - 105.8 = 5.2
5. Divide the sum of squares by one less than the sample size to get the variance =
5.2 / 4 = 1.3
x x^2
4 16
5 25
3 9
6 36
5 25
23 111
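The shortcut computation above can be checked with a short Python sketch (the function name is my own):

```python
def sample_variance(data):
    """Shortcut formula: (sum of x^2 - (sum of x)^2 / n) / (n - 1)."""
    n = len(data)
    sum_x = sum(data)                     # total of the first column
    sum_x2 = sum(x * x for x in data)     # total of the second column
    return (sum_x2 - sum_x * sum_x / n) / (n - 1)

print(sample_variance([4, 5, 3, 6, 5]))  # (111 - 23*23/5) / 4 = 1.3
```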
Chebyshev's Theorem
The proportion of the values that fall within k standard deviations of the mean will be
at least 1 - 1/k^2, where k > 1.
Chebyshev's Theorem is true for any sample set, no matter what the distribution.
Empirical Rule
The empirical rule is only valid for bell-shaped (normal) distributions. The following
statements are true.
Approximately 68% of the data values fall within one standard deviation of the
mean.
Approximately 95% of the data values fall within two standard deviations of
the mean.
Approximately 99.7% of the data values fall within three standard deviations of
the mean.
The empirical rule will be revisited later in the chapter on normal probabilities.
Standard Scores (Z-Scores)
The mean of the standard scores is zero and the standard deviation is 1. This is the
nice feature of the standard score -- no matter what the original scale was, when the
data is converted to its standard score, the mean is zero and the standard deviation is
1.
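This property can be demonstrated with a small Python sketch (the function name is mine; it uses the sample standard deviation):

```python
def z_scores(data):
    """Transform each value to its standard score: (x - mean) / standard deviation."""
    n = len(data)
    mean = sum(data) / n
    sd = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5  # sample sd
    return [(x - mean) / sd for x in data]

z = z_scores([4, 5, 3, 6, 5])
print(sum(z) / len(z))  # mean of the z-scores is 0 (up to rounding error)
```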
Percentiles
The kth percentile is the number which has k% of the values below it. The data must
be ranked.
1. Rank the data.
2. Multiply the sample size by k and divide by 100 to find the position of the percentile.
3. If this position is a whole number, the percentile is midway between the value in that
position and the value in the next higher position; otherwise, round the position up and
take the value in that position.
It is sometimes easier to count from the high end rather than counting from the low
end. For example, the 80th percentile is the number which has 80% below it and 20%
above it. Rather than counting 80% from the bottom, count 20% from the top.
If you wish to find the percentile for a number x (rather than locating the kth percentile),
then
percentile = (number of values below x + 0.5) / n * 100
Deciles (10 regions)
The percentiles divide the data into 100 equal regions. The deciles divide the data into
10 equal regions. The instructions are the same for finding a percentile, except instead
of dividing by 100 in step 2, divide by 10.
Quartiles (4 regions)
The quartiles divide the data into 4 equal regions. Instead of dividing by 100 in step 2,
divide by 4.
Note: The 2nd quartile is the same as the median. The 1st quartile is the 25th percentile,
the 3rd quartile is the 75th percentile.
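The standard textbook locating procedure (compute the position c = k * n / 100; if c is whole, average two positions, otherwise round up) can be sketched in Python. The function name and the data are my own:

```python
import math

def kth_percentile(data, k):
    """Locate the kth percentile in ranked data."""
    ranked = sorted(data)
    c = k / 100 * len(ranked)         # position of the percentile
    if c == int(c):                   # whole number: average two positions
        c = int(c)
        return (ranked[c - 1] + ranked[c]) / 2
    return ranked[math.ceil(c) - 1]   # otherwise round up

data = [2, 3, 5, 6, 8, 10, 12, 15, 18, 20]
print(kth_percentile(data, 25))  # c = 2.5, round up to position 3: 5
print(kth_percentile(data, 50))  # c = 5, midpoint of 8 and 10: 9.0
```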
The quartiles are commonly used (much more so than the percentiles or deciles). The
TI-82 calculator will find the quartiles for you. Some textbooks include the quartiles
in the five number summary.
Hinges
The lower hinge is the median of the lower half of the data, up to and including the
median. The upper hinge is the median of the upper half of the data, including the
median.
The hinges are the same as the quartiles unless the remainder when dividing the
sample size by four is three (like 39 / 4 = 9 R 3).
The statement about the lower half or upper half including the median tends to be
confusing to some students. If the median is split between two values (which happens
whenever the sample size is even), the median isn't included in either since the median
isn't actually part of the data.
For example, with a sample size of 20, the median will be in position 10.5. The lower
half is positions 1 - 10 and the upper half is positions 11 - 20. The lower hinge is the
median of the lower half and would be in position 5.5. The upper hinge is the median
of the upper half and would be in position 5.5 starting with original position 11 as
position 1 -- this is the original position 15.5.
With a sample size of 21, the median is in position 11. The lower half is positions 1 - 11
and the upper half is positions 11 - 21. The lower hinge is the median of the lower half
and would be in position 6. The upper hinge is the median of the upper half and would
be in position 6 when starting at position 11 -- this is original position 16.
Box and Whiskers Plot
A graphical representation of the five number summary. A box is drawn between the
lower and upper hinges with a line at the median. Whiskers (a single line, not a box)
extend from the hinges to lines at the minimum and maximum values.
Outliers
Outliers are extreme values. There are mild outliers and extreme outliers. The Bluman
text does not distinguish between mild outliers and extreme outliers and just treats
either as an outlier.
Extreme Outliers
Extreme outliers are any data values which lie more than 3.0 times the interquartile
range below the first quartile or above the third quartile. x is an extreme outlier if ...
x < Q1 - 3 * IQR
or
x > Q3 + 3 * IQR
Mild Outliers
Mild outliers are any data values which lie between 1.5 times and 3.0 times the
interquartile range below the first quartile or above the third quartile. x is a mild
outlier if ...
Q1 - 3 * IQR <= x < Q1 - 1.5 * IQR
or
Q3 + 1.5 * IQR < x <= Q3 + 3 * IQR
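The two fences can be coded directly (the function name and quartile values are my own, for illustration):

```python
def classify_outlier(x, q1, q3):
    """Classify x as 'extreme', 'mild', or 'not an outlier' using IQR fences."""
    iqr = q3 - q1
    if x < q1 - 3 * iqr or x > q3 + 3 * iqr:
        return "extreme"
    if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr:
        return "mild"
    return "not an outlier"

# with Q1 = 10, Q3 = 16: IQR = 6, mild fence at 25, extreme fence at 34
print(classify_outlier(40, q1=10, q3=16))  # beyond 34: extreme
print(classify_outlier(30, q1=10, q3=16))  # between 25 and 34: mild
```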
Chapter 4
Definitions
Factorial
A positive integer factorial is the product of each natural number up to and
including the integer.
Permutation
An arrangement of objects in a specific order.
Combination
A selection of objects without regard to order.
Tree Diagram
A graphical device used to list all possibilities of a sequence of events in a
systematic way.
Fundamental Theorems
Arithmetic
Every integer greater than one is either prime or can be expressed as a unique
product of prime numbers.
Algebra
Every polynomial function in one variable of degree n > 0 has at least one real or
complex zero.
Counting Principle
In a sequence of events, the total possible number of ways all events can be performed
is the product of the possible number of ways each individual event can be performed.
Factorials
If n is a positive integer, then
n! = n (n-1) (n-2) ... (3)(2)(1)
n! = n (n-1)!
A special case is 0!
0! = 1
Permutations
A permutation is an arrangement of objects without repetition where order is
important.
A permutation of n objects, arranged into one group of size n, without repetition, and
order being important is:
n Pn = P(n,n) = n!
n Pr = P(n,r) = n! / (n-r)!
Assuming that you start at n and count down to 1 in your factorials, the permutation is
the first r factors of n factorial.
Distinguishable Permutations
Sometimes letters are repeated and all of the permutations aren't distinguishable from
each other.
If a word has N letters, k of which are unique, and you let n1, n2, n3, ..., nk be the
frequency of each of the k unique letters, then the total number of distinguishable
permutations is given by:
N! / ( n1! * n2! * ... * nk! )
For the word STATISTICS, here are the frequencies of each letter: S=3, T=3, A=1, I=2,
C=1; there are 10 letters total.
                     10!       10*9*8*7*6*5*4*3*2*1
Permutations = -------------- = -------------------- = 50400
               3! 3! 1! 2! 1!    6 * 6 * 1 * 2 * 1
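The 50400 result can be checked with a short Python sketch (the function name is my own):

```python
from math import factorial

def distinguishable_permutations(word):
    """N! divided by the product of the factorials of each letter's frequency."""
    result = factorial(len(word))
    for letter in set(word):
        result //= factorial(word.count(letter))
    return result

print(distinguishable_permutations("STATISTICS"))  # 50400
```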
Combinations
A combination is an arrangement of objects without repetition where order is not
important.
Note: The difference between a permutation and a combination is not whether there is
repetition or not -- there must not be repetition with either, and if there is repetition,
you can not use the formulas for permutations or combinations. The only difference
in the definition of a permutation and a combination is whether order is
important.
n Cr = C(n,r) = n! / ( (n-r)! * r! )
Assuming that you start at n and count down to 1 in your factorials, the combination is
the first r factors of n! divided by r!.
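Python's math module (3.8 and later) provides these counting functions directly; a quick sketch with n = 6, r = 2:

```python
import math

n, r = 6, 2
print(math.factorial(n))  # 6! = 720
print(math.perm(n, r))    # 6! / 4! = 30
print(math.comb(n, r))    # 6! / (4! * 2!) = 15
```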
Pascal's Triangle
Combinations are used in the binomial expansion theorem from algebra to give the
coefficients of the expansion (a+b)^n. They also form a pattern known as Pascal's
Triangle.
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
Each element in the table is the sum of the two elements directly above it. Each
element is also a combination. The n value is the number of the row (start counting at
zero) and the r value is the element in the row (start counting at zero). That would
make the 20 in the next to last row C(6,3) -- it's in row #6 (7th row) and position #3
(4th element).
Symmetry
Since combinations are symmetric, if n-r is smaller than r, then switch the
combination to its alternative form and then use the shortcut given above.
TI-82
You can use the TI-82 graphing calculator to find factorials, permutations, and
combinations.
Tree Diagrams
Tree diagrams are a graphical way of listing all the possible
outcomes. The outcomes are listed in an orderly fashion, so
listing all of the possible outcomes is easier than just trying
to make sure that you have them all listed. It is called a tree
diagram because of the way it looks.
The first event appears on the left, and then each sequential
event is represented as branches off of the first event.
A tree diagram can show the possible ways
of flipping two coins. The final outcomes are obtained by
following each branch to its conclusion. They are, from top to bottom:
HH HT TH TT
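The leaves of the tree can be generated systematically with itertools (a sketch, not part of the original notes):

```python
from itertools import product

# All outcomes of flipping two coins -- the leaves of the tree diagram
outcomes = ["".join(flip) for flip in product("HT", repeat=2)]
print(outcomes)  # ['HH', 'HT', 'TH', 'TT']
```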
Chapter 5
Stats: Probability
Definitions
Probability Experiment
Process which leads to well-defined results called outcomes
Outcome
The result of a single trial of a probability experiment
Sample Space
Set of all possible outcomes of a probability experiment
Event
One or more outcomes of a probability experiment
Classical Probability
Uses the sample space to determine the numerical probability that an event will
happen. Also called theoretical probability.
Equally Likely Events
Events which have the same probability of occurring.
Complement of an Event
All the events in the sample space except the given event.
Empirical Probability
Uses a frequency distribution to determine the numerical probability. An
empirical probability is a relative frequency.
Subjective Probability
Uses probability values based on an educated guess or estimate. It employs
opinions and inexact information.
Mutually Exclusive Events
Two events which cannot happen at the same time.
Disjoint Events
Another name for mutually exclusive events.
Independent Events
Two events are independent if the occurrence of one does not affect the
probability of the other occurring.
Dependent Events
Two events are dependent if the first event affects the outcome or occurrence of
the second event in such a way that the probability is changed.
Conditional Probability
The probability of an event occurring given that another event has already
occurred.
Bayes' Theorem
A formula which allows one to find the probability that an event occurred as
the result of a particular previous event.
Sample Spaces
A sample space is the set of all possible outcomes. However, some sample spaces are
better than others.
Consider the experiment of flipping two coins. It is possible to get 0 heads, 1 head, or
2 heads. Thus, the sample space could be {0, 1, 2}. Another way to look at it is the
outcome of each flip: { HH, HT, TH, TT }. The second way is better because each
event is as likely to occur as any other.
When writing the sample space, it is highly desirable to have events which are equally
likely.
Another example is rolling two dice. The sums are { 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 }.
However, these aren't all equally likely. The only way to get a sum of 2 is to roll a 1
on both dice, but you can get a sum of 4 by rolling a 1-3, 2-2, or 3-1. The following
table illustrates a better sample space for the sum obtained when rolling two dice.
Second Die
First Die 1 2 3 4 5 6
1 2 3 4 5 6 7
2 3 4 5 6 7 8
3 4 5 6 7 8 9
4 5 6 7 8 9 10
5 6 7 8 9 10 11
6 7 8 9 10 11 12
Classical Probability
The above table lends itself to describing data another way -- using a probability
distribution. Let's consider the frequency distribution for the above sums.
Sum   Frequency   Relative Frequency
2     1           1/36
3     2           2/36
4     3           3/36
5     4           4/36
6     5           5/36
7     6           6/36
8     5           5/36
9     4           4/36
10    3           3/36
11    2           2/36
12    1           1/36
If just the first and last columns were written, we would have a probability
distribution. The relative frequency of a frequency distribution is the probability of the
event occurring. This is only true, however, if the events are equally likely.
This gives us the formula for classical probability. The probability of an event
occurring is the number in the event divided by the number in the sample space.
Again, this is only true when the events are equally likely. A classical probability is
the relative frequency of each event in the sample space when each event is equally
likely.
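The classical-probability formula can be sketched from the two-dice sample space (the helper name `p` is my own; exact fractions avoid rounding):

```python
from fractions import Fraction
from itertools import product

# 36 equally likely outcomes of rolling two dice, recorded as sums
sample_space = [a + b for a, b in product(range(1, 7), repeat=2)]

def p(event):
    """Classical probability: outcomes in the event / outcomes in the sample space."""
    return Fraction(sum(1 for s in sample_space if s in event), len(sample_space))

print(p({7}))      # 6/36 = 1/6
print(p({2, 12}))  # 2/36 = 1/18
```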
Empirical Probability
Empirical probability is based on observation. The empirical probability of an event is
the relative frequency of a frequency distribution based upon observation.
P(E) = f / n
Probability Rules
There are two rules which are very important.
The probability of any event which is not in the sample space is zero.
The probability of an event not occurring is one minus the probability of it occurring.
P(E') = 1 - P(E)
Chapter 6
Definitions
Random Variable
Variable whose values are determined by chance
Probability Distribution
The values a random variable can assume and the corresponding probabilities
of each.
Expected Value
The theoretical mean of the variable.
Binomial Experiment
An experiment with a fixed number of independent trials. Each trial can only
have two outcomes, or outcomes which can be reduced to two outcomes. The
probability of each outcome must remain constant from trial to trial.
Binomial Distribution
The outcomes of a binomial experiment with their corresponding probabilities.
Multinomial Distribution
A probability distribution resulting from an experiment with a fixed number of
independent trials. Each trial has two or more mutually exclusive outcomes.
The probability of each outcome must remain constant from trial to trial.
Poisson Distribution
A probability distribution used when a density of items is distributed over a
period of time. The sample size needs to be large and the probability of success
to be small.
Hypergeometric Distribution
A probability distribution of a variable with two outcomes when sampling is
done without replacement.
Probability Functions
A probability function is a function which assigns probabilities to the values of a
random variable.
Two conditions must be met: each probability must be between 0 and 1 inclusive, and
the sum of all the probabilities must be 1.
If these two conditions aren't met, then the function isn't a probability function. There
is no requirement that the values of the random variable only be between 0 and 1, only
that the probabilities be between 0 and 1.
Probability Distributions
A listing of all the values the random variable can assume with their corresponding
probabilities make a probability distribution.
A note about random variables. A random variable does not mean that the values can
be anything (a random number). Random variables have a well defined set of
outcomes and well defined probabilities for the occurrence of each outcome. The
random refers to the fact that the outcomes happen by chance -- that is, you don't
know which outcome will occur next.
Here's an example probability distribution that results from the rolling of a single fair
die.
x       1    2    3    4    5    6    sum
p(x)    1/6  1/6  1/6  1/6  1/6  1/6  6/6 = 1
x p(x)  1/6  2/6  3/6  4/6  5/6  6/6  21/6 = 3.5
The definitions for population mean and variance used with an ungrouped frequency
distribution were:
mu = ( sum of f * x ) / N
sigma^2 = ( sum of f * x^2 ) / N - ( ( sum of f * x ) / N )^2
Some of you might be confused by only dividing by N. Recall that this is the
population variance; the sample variance, which was the unbiased estimator for the
population variance, was divided by n-1.
Recall that a probability is a long term relative frequency. So every f/N can be
replaced by p(x). This simplifies the formulas to be:
mu = sum of ( x * p(x) )
sigma^2 = sum of ( x^2 * p(x) ) - ( sum of ( x * p(x) ) )^2
What's even better, is that the last portion of the variance is the mean squared. So, the
two formulas that we will be using are:
mu = sum of ( x * p(x) )
sigma^2 = sum of ( x^2 * p(x) ) - mu^2
x        1    2    3    4     5     6     sum
x^2 p(x) 1/6  4/6  9/6  16/6  25/6  36/6  91/6 = 15.1667
Do not use rounded off values in the intermediate calculations. Only round off the
final answer.
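Following the advice above about not rounding intermediate values, exact fractions can be used to compute the mean and variance of the die (a sketch, not from the text):

```python
from fractions import Fraction

# probability distribution for one fair die
dist = {x: Fraction(1, 6) for x in range(1, 7)}

mean = sum(x * p for x, p in dist.items())                    # sum of x * p(x)
variance = sum(x**2 * p for x, p in dist.items()) - mean**2   # minus mean squared

print(mean)      # 21/6 = 7/2 = 3.5
print(variance)  # 91/6 - 49/4 = 35/12 (approx 2.9167)
```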
Binomial Experiment
The fact that each trial is independent actually means that the probabilities remain
constant.
There are five things you need to do to work a binomial story problem.
1. Define Success first. Success must be for a single trial. Success = "Rolling a 6 on
a single die"
2. Define the probability of success (p): p = 1/6
3. Find the probability of failure: q = 5/6
4. Define the number of trials: n = 6
5. Define the number of successes out of those trials: x = 2
Anytime a six appears, it is a success (denoted S) and anytime something else appears,
it is a failure (denoted F). The ways you can get exactly 2 successes in 6 trials are
given below. The probability of each is written to the right of the way it could occur.
Because the trials are independent, the probability of the event (all six dice) is the
product of each probability of each outcome (die)
1 FFFFSS 5/6 * 5/6 * 5/6 * 5/6 * 1/6 * 1/6 = (1/6)^2 * (5/6)^4
2 FFFSFS 5/6 * 5/6 * 5/6 * 1/6 * 5/6 * 1/6 = (1/6)^2 * (5/6)^4
3 FFFSSF 5/6 * 5/6 * 5/6 * 1/6 * 1/6 * 5/6 = (1/6)^2 * (5/6)^4
4 FFSFFS 5/6 * 5/6 * 1/6 * 5/6 * 5/6 * 1/6 = (1/6)^2 * (5/6)^4
5 FFSFSF 5/6 * 5/6 * 1/6 * 5/6 * 1/6 * 5/6 = (1/6)^2 * (5/6)^4
6 FFSSFF 5/6 * 5/6 * 1/6 * 1/6 * 5/6 * 5/6 = (1/6)^2 * (5/6)^4
7 FSFFFS 5/6 * 1/6 * 5/6 * 5/6 * 5/6 * 1/6 = (1/6)^2 * (5/6)^4
8 FSFFSF 5/6 * 1/6 * 5/6 * 5/6 * 1/6 * 5/6 = (1/6)^2 * (5/6)^4
9 FSFSFF 5/6 * 1/6 * 5/6 * 1/6 * 5/6 * 5/6 = (1/6)^2 * (5/6)^4
10 FSSFFF 5/6 * 1/6 * 1/6 * 5/6 * 5/6 * 5/6 = (1/6)^2 * (5/6)^4
11 SFFFFS 1/6 * 5/6 * 5/6 * 5/6 * 5/6 * 1/6 = (1/6)^2 * (5/6)^4
12 SFFFSF 1/6 * 5/6 * 5/6 * 5/6 * 1/6 * 5/6 = (1/6)^2 * (5/6)^4
13 SFFSFF 1/6 * 5/6 * 5/6 * 1/6 * 5/6 * 5/6 = (1/6)^2 * (5/6)^4
14 SFSFFF 1/6 * 5/6 * 1/6 * 5/6 * 5/6 * 5/6 = (1/6)^2 * (5/6)^4
15 SSFFFF 1/6 * 1/6 * 5/6 * 5/6 * 5/6 * 5/6 = (1/6)^2 * (5/6)^4
Notice that each of the 15 probabilities is exactly the same: (1/6)^2 * (5/6)^4.
Also, note that the 1/6 is the probability of success and you needed 2 successes. The
5/6 is the probability of failure, and if 2 of the 6 trials were successes, then 4 of the 6
must be failures. Note that 2 is the value of x and 4 is the value of n-x.
Further note that there are fifteen ways this can occur. This is the number of ways 2
successes can occur in 6 trials without repetition and with order not being important,
or a combination of 6 things, 2 at a time.
A coin is tossed 10 times. What is the probability that exactly 6 heads will occur?
Example:
Find the mean, variance, and standard deviation for the number of sixes that appear
when rolling 30 dice.
The mean is 30 * (1/6) = 5. The variance is 30 * (1/6) * (5/6) = 25/6. The standard
deviation is the square root of the variance = 2.041241452 (approx)
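A quick check of those three numbers, using the mean np, variance npq, and standard deviation sqrt(npq):

```python
from math import sqrt

# 30 dice, success = rolling a six
n, p = 30, 1 / 6
q = 1 - p

mu = n * p            # 30 * (1/6) = 5
var = n * p * q       # 30 * (1/6) * (5/6) = 25/6, approximately 4.1667
sigma = sqrt(var)     # approximately 2.0412
print(mu, round(sigma, 4))
```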
Multinomial Probabilities
A multinomial experiment is an extension of the binomial experiment in which there
are more than two possible outcomes. However, there are still a fixed number of
independent trials, and the probability of each outcome must remain constant from
trial to trial.
Instead of using a combination, as in the case of the binomial probability, the number
of ways the outcomes can occur is done using distinguishable permutations.
The probability that a person will pass a College Algebra class is 0.55, the probability
that a person will withdraw before the class is completed is 0.40, and the probability
that a person will fail the class is 0.05. Find the probability that in a class of 30
students, exactly 16 pass, 12 withdraw, and 2 fail.
Outcome x p(outcome)
Pass 16 0.55
Withdraw 12 0.40
Fail 2 0.05
Total 30 1.00
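The multinomial probability is the distinguishable-permutation count times the product of each probability raised to its count. A sketch in Python with the numbers from the table above:

```python
from math import factorial

# distinguishable permutations of 30 trials: 16 pass, 12 withdraw, 2 fail
ways = factorial(30) // (factorial(16) * factorial(12) * factorial(2))

# multiply by each outcome probability raised to its count
prob = ways * 0.55**16 * 0.40**12 * 0.05**2
print(round(prob, 4))   # approximately 0.0389
```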
Poisson Probabilities
Named after the French mathematician Simeon Poisson, Poisson probabilities are
useful when there are a large number of independent trials with a small probability of
success on a single trial and the variables occur over a period of time. It can also be
used when a density of items is distributed over a given area or volume.
Example:
If there are 500 customers per eight-hour day in a check-out lane, what is the
probability that there will be exactly 3 in line during any five-minute period?
The expected value during any one five minute period would be 500 / 96 =
5.2083333. The 96 is because there are 96 five-minute periods in eight hours. So, you
expect about 5.2 customers in 5 minutes and want to know the probability of getting
exactly 3.
p(3;500/96) = e^(-500/96) * (500/96)^3 / 3! = 0.1288 (approx)
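That Poisson formula can be evaluated directly; a minimal sketch using only the numbers from the example:

```python
from math import exp, factorial

lam = 500 / 96                 # expected customers per five-minute period
x = 3                          # want exactly 3 in line

# Poisson: p(x; lam) = e^(-lam) * lam^x / x!
prob = exp(-lam) * lam**x / factorial(x)
print(round(prob, 4))          # approximately 0.1288
```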
Hypergeometric Probabilities
Hypergeometric experiments occur when the trials are not independent of each other
and occur due to sampling without replacement -- as in a five card poker hand.
Example:
How many ways can 3 men and 4 women be selected from a group of 7 men and 10
women?
Note that the sums of the numbers in the numerator (7 + 10 = 17 people and 3 + 4 = 7
selected) are the numbers used in the combination in the denominator.
This can be extended to more than two groups, in which case it is called an extended
hypergeometric problem.
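The counts, and the resulting probability of such a selection when 7 people are chosen at random from all 17 (the denominator combination described above), can be sketched as:

```python
from math import comb

# 3 of 7 men and 4 of 10 women
ways = comb(7, 3) * comb(10, 4)      # 35 * 210 = 7350

# probability if 7 of the 17 people are selected at random
prob = ways / comb(17, 7)
print(ways, round(prob, 4))
```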
Chapter 7
Definitions
Central Limit Theorem
Theorem which states that as the sample size increases, the sampling distribution of
the sample means will become approximately normally distributed.
Correction for Continuity
A correction applied to convert a discrete distribution to a continuous
distribution.
Finite Population Correction Factor
A correction applied to the standard error of the means when the sample size is
more than 5% of the population size and the sampling is done without
replacement.
Sampling Distribution of the Sample Means
Distribution obtained by using the means computed from random samples of a
specific size.
Sampling Error
Difference which occurs between the sample statistic and the population
parameter due to the fact that the sample isn't a perfect representation of the
population.
Standard Error of the Mean
The standard deviation of the sampling distribution of the sample means. It is
equal to the standard deviation of the population divided by the square root of
the sample size.
Standard Normal Distribution
A normal distribution in which the mean is 0 and the standard deviation is 1. It
is denoted by z.
Z-score
Also known as z-value. A standardized score in which the mean is zero and the
standard deviation is 1. The Z score is used to represent the standard normal
distribution.
Mean is zero
Variance is one
Standard Deviation is one
Data values represented by z.
Normal Probabilities
Comprehension of this table is vital to success in the course!
There is a table which must be used to look up standard normal probabilities. The
z-score is broken into two parts: the whole number and tenths digit are looked up
along the left side, and the hundredths digit is looked up across the top. The value at
the intersection of the row and column is the area under the curve between zero and
the z-score looked up.
Because of the symmetry of the normal distribution, look up the absolute value of any
z-score.
There are several different situations that can arise when asked to find normal
probabilities.
Situation                            Instructions
Between zero and any number          Look up the area in the table
Between two positives, or            Look up both areas in the table and
  between two negatives              subtract the smaller from the larger
Between a negative and a positive    Look up both areas in the table and
                                     add them together
Less than a negative, or             Look up the area in the table and
  greater than a positive            subtract from 0.5000
Greater than a negative, or          Look up the area in the table and
  less than a positive               add to 0.5000
1. If there is only one z-score given, use 0.5000 for the second area, otherwise
look up both z-scores in the table
2. If the two numbers are the same sign, then subtract; if they are different signs,
then add. If there is only one z-score, then use the inequality to determine the
second sign (< is negative, and > is positive).
This is more difficult, and requires you to use the table inversely. You must look up
the area between zero and the value on the inside part of the table, and then read the z-
score from the outside. Finally, decide if the z-score should be positive or negative,
based on whether it was on the left side or the right side of the mean. Remember, z-
scores can be negative, but areas or probabilities cannot be.
Situation                            Instructions
Area between 0 and a value           Look up the area in the table.
                                     Make negative if on the left side.
Area in one tail                     Subtract the area from 0.5000.
                                     Look up the difference in the table.
                                     Make negative if in the left tail.
Area including one complete half     Subtract 0.5000 from the area.
(less than a positive or             Look up the difference in the table.
greater than a negative)             Make negative if on the left side.
Within z units of the mean           Divide the area by 2.
                                     Look up the quotient in the table.
                                     Use both the positive and negative z-scores.
Two tails with equal area            Subtract the area from 1.0000.
(more than z units from the mean)    Divide the area by 2.
                                     Look up the quotient in the table.
                                     Use both the positive and negative z-scores.
You will become proficient with the table through practice, so work lots of the normal
probability problems!
z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
2.5 0.4938 0.4940 0.4941 0.4943 0.4945 0.4946 0.4948 0.4949 0.4951 0.4952
2.6 0.4953 0.4955 0.4956 0.4957 0.4959 0.4960 0.4961 0.4962 0.4963 0.4964
2.7 0.4965 0.4966 0.4967 0.4968 0.4969 0.4970 0.4971 0.4972 0.4973 0.4974
2.8 0.4974 0.4975 0.4976 0.4977 0.4977 0.4978 0.4979 0.4979 0.4980 0.4981
2.9 0.4981 0.4982 0.4982 0.4983 0.4984 0.4984 0.4985 0.4985 0.4986 0.4986
3.0 0.4987 0.4987 0.4987 0.4988 0.4988 0.4989 0.4989 0.4989 0.4990 0.4990
The values in the table are the areas between zero and the z-score. That is,
P(0<Z<z-score)
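Those table values can be reproduced numerically. Python's standard library exposes the error function, and P(0 < Z < z) = erf(z / sqrt(2)) / 2; a small sketch that matches the rows above:

```python
from math import erf, sqrt

def area_0_to_z(z):
    """Area under the standard normal curve between 0 and z, i.e. P(0 < Z < z)."""
    return erf(z / sqrt(2)) / 2

print(round(area_0_to_z(2.50), 4))   # 0.4938, matching the table row for z = 2.5
print(round(area_0_to_z(3.00), 4))   # 0.4987, matching the row for z = 3.0
```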
When all of the possible sample means are computed, then the following properties
are true:
The mean of the sample means will be the mean of the population
The variance of the sample means will be the variance of the population
divided by the sample size.
The standard deviation of the sample means (known as the standard error of
the mean) will be smaller than the population standard deviation and will be
equal to the standard deviation of the population divided by the square root
of the sample size.
If the population has a normal distribution, then the sample means will have a
normal distribution.
If the population is not normally distributed, but the sample size is sufficiently
large, then the sample means will have an approximately normal distribution.
Some books define sufficiently large as at least 30 and others as at least 31.
The formula for a z-score when working with the sample means is:
z = ( x-bar - mu ) / ( sigma / sqrt(n) )
In the following, N is the population size and n is the sample size. The adjustment is
to multiply the standard error by the square root of the quotient of the difference
between the population and sample sizes and one less than the population size:
standard error = ( sigma / sqrt(n) ) * sqrt( (N - n) / (N - 1) )
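As a sketch of the finite population correction in use, with hypothetical numbers chosen so the sample is more than 5% of the population (the mu, sigma, N, n, and x-bar values below are invented for illustration):

```python
from math import sqrt

# hypothetical: population N = 400, sample n = 36 (9% of N, so correct),
# mu = 50, sigma = 10, observed sample mean x_bar = 52
N, n = 400, 36
mu, sigma, x_bar = 50, 10, 52

se = sigma / sqrt(n)                  # uncorrected standard error of the mean
fpc = sqrt((N - n) / (N - 1))         # finite population correction factor
z = (x_bar - mu) / (se * fpc)         # z-score using the corrected standard error
print(round(z, 4))
```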
Recall that according to the Central Limit Theorem, the sample mean of any
distribution will become approximately normal if the sample size is sufficiently large.
The correction is to either add or subtract 0.5 of a unit from each discrete x-value.
This fills in the gaps to make it continuous. This is very similar to expanding of limits
to form boundaries that we did with group frequency distributions.
Examples
Discrete     Continuous
x = 6        5.5 < x < 6.5
x > 6        x > 6.5
x >= 6       x > 5.5
x < 6        x < 5.5
x <= 6       x < 6.5
As you can see, whether or not the equal to is included makes a big difference in the
discrete distribution and the way the conversion is performed. However, for a
continuous distribution, equality makes no difference.
1. Identify success, the probability of success, the number of trials, and the
desired number of successes. Since this is a binomial problem, these are the
same things which were identified when working a binomial problem.
2. Convert the discrete x to a continuous x. Some people would argue that step 3
should be done before this step, but go ahead and convert the x before you
forget about it and miss the problem.
3. Find the smaller of np or nq. If the smaller one is at least five, then the larger
must also be, so the approximation will be considered good. When you find np,
you're actually finding the mean, mu, so denote it as such.
4. Find the standard deviation, sigma = sqrt (npq). It might be easier to find the
variance and just stick the square root in the final calculation - that way you
don't have to work with all of the decimal places.
5. Compute the z-score using the standard formula for an individual score (not the
one for a sample mean).
6. Calculate the probability desired.
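The six steps above can be sketched end to end. The problem below is hypothetical (a fair coin tossed 100 times, asking for P(x >= 60)); the normal area is computed with the error function rather than the printed table:

```python
from math import erf, sqrt

# Step 1: success = heads, p = 0.5, n = 100 tosses, want x >= 60  (hypothetical)
n, p = 100, 0.5
q = 1 - p

# Step 2: continuity correction -- the discrete x >= 60 becomes x > 59.5
x_cont = 59.5

# Step 3: np = 50 and nq = 50, both at least 5, so the approximation is good
mu = n * p
# Step 4: sigma = sqrt(npq)
sigma = sqrt(n * p * q)

# Step 5: z-score for an individual value
z = (x_cont - mu) / sigma            # (59.5 - 50) / 5 = 1.9

# Step 6: desired probability is the area to the right of z
prob = 0.5 - erf(z / sqrt(2)) / 2
print(round(prob, 4))                # approximately 0.0287
```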
Chapter 8
Stats: Estimation
Definitions
Confidence Interval
An interval estimate with a specific level of confidence
Confidence Level
The percent of the time the true mean will lie in the interval estimate given.
Consistent Estimator
An estimator which gets closer to the value of the parameter as the sample size
increases.
Degrees of Freedom
The number of data values which are allowed to vary once a statistic has been
determined.
Estimator
A sample statistic which is used to estimate a population parameter. It must be
unbiased, consistent, and relatively efficient.
Interval Estimate
A range of values used to estimate a parameter.
Maximum Error of the Estimate
The maximum difference between the point estimate and the actual parameter.
The Maximum Error of the Estimate is one-half the width of the confidence
interval for means and proportions.
Point Estimate
A single value used to estimate a parameter.
Relatively Efficient Estimator
The estimator for a parameter with the smallest variance.
T distribution
A distribution used when the population variance is unknown.
Unbiased Estimator
An estimator whose expected value is the mean of the parameter being
estimated.
A statistic is calculated from the sample, and the parameter is inferred (or estimated)
from this sample statistic. Let me say that again: Statistics are calculated, parameters
are estimated.
We talked about problems of obtaining the value of the parameter earlier in the course
when we talked about sampling techniques.
Another area of inferential statistics is sample size determination. That is, how large
of a sample should be taken to make an accurate estimation. In these cases, the
statistics can't be used since the sample hasn't been taken yet.
Point Estimates
There are two types of estimates we will find: Point Estimates and Interval Estimates.
The point estimate is the single best value. A good estimator must satisfy three properties:
Unbiased: The expected value of the estimator must be equal to the mean of
the parameter
Consistent: The value of the estimator approaches the value of the parameter
as the sample size increases
Relatively Efficient: The estimator has the smallest variance of all estimators
which could be used
Confidence Intervals
The point estimate is going to be different from the population parameter because of
sampling error, and there is no way to know how close it is to the actual parameter.
For this reason, statisticians like to give an interval estimate, which is a range of
values used to estimate the parameter.
The maximum error of the estimate is denoted by E and is one-half the width of the
confidence interval. The basic confidence interval for a symmetric distribution is set
up to be: the point estimate minus the maximum error of the estimate is less than the
true population parameter, which is less than the point estimate plus the maximum
error of the estimate. That is,
point estimate - E < parameter < point estimate + E
This formula will work for means and proportions because they use the Z or t
distributions, which are symmetric. Later, we will talk about variances, which don't
use a symmetric distribution, and the formula will be different.
Area in Tails
Since the level of confidence is 1-alpha, the amount in the tails is alpha. There is a
notation in statistics which means the score which has the specified area in the right
tail.
Examples:
Z(0.05) = 1.645 (the Z-score which has 0.05 to the right, and 0.4500 between 0
and it)
Z(0.10) = 1.282 (the Z-score which has 0.10 to the right, and 0.4000 between 0
and it).
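Those two values can be reproduced with the standard library's inverse normal CDF (available as `statistics.NormalDist` in Python 3.8+), which is exactly the inverse-table lookup described here:

```python
from statistics import NormalDist

# Z(a) is the z-score with area a in the right tail, i.e. inv_cdf(1 - a)
z05 = NormalDist().inv_cdf(1 - 0.05)   # Z(0.05)
z10 = NormalDist().inv_cdf(1 - 0.10)   # Z(0.10)
print(round(z05, 3), round(z10, 3))    # 1.645 1.282
```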
As a shorthand notation, the () are usually dropped, and the probability is written as a
subscript. The Greek letter alpha is used to represent the area in both tails for a
confidence interval, and so alpha/2 will be the area in one tail.
Notice in the above table that the area between 0 and the z-score is simply one-half of
the confidence level. So, if there is a confidence level which isn't given above, all you
need to do to find it is divide the confidence level by two, then look up the area in the
inside part of the Z-table and read the z-score on the outside.
Also notice that if you look at the Student's t distribution, the top row is a level of
confidence, and the bottom row is the z-score. In fact, this is where I got the extra
digit of accuracy from.
You are estimating the population mean, mu, not the sample mean, x bar.
Once you have computed E, I suggest you save it to the memory on your calculator.
On the TI-82, a good choice would be the letter E. The reason for this is that the limits
for the confidence interval are now found by subtracting and adding the maximum
error of the estimate from/to the sample mean.
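As a sketch of that procedure for a mean with known sigma (the x-bar, sigma, and n values below are hypothetical), E is computed once and then subtracted from and added to the sample mean:

```python
from statistics import NormalDist
from math import sqrt

# hypothetical sample: x_bar = 75, known sigma = 12, n = 36, 95% confidence
x_bar, sigma, n = 75, 12, 36
alpha = 0.05

z = NormalDist().inv_cdf(1 - alpha / 2)   # z(alpha/2), approximately 1.96
E = z * sigma / sqrt(n)                   # maximum error of the estimate
lower, upper = x_bar - E, x_bar + E       # limits of the confidence interval
print(round(lower, 2), round(upper, 2))
```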
Student's t Distribution
When the population standard deviation is unknown, the mean has a Student's t
distribution. The Student's t distribution was developed by William S. Gosset, who
worked for an Irish brewery. The brewery wouldn't allow him to publish his work
under his own name, so he used the pseudonym "Student".
Degrees of Freedom
A degree of freedom occurs for every data value which is allowed to vary once a
statistic has been fixed. For a single mean, there are n-1 degrees of freedom. This
value will change depending on the statistic being used.
As before, the limits for the confidence interval are found by subtracting and adding
the maximum error of the estimate from/to the sample mean.
Notice the formula is the same as for a population mean when the population standard
deviation is known. The only thing that has changed is the formula for the maximum
error of the estimate.
The values in the t-table are the critical values for the given areas in the right tail
or in both tails.
All estimation done here is based on the fact that the normal distribution can be used
to approximate the binomial distribution when np and nq are both at least 5. Thus, the
p that we're talking about is the probability of success on a single trial from the
binomial experiment.
Recall:
z = ( x - np ) / sqrt( npq )
If the formula for z is divided by n in both the numerator and the denominator, then
z = ( p-hat - p ) / sqrt( pq / n )
Solving this for p to come up with a confidence interval gives the maximum error of
the estimate:
E = z(alpha/2) * sqrt( pq / n )
This is not, however, the formula that we will use. The problem with estimation is that
you don't know the value of the parameter (in this case p), so you can't use it to
estimate itself - if you knew it, then there would be no problem to work out. So we
will replace the parameter by the statistic in the formula for the maximum error of the
estimate.
When you're computing E, I suggest that you find the sample proportion, p hat, and
save it to P on the calculator. This way, you can find q as (1-p). Do NOT round the
value for p hat and use the rounded value in the calculations. This will lead to error.
Once you have computed E, I suggest you save it to the memory on your calculator.
On the TI-82, a good choice would be the letter E. The reason for this is that the limits
for the confidence interval are now found by subtracting and adding the maximum
error of the estimate from/to the sample proportion.
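A sketch of the proportion interval with hypothetical counts (120 successes in 400 trials, 95% confidence); note p-hat is never rounded before the final step, as the warning above requires:

```python
from statistics import NormalDist
from math import sqrt

# hypothetical sample: 120 successes in 400 trials, 95% confidence
x, n = 120, 400
p_hat = x / n                              # keep full precision
q_hat = 1 - p_hat

z = NormalDist().inv_cdf(0.975)            # z(alpha/2) for alpha = 0.05
E = z * sqrt(p_hat * q_hat / n)            # statistic replaces the parameter here
lower, upper = p_hat - E, p_hat + E
print(round(lower, 4), round(upper, 4))
```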
The sample size determination formulas come from the formulas for the maximum
error of the estimates. The formula is solved for n. Be sure to round the answer
obtained up to the next whole number, not off to the nearest whole number. If you
round off, then you will exceed your maximum error of the estimate in some cases.
By rounding up, you will have a smaller maximum error of the estimate than allowed,
but this is better than having a larger one than desired.
Population Mean
Here is the formula for the sample size which is obtained by
solving the maximum error of the estimate formula for the
population mean for n.
Population Proportion
Here is the formula for the sample size which is obtained by
solving the maximum error of the estimate formula for the
population proportion for n. Some texts use p hat and q hat,
but since the sample hasn't been taken, there is no value for
the sample proportion. p and q are taken from a previous
study, if one is available. If there is no previous study or estimate available, then use
0.5 for p and q, as these are the values which will give the largest sample size, and it is
better to have too large of a sample size and come under the maximum error of the
estimate than to have too small of a sample size and exceed the maximum error of the
estimate.
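Both sample-size formulas, with the round-up rule, can be sketched as follows (sigma = 15, E = 2 for the mean and E = 0.03 with no prior study for the proportion are hypothetical numbers):

```python
from statistics import NormalDist
from math import ceil

z = NormalDist().inv_cdf(0.975)        # 95% confidence, approximately 1.96

# Population mean: n = (z * sigma / E)^2, with hypothetical sigma = 15, E = 2
n_mean = ceil((z * 15 / 2) ** 2)       # round UP, never off

# Population proportion: n = p*q*(z/E)^2; no prior study, so p = q = 0.5, E = 0.03
n_prop = ceil(0.5 * 0.5 * (z / 0.03) ** 2)

print(n_mean, n_prop)
```

`math.ceil` implements "round up to the next whole number", which keeps the actual maximum error at or below the one desired.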
Chapter 9
Decision
A statement based upon the null hypothesis. It is either "reject the null
hypothesis" or "fail to reject the null hypothesis". We will never accept the null
hypothesis.
Conclusion
A statement which indicates the level of evidence (sufficient or insufficient), at
what level of significance, and whether the original claim is rejected (null) or
supported (alternative).
Introduction
Be sure to read through the definitions for this section before trying to make sense out
of the following.
The first thing to do when given a claim is to write the claim mathematically (if
possible), and decide whether the given claim is the null or alternative hypothesis. If
the given claim contains equality, or a statement of no change from the given or
accepted condition, then it is the null hypothesis, otherwise, if it represents change, it
is the alternative hypothesis.
The following example is not a mathematical example, but may help introduce the
concept.
Example
"He's dead, Jim," said Dr. McCoy to Captain Kirk.
Mr. Spock, as the science officer, is put in charge of statistically determining the
correctness of Bones' statement and deciding the fate of the crew member (to vaporize
or try to revive).
The correct answer is that there is change. Dead represents a change from the
accepted state of alive. The null hypothesis always represents no change. Therefore,
the hypotheses are:
H0: The crew member is alive (no change)
H1: The crew member is dead (change)
States of nature are something that you, as a statistician have no control over. Either it
is, or it isn't. This represents the true nature of things.
Decisions are something that you have control over. You may make a correct decision
or an incorrect decision. It depends on the state of nature as to whether your decision
is correct or in error.
There are four possibilities that can occur based on the two possible states of nature
and the two decisions which we can make.
Statisticians will never accept the null hypothesis, we will fail to reject. In other
words, we'll say that it isn't, or that we don't have enough evidence to say that it isn't,
but we'll never say that it is, because someone else might come along with another
sample which shows that it isn't and we don't want to be wrong.
Statistically (double) speaking ...
                 State of Nature
Decision         H0 True                        H0 False
Reject H0        Patient is alive,              Patient is dead,
                 sufficient evidence of death   sufficient evidence of death
In English ...
                 State of Nature
Decision         H0 True                  H0 False
Reject H0        Vaporize a live person   Vaporize a dead person
                 State of Nature
Decision         H0 True                  H0 False
Reject H0        Type I Error (alpha)     Correct Assessment
Since Type I is the more serious error (usually), that is the one we concentrate on. We
usually pick alpha to be very small (0.05, 0.01). Note: alpha is not a Type I error.
Alpha is the probability of committing a Type I error. Likewise beta is the probability
of committing a Type II error.
Conclusions
Conclusions are sentence answers which include whether there is enough evidence or
not (based on the decision), the level of significance, and whether the original claim is
supported or rejected.
Conclusions are based on the original claim, which may be the null or alternative
hypothesis. The decision, however, is always based on the null hypothesis.
                  Original Claim
Decision          H0 ("REJECT")                    H1 ("SUPPORT")
Reject H0         There is sufficient evidence     There is sufficient evidence
("SUFFICIENT")    at the alpha level of            at the alpha level of
                  significance to reject the       significance to support the
                  claim that (insert original      claim that (insert original
                  claim here)                      claim here)
This document will explain how to determine if the test is a left tail, right tail, or two-
tail test.
Decision Rule: Reject H0 if t.s. < c.v. (left) or t.s. > c.v. (right)
(Reject H0 if the test statistic is more extreme than the critical value)
Using the confidence interval to perform a hypothesis test only works with a two-
tailed test.
If the hypothesized value of the parameter lies within the confidence interval
with a 1-alpha level of confidence, then the decision at an alpha level of
significance is to fail to reject the null hypothesis.
If the hypothesized value of the parameter lies outside the confidence interval
with a 1-alpha level of confidence, then the decision at an alpha level of
significance is to reject the null hypothesis.
1. Write the original claim and identify whether it is the null hypothesis or the
alternative hypothesis.
2. Write the null and alternative hypothesis. Use the alternative hypothesis to
identify the type of test.
3. Write down all information from the problem.
4. Find the critical value using the tables
5. Compute the test statistic
6. Make a decision to reject or fail to reject the null hypothesis. A picture showing
the critical value and test statistic may be useful.
7. Write the conclusion.
You are testing mu, you are not testing x bar. If you knew the value of mu, then there
would be nothing to test.
The critical value is obtained from the normal table, or the bottom line from the t-
table.
The critical value is obtained from the t-table. The degrees of freedom for this test is
n-1.
If you're performing a t-test where you found the statistics on the calculator (as
opposed to being given them in the problem), then use the VARS key to pull up the
statistics in the calculation of the test statistic. This will save you data entry and avoid
round off errors.
General Pattern
Notice the general pattern of these test statistics is (observed - expected) / standard
deviation.
In 1897, legislation was introduced in Indiana which would have made 3.2 the official
value of pi for the state. Now, that sounds ridiculous, but is it really?
Claim: Pi is 3.2.
To test the claim, we're going to generate a whole bunch of values for pi, and then test
to see if the mean is 3.2.
Procedure:
The area of the unit circle is pi. The area of the unit circle in the first quadrant is pi/4.
The calculator generates random numbers between 0 and 1. What we're going to do is
generate two random numbers which will simulate a randomly selected point in a unit
square in the first quadrant. If the point is within the circle, then the distance from
(0,0) will be less than or equal to 1, if the point is outside the circle, the distance will
be greater than 1.
Have the calculator generate a squared distance from zero (the square of the distance
illustrates the same properties as far as being less than 1 or greater than 1). Do this 25
times. Each time, record whether the point is inside the circle (<= 1) or outside the
circle (> 1).
RAND^2 + RAND^2
Pi/4 is approximately equal to the ratio of the points inside the circle to the total
number of points. Therefore, pi will be 4 times the ratio of the points inside the circle
to the total number of points.
This whole process is repeated several times, and the mean and standard deviation
are recorded.
The hypothesis test is then conducted using the t-test to see if the true mean is 3.2
(based on the sample mean).
Example:
The critical value, with a 0.05 level of significance since none was stated, for a two-
tail test with 19 degrees of freedom is t = +/- 2.093.
Since the test statistic is not in the critical region, the decision is to fail to reject the
null hypothesis.
There is insufficient evidence at the 0.05 level of significance to reject the claim that
pi is 3.2.
Note the double speak, but it serves to illustrate the point. We would not dare to claim
that pi was 3.2, even though this sample seems to illustrate this. The sample doesn't
provide enough evidence to show it's not 3.2, but there may be another sample
somewhere which does provide enough evidence (let's hope so). So, we won't say it is
3.2, just that we don't have enough evidence to prove it isn't 3.2.
You are testing p, you are not testing p hat. If you knew the value of p, then there
would be nothing to test.
The critical value is found from the normal table, or from the bottom
row of the t-table.
The steps involved in the hypothesis testing remain the same. The only thing that
changes is the formula for calculating the test statistic and perhaps the distribution
which is used.
General Pattern
Notice the general pattern of these test statistics is (observed - expected) / standard
deviation.
Classical Approach
The Classical Approach to hypothesis testing is to compare a test statistic and a
critical value. It is best used for distributions which give areas and require you to look
up the critical value (like the Student's t distribution) rather than distributions which
have you look up a test statistic to find an area (like the normal distribution).
The Classical Approach also has three different decision rules, depending on whether
it is a left tail, right tail, or two tail test.
One problem with the Classical Approach is that if a different level of significance is
desired, a different critical value must be read from the table.
P-Value Approach
The P-Value Approach, short for Probability Value, approaches hypothesis testing
from a different manner. Instead of comparing z-scores or t-scores as in the classical
approach, you're comparing probabilities, or areas.
The level of significance (alpha) is the area in the critical region. That is, the area in
the tails to the right or left of the critical values.
The p-value is the area to the right or left of the test statistic. If it is a two tail test, then
look up the probability in one tail and double it.
If the test statistic is in the critical region, then the p-value will be less than the level
of significance. It does not matter whether it is a left tail, right tail, or two tail test.
This rule always holds.
Reject the null hypothesis if the p-value is less than the level of significance.
You will fail to reject the null hypothesis if the p-value is greater than or equal to the
level of significance.
The p-value approach is best suited for the normal distribution when doing
calculations by hand. However, many statistical packages will give the p-value but not
the critical value. This is because it is easier for a computer or calculator to find the
probability than it is to find the critical value.
Another benefit of the p-value is that the statistician immediately knows at what level
the testing becomes significant. That is, a p-value of 0.06 would be rejected at an 0.10
level of significance, but it would fail to reject at an 0.05 level of significance.
Warning: Do not decide on the level of significance after calculating the test statistic
and finding the p-value.
Here is a proportion to help you keep the order straight. Any proportion equivalent to
the following statement is correct.
The test statistic is to the p-value as the critical value is to the level of significance.
Chapter 10
Definitions
Dependent Samples
Samples in which the subjects are paired or matched in some way. Dependent
samples must have the same sample size, but it is possible to have the same
sample size without being dependent.
Independent Samples
Samples which are not related to each other. Independent samples may or may
not have the same sample size.
There are two possible cases when testing two population means, the dependent
case and the independent case. Most books treat the independent case first, but I'm
putting the dependent case first because it follows immediately from the test for
a single population mean in the previous chapter.
Here are some steps to help you accomplish the hypothesis testing
1. Write down the original claim in simple terms. For example: After > Before.
2. Move everything to one side: After - Before > 0.
3. Call the difference you have on the left side D: D = After - Before > 0.
4. Convert to proper notation:
5. Compute the new variable D and be sure to follow the order you have defined
in step 3. Do not simply take the smaller away from the larger. From this
point, you can think of having a new set of values. Technically, they are called
D, but you can think of them as x. The original values from the two samples
can be discarded.
6. Find the mean and standard deviation of the variable D. Use these as the
values in the t-test from chapter 9.
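The six steps above reduce to a short computation once D is formed. A sketch with hypothetical paired data (the before/after scores are invented for illustration):

```python
from statistics import mean, stdev

# hypothetical paired data for one group of subjects
before = [120, 131, 118, 125, 140]
after  = [128, 133, 121, 132, 141]

# step 5: D = After - Before, in the order defined by the claim
d = [a - b for a, b in zip(after, before)]

# step 6: treat D as a single sample and use the t-test from chapter 9
d_bar, s_d = mean(d), stdev(d)
t = d_bar / (s_d / len(d) ** 0.5)      # df = n - 1
print(round(d_bar, 4), round(t, 4))
```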
It is important to note that the variance of the difference is the sum of the variances,
not the standard deviation of the difference is the sum of the standard deviations.
When we go to find the standard error, we must combine variances to do so. Also,
you're probably wondering why the variance of the difference is the sum of the
variances instead of the difference of the variances. Since the values are squared, the
negative associated with the second variable becomes positive, and it becomes the
sum of the variances. Also, variances can't be negative, and if you took the difference
of the variances, it could be negative.
Remember that the variance of the sampling distribution of the sample means is the
variance divided by the sample size, so what we are doing is adding the variance of
each mean together. The test statistic is:
t = ( (xbar1 - xbar2) - (mu1 - mu2) ) / sqrt( s1^2/n1 + s2^2/n2 )
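A minimal sketch of this unpooled test statistic for the difference of two independent means (the summary statistics below are made up):

```python
# Test statistic for the difference of two independent means, unequal
# variances assumed. Summary statistics are invented for illustration.
from math import sqrt

xbar1, s1, n1 = 84.0, 6.0, 30
xbar2, s2, n2 = 80.0, 5.0, 36

# The standard error combines the variance of each sample mean:
# var(xbar) = s^2 / n, and variances (not standard deviations) add.
se = sqrt(s1**2 / n1 + s2**2 / n2)
t = ((xbar1 - xbar2) - 0) / se   # H0: mu1 - mu2 = 0
print(round(t, 3))   # 2.906
```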
Ok, you're probably wondering how you know whether the variances are equal when
you don't know what they are. Some books teach the F-test to test the equality of two
variances, and if your book does that, then you should use the F-test to see. Other
books (statisticians) argue that if you do the F-test first to see if the variances are
equal, and then use the same level of significance to perform the t-test on the
difference of the means, the overall level of significance isn't what you intended. So,
texts like Bluman and Triola simply tell the student whether or not the variances are
equal.
When the variances are assumed equal, they are combined into a pooled variance,
which is simply the weighted mean of the variances. The weighting factors are the
degrees of freedom:
sp^2 = ( (n1 - 1) s1^2 + (n2 - 1) s2^2 ) / (n1 + n2 - 2)
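The pooled variance as a weighted mean can be sketched directly (the sample variances and sizes below are invented):

```python
# Pooled variance as the weighted mean of the sample variances,
# weighted by the degrees of freedom. Numbers are illustrative.
s1_sq, n1 = 36.0, 10
s2_sq, n2 = 25.0, 16

sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
print(sp_sq)   # 29.125
```

Notice the result lands between the two variances, closer to the one with the larger degrees of freedom, as a weighted mean should.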
Remember that the normal distribution can be used to approximate the binomial
distribution in certain cases. Specifically, the approximation was considered good
when np and nq were both at least 5. Well, now, we're talking about two proportions,
so np and nq must be at least 5 for both samples.
The test statistic has the same general pattern as before (observed minus expected
divided by standard error). The test statistic used here is similar to that for a single
population proportion, except the difference of proportions is used instead of a single
proportion, and the value of p-bar is used instead of p in the standard error portion.
Some people will be tempted to try to simplify the denominator of this test statistic
incorrectly. It can be simplified, but the correct simplification is not to simply place
the product of p-bar and q-bar over the sum of the n's. Remember that to add fractions
you must have a common denominator; that is why this simplification is incorrect.
Chapter 11
Definitions
Coefficient of Determination
The percent of the variation that can be explained by the regression equation
Correlation
A method used to determine if a relationship between variables exists
Correlation Coefficient
A statistic or parameter which measures the strength and direction of a
relationship between two variables
Dependent Variable
A variable in correlation or regression that cannot be controlled; that is, it
depends on the independent variable.
Independent Variable
A variable in correlation or regression which can be controlled, that is, it is
independent of the other variable.
Pearson Product Moment Correlation Coefficient
A measure of the strength and direction of the linear relationship between two
variables
Regression
A method used to describe the relationship between two variables.
Regression Line
The best fit line.
Scatter Plot
A plot of the data values on a coordinate system. The independent variable is
graphed along the x-axis and the dependent variable along the y-axis
Standard Error of the Estimate
The standard deviation of the observed values about the predicted values
See the instructions on using the calculator to do statistics and lists. This provides an
overview as well as some helpful advice for working with statistics on the calculator.
Scatter Plots
1. Enter the x values into L1 and the y variables into L2.
2. Go to Stat Plot (2nd y=)
3. Turn Plot 1 on
4. Choose the type to be scatter plot (1st type)
5. Set Xlist to L1
6. Set Ylist to L2
7. Set the Mark to any of the three choices
8. Zoom to the Stat setting (#9)
Note, the Ylist and Mark won't show up until you select a scatter plot
Regression Lines
1. Set up the scatter plot as instructed above
2. Go into the Stats, Calc, Setup screen
3. Setup the 2-Var Stats so that: Xlist = L1, Ylist = L2, Freq = 1
4. Calculate the Linear Regression (ax+b) (#5)
5. Go into the Plot screen.
6. Position the cursor on the Y1 plot and hit CLEAR to erase it.
7. While still in the Y1 data entry field, go to the VARS, STATS, EQ screen and
choose option 7 which is the regression equation
8. Hit GRAPH
Do this once
It is important that you calculate the linear regression variables before trying to graph
the regression line. If you change the data in the lists or have not calculated the linear
regression equations, then you will get an " ERR: Undefined" when you try to graph
the data.
Be sure to turn off the stats plots and/or the Y1 plot when you need to graph other
data.
Stats: Correlation
Sum of Squares
We introduced a notation earlier in the course called the sum of squares. This SS
notation makes these formulas much easier to work with:
SS(x) = sum(x^2) - (sum x)^2 / n
SS(y) = sum(y^2) - (sum y)^2 / n
SS(xy) = sum(xy) - (sum x)(sum y) / n
The correlation coefficient is then r = SS(xy) / sqrt( SS(x) * SS(y) ).
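A quick sketch of computing r with the SS notation (the five (x, y) pairs are invented):

```python
# Pearson's r computed with the SS (sum of squares) notation;
# the data pairs are invented for illustration.
from math import sqrt

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)

ss_x  = sum(v * v for v in x) - sum(x) ** 2 / n
ss_y  = sum(v * v for v in y) - sum(y) ** 2 / n
ss_xy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / n

r = ss_xy / sqrt(ss_x * ss_y)
print(round(r, 3))   # 0.775
```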
r only measures the strength of a linear relationship. There are other kinds of
relationships besides linear.
r is always between -1 and 1 inclusive. -1 means perfect negative linear
correlation and +1 means perfect positive linear correlation
r has the same sign as the slope of the regression (best fit) line
r does not change if the independent (x) and dependent (y) variables are
interchanged
r does not change if the scale on either variable is changed. You may multiply,
divide, add, or subtract a value to/from all the x-values or y-values without
changing the value of r.
the test statistic based on r has a Student's t distribution
Hypothesis Testing
The claim we will be testing is "There is significant linear correlation"
The Greek letter for r is rho, so the parameter used for linear correlation is rho
H0: rho = 0
H1: rho <> 0
The test statistic is given by: t = r * sqrt( (n - 2) / (1 - r^2) )
Now, there are n-2 degrees of freedom this time. This is a difference from before. As
an over-simplification, you subtract one degree of freedom for each variable, and
since there are 2 variables, the degrees of freedom are n-2.
the formula for the test statistic is t = (r - rho) / sqrt( (1 - r^2) / (n - 2) ), which
does look like the pattern we're looking for (observed minus expected, divided by the
standard error).
Remember that hypothesis testing is always done under the assumption that the null
hypothesis is true. Since H0 is rho = 0, this formula is equivalent to the one given in
the book.
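The equivalence of the two forms under the null hypothesis can be checked numerically (r and n below are illustrative values):

```python
# Under H0 (rho = 0), the "pattern" form of the test statistic,
# (r - 0) / sqrt((1 - r^2)/(n - 2)), equals the book's form,
# r * sqrt((n - 2)/(1 - r^2)). r and n are invented values.
from math import sqrt, isclose

r, n = 0.775, 5

t_pattern = (r - 0) / sqrt((1 - r**2) / (n - 2))
t_book    = r * sqrt((n - 2) / (1 - r**2))
print(isclose(t_pattern, t_book))   # True
```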
Additional Note: 1 - r^2 is later identified as the coefficient of non-determination
The test statistic in this case is simply the value of r. You compare the absolute value
of r (don't worry if it's negative or positive) to the critical value in the table. If the test
statistic is greater than the critical value, then there is significant linear correlation.
Furthermore, you are able to say there is significant positive linear correlation if the
original value of r is positive, and significant negative linear correlation if the original
value of r was negative.
This is the most common technique used. However, the first technique, with the t-
value must be used if it is not a two-tail test, or if a different level of significance
(other than 0.01 or 0.05) is desired.
Causation
If there is a significant linear correlation between two variables, then one of five
situations can be true.
1. There is a direct cause and effect relationship.
2. There is a reverse cause and effect relationship.
3. The relationship may be caused by a third variable.
4. The relationship may be caused by complex interactions of several variables.
5. The relationship may be coincidental.
Stats: Regression
The idea behind regression is that when there is significant linear correlation, you can
use a line to estimate the value of the dependent variable for certain values of the
independent variable.
The regression equation should only be used when there is significant linear
correlation, that is, when you reject the null hypothesis that rho = 0 in a correlation
hypothesis test. In addition:
The value of the independent variable being used in the estimation is close to
the original values. That is, you should not use a regression equation obtained
using x's between 10 and 20 to estimate y when x is 200.
The regression equation should not be used with different populations. That
is, if x is the height of a male, and y is the weight of a male, then you shouldn't
use the regression equation to estimate the weight of a female.
The regression equation shouldn't be used to forecast values not from that
time frame. If data is from the 1960's, it probably isn't valid in the 1990's.
Assuming that you've decided that you can have a regression equation because there is
significant linear correlation between the two variables, the equation becomes: y'
= ax + b or y' = a + bx (some books use y-hat instead of y-prime). The Bluman text
uses the second formula; however, more people are familiar with the notion of y = mx
+ b, so I will use the first.
The regression line is sometimes called the "line of best fit" or the "best fit line".
Since it "best fits" the data, it makes sense that the line passes through the means.
The regression equation is the line with slope a passing through the point (xbar, ybar),
the means of x and y. It also turns out that the slope of the regression line can be
written as a = r * (s_y / s_x).
Since the standard deviations can't be negative, the sign of the slope is determined by
the sign of the correlation coefficient. This agrees with the statement made earlier that
the slope of the regression line has the same sign as the correlation coefficient.
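The slope-through-the-means idea can be sketched in Python (the data pairs are invented):

```python
# Regression slope written as a = r * (s_y / s_x), with the line
# forced through the point (xbar, ybar). Data are invented.
from math import sqrt
from statistics import mean, stdev

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)

ss_x  = sum(v * v for v in x) - sum(x) ** 2 / n
ss_y  = sum(v * v for v in y) - sum(y) ** 2 / n
ss_xy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / n
r = ss_xy / sqrt(ss_x * ss_y)

a = r * (stdev(y) / stdev(x))   # slope: same sign as r
b = mean(y) - a * mean(x)       # so the line passes through the means
print(round(a, 2), round(b, 2))   # 0.6 2.2
```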
TI-82
Luckily, the TI-82 will find these values for us (isn't it a wonderful calculator?). We
can also use the TI-82 to plot the regression line on the scatter plot.
Calculating Values
1. Enter the data. Put the x-values into list 1 and the y-values into list 2.
2. Go into the Stats, Calc, Setup screen
3. Setup the 2-Var Stats so that: Xlist = L1, Ylist = L2, Freq = 1
4. Calculate the Linear Regression (ax+b) (#5)
This screen will give you the sample linear correlation coefficient, r; the slope
of the regression equation, a; and the y-intercept of the regression equation,
b.
To write the regression equation, replace the values of a and b into the
equation "y-hat = ax+b".
To find the coefficient of determination, square r. You can find the variable r
under VARS, STATS, EQ, r (#6).
Coefficient of Determination
The coefficient of determination is ...
the percent of the variation that can be explained by the regression equation.
the explained variation divided by the total variation
the square of r
Every sample has some variation in it (unless all the values are identical, and that's
unlikely to happen). The total variation is made up of two parts, the part that can be
explained by the regression equation and the part that can't be explained by the
regression equation.
Well, the ratio of the explained variation to the total variation is a measure of how
good the regression line is. If the regression line passed through every point on the
scatter plot exactly, it would be able to explain all of the variation. The further the line
is from the points, the less it is able to explain.
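The explained-over-total ratio can be checked numerically against r^2; this sketch fits the regression line to invented data and compares the two:

```python
# Coefficient of determination as explained variation / total variation,
# verified against r^2. Data pairs are invented for illustration.
from math import sqrt, isclose
from statistics import mean

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)

ss_x  = sum(v * v for v in x) - sum(x) ** 2 / n
ss_y  = sum(v * v for v in y) - sum(y) ** 2 / n
ss_xy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / n
r = ss_xy / sqrt(ss_x * ss_y)

a = ss_xy / ss_x                  # regression slope
b = mean(y) - a * mean(x)
y_hat = [a * xi + b for xi in x]  # predicted values

ybar = mean(y)
explained = sum((yh - ybar) ** 2 for yh in y_hat)
total     = sum((yi - ybar) ** 2 for yi in y)
print(isclose(explained / total, r ** 2))   # True
```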
Coefficient of Non-Determination
The coefficient of non-determination is the percent of the variation which is
unexplained by the regression equation; it is 1 - r^2, one minus the coefficient of
determination.
Chapter 12
Stats: Chi-Square
Definitions
Chi-square distribution
A distribution obtained by multiplying the ratio of the sample variance to the
population variance by the degrees of freedom, when random samples are
selected from a normally distributed population
Contingency Table
Data arranged in table form for the chi-square independence test
Expected Frequency
The frequencies obtained by calculation.
Goodness-of-fit Test
A test to see if a sample comes from a population with the given distribution.
Independence Test
A test to see if the row and column variables are independent.
Observed Frequency
The frequencies obtained by observation. These are the sample frequencies.
Chi-Square Probabilities
Since the chi-square distribution isn't symmetric, the method for looking up left-tail
values is different from the method for looking up right tail values.
You can interpolate. This is probably the more accurate way. Interpolation
involves estimating the critical value by figuring how far the given degrees of
freedom are between the two df in the table and going that far between the
critical values in the table. Most people born in the 70's didn't have to learn
interpolation in high school because they had calculators which would do
logarithms (we had to use tables in the "good old" days).
You can go with the critical value which is less likely to cause you to reject in
error (type I error). For a right tail test, this is the critical value further to the
right (larger). For a left tail test, it is the value further to the left (smaller). For a
two-tail test, it's the value further to the left and the value further to the right.
Note, it is not the column with the degrees of freedom further to the right, it's
the critical value which is further to the right. The Bluman text has this wrong
on page 422. The guideline is right, the instructions are wrong.
The test statistic for testing a single population variance is given by:
chi^2 = (n - 1) s^2 / sigma^2
Testing is done in the same manner as before. Remember, all hypothesis testing is
done under the assumption the null hypothesis is true.
Confidence Intervals
If you solve the test statistic formula for the population variance, you get:
(n - 1) s^2 / chi^2(right) < sigma^2 < (n - 1) s^2 / chi^2(left)
Note, the left-hand endpoint of the confidence interval comes when the right critical
value is used and the right-hand endpoint of the confidence interval comes when the
left critical value is used. This is because the critical values are in the denominator and
so dividing by the larger critical value (right tail) gives the smaller endpoint.
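A short sketch of the interval (the sample values are hypothetical, and the two chi-square critical values are ordinary table lookups for df = 9 at 95% confidence):

```python
# 95% confidence interval for a population variance. The chi-square
# critical values are table lookups for df = 9; n and s^2 are invented.
n, s_sq = 10, 12.0
chi_right = 19.023   # chi^2 with 0.025 in the right tail, df = 9
chi_left  = 2.700    # chi^2 with 0.975 in the right tail, df = 9

# Dividing by the LARGER critical value gives the SMALLER endpoint.
lower = (n - 1) * s_sq / chi_right
upper = (n - 1) * s_sq / chi_left
print(round(lower, 2), round(upper, 2))   # 5.68 40.0
```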
The idea behind the chi-square goodness-of-fit test is to see if the sample comes from
the population with the claimed distribution. Another way of looking at that is to ask
if the frequency distribution fits a specific pattern.
Two values are involved, an observed value, which is the frequency of a category
from a sample, and the expected frequency, which is calculated based upon the
claimed distribution. The derivation of the formula is very similar to that of the
variance which was done earlier (chapter 2 or 3).
The idea is that if the observed frequency is really close to the claimed (expected)
frequency, then the square of the deviations will be small. The square of the deviation
is divided by the expected frequency to weight frequencies. A difference of 10 may be
very significant if 12 was the expected frequency, but a difference of 10 isn't very
significant at all if the expected frequency was 1200.
If the sum of these weighted squared deviations is small, the observed frequencies are
close to the expected frequencies and there would be no reason to reject the claim that
it came from that distribution. Only when the sum is large is there a reason to question
the distribution. Therefore, the chi-square goodness-of-fit test is always a right tail
test.
The data are the observed frequencies. This means that there is only one data
value for each category. Therefore, ...
The degrees of freedom is one less than the number of categories, not one
less than the sample size.
It is always a right tail test.
It has a chi-square distribution.
The value of the test statistic doesn't change if the order of the categories is
switched.
The expected frequencies depend on how the claimed distribution is stated; there are
four cases.
1. The values occur with equal frequency. Other words for this are "uniform",
"no preference", or "no difference". To find the expected frequencies, total
the observed frequencies and divide by the number of categories. This
quotient is the expected frequency for each category.
2. Specific proportions or probabilities are given. To find the expected
frequencies, multiply the total of the observed frequencies by the probability
for each category.
3. The expected frequencies are given to you. In this case, you don't have to do
anything.
4. A specific distribution is claimed. For example, "The data is normally
distributed". To work a problem like this, you need to group the data and find
the frequency for each class. Then, find the probability of being within that
class by converting the scores to z-scores and looking up the probabilities.
Finally, multiply the probabilities by the total observed frequency. (It's not
really as bad as it sounds).
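Case 1 (equal frequencies) can be sketched end-to-end; the observed counts below are invented:

```python
# Chi-square goodness-of-fit sketch for case 1 (equal frequencies).
# The observed counts are invented for illustration.
observed = [20, 30, 25, 25]

# Case 1: expected frequency = total of observed / number of categories.
total = sum(observed)
k = len(observed)
expected = [total / k] * k

# Weighted squared deviations: (O - E)^2 / E, summed over categories.
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = k - 1   # one less than the number of categories, NOT the sample size
print(round(chi_sq, 2), df)   # 2.0 3
```

Compare chi_sq to the right-tail chi-square critical value with df degrees of freedom; it is always a right tail test.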
In the test for independence, the claim is that the row and column variables are
independent of each other. This is the null hypothesis.
The multiplication rule said that if two events were independent, then the probability
of both occurring was the product of the probabilities of each occurring. This is key to
working the test for independence. If you end up rejecting the null hypothesis, then
the assumption must have been wrong and the row and column variables are
dependent. Remember, all hypothesis testing is done under the assumption the null
hypothesis is true.
The test statistic used is the same as the chi-square goodness-of-fit test. The principle
behind the test for independence is the same as the principle behind the goodness-of-
fit test. The test for independence is always a right tail test.
In fact, you can think of the test for independence as a goodness-of-fit test where the
data is arranged into table form. This table is called a contingency table.
The degrees of freedom are the degrees of freedom for the row variable times
the degrees of freedom for the column variable. It is not one less than the
sample size, it is the product of the two degrees of freedom.
It is always a right tail test.
It has a chi-square distribution.
The expected value is computed by taking the row total times the column
total and dividing by the grand total
The value of the test statistic doesn't change if the order of the rows or
columns is switched.
The value of the test statistic doesn't change if the rows and columns are
interchanged (transpose of the matrix)
The test statistic is chi^2 = sum( (O - E)^2 / E ), where the sum is taken over every
cell of the table.
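The expected-frequency rule (row total times column total over grand total) can be sketched on a small, made-up contingency table:

```python
# Expected frequencies and test statistic for a chi-square test of
# independence on an invented 2x2 contingency table.
observed = [[20, 30],
            [30, 20]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# expected = row total * column total / grand total
expected = [[r * c / grand for c in col_totals] for r in row_totals]

chi_sq = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
             for i in range(len(observed))
             for j in range(len(observed[0])))

# df = (rows - 1) * (columns - 1), the product of the two df's
df = (len(observed) - 1) * (len(observed[0]) - 1)
print(round(chi_sq, 2), df)   # 4.0 1
```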
Chapter 13
Stats: F-Test
Definitions
F-distribution
The ratio of two independent chi-square variables divided by their respective
degrees of freedom. If the population variances are equal, this simplifies to be
the ratio of the sample variances.
Analysis of Variance (ANOVA)
A technique used to test a hypothesis concerning the means of three or more
populations.
One-Way Analysis of Variance
Analysis of Variance when there is only one independent variable. The null
hypothesis will be that all population means are equal, the alternative
hypothesis is that at least one mean is different.
F-Test
The F-test is designed to test if two population variances are equal. It does this by
comparing the ratio of two variances. So, if the variances are equal, the ratio of the
variances will be 1.
All hypothesis testing is done under the assumption the null hypothesis is true
If the null hypothesis is true, then the F test-statistic given above can be
simplified (dramatically). This ratio of sample variances will be the test
statistic used. If the null hypothesis is false, then we will reject the null
hypothesis that the ratio was equal to 1 and our assumption that the variances
were equal.
There are several different F-tables. Each one has a different level of significance. So,
find the correct level of significance first, and then look up the numerator degrees of
freedom and the denominator degrees of freedom to find the critical value.
You will notice that all of the tables only give level of significance for right tail tests.
Because the F distribution is not symmetric, and there are no negative values, you
may not simply take the opposite of the right critical value to find the left critical
value. The way to find a left critical value is to reverse the degrees of freedom, look
up the right critical value, and then take the reciprocal of this value. For example, the
critical value with 0.05 on the left with 12 numerator and 15 denominator degrees of
freedom is found by taking the reciprocal of the critical value with 0.05 on the right
with 15 numerator and 12 denominator degrees of freedom.
Since the left critical values are a pain to calculate, they are often avoided altogether.
This is the procedure followed in the textbook. You can force the F test into a right
tail test by placing the sample with the larger variance in the numerator and the smaller
variance in the denominator. It does not matter which sample has the larger sample
size, only which sample has the larger variance.
The numerator degrees of freedom will be the degrees of freedom for whichever
sample has the larger variance (since it is in the numerator) and the denominator
degrees of freedom will be the degrees of freedom for whichever sample has the
smaller variance (since it is in the denominator).
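The numerator/denominator arrangement can be sketched directly (the sample variances and sizes are invented):

```python
# Forcing the F-test into a right-tail test: the larger sample variance
# goes in the numerator, regardless of sample size. Values are invented.
s1_sq, n1 = 24.0, 16
s2_sq, n2 = 12.0, 10

if s1_sq >= s2_sq:
    F = s1_sq / s2_sq
    df_num, df_den = n1 - 1, n2 - 1
else:
    F = s2_sq / s1_sq
    df_num, df_den = n2 - 1, n1 - 1

print(F, df_num, df_den)   # 2.0 15 9
```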
If a two-tail test is being conducted, you still have to divide alpha by 2, but you only
look up and compare the right critical value.
Assumptions / Notes
The larger variance always goes in the numerator, forcing a right tail test.
For a two-tail test, divide alpha by 2, but only look up the right critical value.
The populations from which the samples were obtained must be normally distributed,
and the samples must be independent.
A One-Way Analysis of Variance is a way to test the equality of three or more means
at one time by using variances.
Assumptions
The populations from which the samples were obtained must be normally or
approximately normally distributed.
The samples must be independent.
The variances of the populations must be equal.
Hypotheses
The null hypothesis will be that all population means are equal, the alternative
hypothesis is that at least one mean is different.
In the following, lower case letters apply to the individual samples and capital letters
apply to the entire set collectively. That is, n is one of many sample sizes, but N is the
total sample size.
Grand Mean
The grand mean of a set of samples is the total of all the data values
divided by the total sample size. This requires that you have all of the
sample data available to you, which is usually the case, but not always.
It turns out that all that is necessary to perform a one-way analysis
of variance are the number of samples, the sample means, the sample variances, and
the sample sizes.
Another way to find the grand mean is to find the weighted average of
the sample means. The weight applied is the sample size.
Total Variation
There is the between group variation and the within group variation. The whole idea
behind the analysis of variance is to compare the ratio of between group variance to
within group variance. If the variance caused by the interaction between the samples
is much larger when compared to the variance that appears within each group, then it
is because the means aren't the same.
The variance due to the interaction between the samples is denoted MS(B) for Mean
Square Between groups. This is the between group variation divided by its degrees of
freedom, k - 1.
The variance due to the differences within individual samples is denoted MS(W) for
Mean Square Within groups. This is the within group variation divided by its degrees
of freedom, N - k.
F test statistic
The F test statistic is the ratio of the two variances: F = MS(B) / MS(W).
Summary Table
All of this sounds like a lot to remember, and it is. However, there is a table which
makes things really nice.
Source     SS       df     MS             F
Between    SS(B)    k-1    SS(B)/(k-1)    MS(B)/MS(W)
Within     SS(W)    N-k    SS(W)/(N-k)
Total      SS(T)    N-1
Notice that each Mean Square is just the Sum of Squares divided by its degrees of
freedom, and the F value is the ratio of the mean squares. Do not put the largest
variance in the numerator, always divide the between variance by the within variance.
If the between variance is smaller than the within variance, then the means are really
close to each other and you will fail to reject the claim that they are all equal. The
degrees of freedom of the F-test are in the same order they appear in the table (nifty,
eh?).
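Since only summary statistics are needed, the whole table can be sketched from sample sizes, means, and variances alone (all numbers below are invented):

```python
# One-way ANOVA from summary statistics only (sample sizes, means,
# variances) -- the raw data are not needed. Numbers are invented.
k = 3
ns    = [5, 5, 5]
means = [10.0, 12.0, 14.0]
vars_ = [4.0, 5.0, 3.0]

N = sum(ns)
grand_mean = sum(n * m for n, m in zip(ns, means)) / N  # weighted mean

ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
ss_within  = sum((n - 1) * v for n, v in zip(ns, vars_))

df_b, df_w = k - 1, N - k
ms_b = ss_between / df_b   # each MS is just SS divided by its df
ms_w = ss_within / df_w
F = ms_b / ms_w            # always between over within
print(round(F, 3), df_b, df_w)   # 5.0 2 12
```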
Decision Rule
The decision will be to reject the null hypothesis if the test statistic from the table is
greater than the F critical value with k-1 numerator and N-k denominator degrees of
freedom.
If the decision is to reject the null, then at least one of the means is different.
However, the ANOVA does not tell you where the difference lies. For this, you need
another test, either the Scheffe' or Tukey test.
When the decision from the One-Way Analysis of Variance is to reject the null
hypothesis, it means that at least one of the means isn't the same as the other means.
What we need is a way to figure out where the differences lie, not just that there is a
difference.
This is where the Scheffe' and Tukey tests come into play. They will help us analyze
pairs of means to see if there is a difference -- much like the difference of two means
covered earlier.
Hypotheses
Both tests are set up to test if pairs of means are different. The formulas
refer to mean i and mean j. The values of i and j vary, and the total
number of tests will be equal to a combination of k objects, 2 at a time
C(k,2), where k is the number of samples.
Scheffé Test
The Scheffe' test is customarily used with unequal sample sizes, although it could be
used with equal sample sizes.
The critical value for the Scheffe' test is the degrees of freedom for the between
variance times the critical value for the one-way ANOVA. This simplifies to be:
CV = (k-1) F(k-1,N-k,alpha)
This is a right tail test: if the means are all equal, the numerator of the test statistic
will be close to zero, and so performing a left tail test wouldn't show anything.
Tukey Test
The Tukey test is only usable when the sample sizes are the same.
The Critical Value is looked up in a table. It is Table N in the Bluman text. There are
actually several different tables, one for each level of significance. The number of
samples, k, is used as an index along the top, and the degrees of freedom for the within
group variance, v = N-k, are used as an index along the left side.
The test statistic is found by dividing the difference between the means by the square
root of the ratio of the within group variance and the sample size:
q = (xbar_i - xbar_j) / sqrt( s_w^2 / n )
Reject the null hypothesis if the absolute value of the test statistic
is greater than the critical value (just like the linear correlation
coefficient critical values).
Stats: Two-Way ANOVA
Assumptions
The populations from which the samples were obtained must be normally or
approximately normally distributed.
The samples must be independent.
The variances of the populations must be equal.
The groups must have the same sample size.
Hypotheses
The null hypotheses for each of the sets are given below.
1. The population means of the first factor are equal. This is like the one-way
ANOVA for the row factor.
2. The population means of the second factor are equal. This is like the one-way
ANOVA for the column factor.
3. There is no interaction between the two factors. This is similar to performing a
test for independence with contingency tables.
Factors
The two independent variables in a two-way ANOVA are called factors. The idea is
that there are two variables, factors, which affect the dependent variable. Each factor
will have two or more levels within it, and the degrees of freedom for each factor is
one less than the number of levels.
Treatment Groups
Treatment Groups are formed by making all possible combinations of the two
factors. For example, if the first factor has 3 levels and the second factor has 2 levels,
then there will be 3x2=6 different treatment groups.
As an example, let's assume we're planting corn. The two factors are the type of seed
(3 levels) and the type of fertilizer (5 levels), so this example has 3x5 = 15
treatment groups. There are 3-1=2 degrees of freedom for the type of seed, and 5-1=4
degrees of freedom for the type of fertilizer. There are 2*4 = 8 degrees of freedom for
the interaction between the type of seed and type of fertilizer.
The data that actually appears in the table are samples. In this case, 2 samples from
each treatment group were taken.
Main Effect
The main effect involves the independent variables one at a time. The interaction is
ignored for this part. Just the rows or just the columns are used, not mixed. This is the
part which is similar to the one-way analysis of variance. Each of the variances
calculated to analyze the main effects is like a between variance.
Interaction Effect
The interaction effect is the effect that one factor has on the other factor. The degrees
of freedom here is the product of the two degrees of freedom for each factor.
Within Variation
The Within variation is the sum of squares within each treatment group. You have one
less than the sample size (remember all treatment groups must have the same sample
size for a two-way ANOVA) for each treatment group. The total number of treatment
groups is the product of the number of levels for each factor. The within variance is
the within variation divided by its degrees of freedom.
F-Tests
There is an F-test for each of the hypotheses, and the F-test is the mean square for
each main effect and the interaction effect divided by the within variance. The
numerator degrees of freedom come from each effect, and the denominator degrees of
freedom is the degrees of freedom for the within variance in each case.
It is assumed that main effect A has a levels (and A = a-1 df), main effect B has b
levels (and B = b-1 df), n is the sample size of each treatment, and N = abn is the total
sample size. Notice the overall degrees of freedom is once again one less than the total
sample size.
Source              SS      df                     MS       F
Main Effect A       given   a-1                    SS / df  MS(A) / MS(W)
Main Effect B       given   b-1                    SS / df  MS(B) / MS(W)
Interaction A*B     given   (a-1)(b-1)             SS / df  MS(A*B) / MS(W)
Within              given   N - ab = ab(n-1)       SS / df
Total               sum     N - 1 = abn - 1
Summary
The following results are calculated using the Quattro Pro spreadsheet. It provides the
p-values, and the critical values are for alpha = 0.05.
From the above results, we can see that the main effects are both significant, but the
interaction between them isn't. That is, the types of seed aren't all equal, and the types
of fertilizer aren't all equal, but the type of seed doesn't interact with the type of
fertilizer.
The two-way ANOVA, Example 13-9, in the Bluman text has the incorrect values in
it. The student would have no way of knowing this because the book doesn't explain
how to calculate the values.
Source of Variation SS df MS F
Sample 3.920 1 3.920 4.752
Column 9.680 1 9.680 11.733
Interaction 54.080 1 54.080 65.552
Within 3.300 4 0.825
Total 70.980 7
The student will be responsible for finishing the table, not for coming up with the sum
of squares which go into the table in the first place.
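Finishing the table is just MS = SS / df and F = MS / MS(Within); this sketch reproduces the F values above from the given sums of squares and degrees of freedom:

```python
# Completing a two-way ANOVA table: SS and df are given (from the
# table above); MS and F are computed from them.
rows = {                      # source: (SS, df)
    "Sample":      (3.920, 1),
    "Column":      (9.680, 1),
    "Interaction": (54.080, 1),
    "Within":      (3.300, 4),
}

ms = {src: ss / df for src, (ss, df) in rows.items()}
ms_within = ms["Within"]      # 0.825

# Each effect's F is its mean square over the within mean square.
for src in ("Sample", "Column", "Interaction"):
    F = ms[src] / ms_within
    print(src, round(ms[src], 3), round(F, 3))
```

The printed F values (4.752, 11.733, 65.552) match the table above.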