
Part I
If data is quantitative, it may be useful to find the mean and standard deviation (age, GPA); if data is
categorical (qualitative), then even if it is numerical, finding mean and standard deviation wouldn’t
make sense (zip code, area code), as it is simply used to classify.
Histogram – counts displayed horizontally or vertically, may be used for larger ranges
o Advantages: can see mode and if more than one mode, shape clear, number of values in data set
countable, can see gaps
o Disadvantage: unable to tell if extreme values are outliers, individual values cannot be recovered
Dotplot - similar to histogram, but with individual values available (used for reasonably small
range)
Stem-and-leaf plot
o Advantage: original data values available, shape clear, range easily identifiable, you can
calculate mean and standard deviation; still able to identify median, IQR, outliers, but not as
quickly as with a boxplot
o Disadvantage: need to know how to organize most effectively in order to see key features of the
distribution
Boxplot
o Advantage: outliers are easily identifiable, median is easily identifiable, numerical values of
range and IQR are clear, good for comparing two distributions, skewness/symmetry clear
o Disadvantage: unable to discern if data includes more than one mode, individual values cannot
be recovered, unable to see gaps in the data
When comparing two distributions, be sure to comment on shape (symmetric/skewed, modes in any
display except boxplots, etc), center (which median is higher, use mode for histograms), and spread
(larger range, larger IQR) and note any gaps or outliers (and where they are)
o NEVER say that a distribution is “skewed” – it is either skewed right or skewed left.
o NEVER claim that a population follows a Normal model unless you are told that it is – if the
distribution is reasonably unimodal and symmetric, then you can say “because a histogram of the
data is reasonably unimodal and symmetric, a Normal model is appropriate.”
o “the range is more than twice as large as” (ratio) rather than “15 grams larger” (difference)
o “the median of A (6ft) is higher than the median of B (4 ft)” add values for more impact
o while you don’t know the mean from a boxplot, you can determine if it is near the median
(symmetric data) or “pulled” in the direction of the skew.
Data transformations
o Adding a scalar changes measures of center (mean, median, mode), but not measures of spread
(standard deviation, range, IQR)
o Multiplying by a value changes both measures of center and measures of spread
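As a quick sanity check, the two transformation rules above can be verified in a few lines of Python (the data values here are made up for illustration):

```python
import statistics

# Illustrative data set (not from the notes)
data = [60, 70, 80, 90, 100]
mean0 = statistics.mean(data)
sd0 = statistics.pstdev(data)

# Adding a constant shifts the center but leaves the spread alone
shifted = [x + 5 for x in data]

# Multiplying rescales both the center and the spread
scaled = [2 * x for x in data]

print(statistics.mean(shifted) - mean0)   # center shifts by the constant (5)
print(statistics.pstdev(shifted) - sd0)   # spread is unchanged (0)
print(statistics.mean(scaled) / mean0)    # center is doubled
print(statistics.pstdev(scaled) / sd0)    # spread is doubled
```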
Normal model – family of models for unimodal and symmetric distributions.
z-scores
o normalcdf if using z-score to calculate probability or percentile; invNorm if using percentile to
find z-score.
o You are allowed to use calculator functions, but please be mindful of proper notation. Avoid
“calculator speak”
▪ Incorrect: normalcdf(-1, 999) = .8413 → Correct: P(z > -1) = .8413 OR 1 – P(z < -1) = .8413
▪ Incorrect: invNorm(.95) = 1.645 → Correct: the z-score associated with the 95th percentile is 1.645
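In Python, the standard library's `statistics.NormalDist` plays the role of normalcdf and invNorm; a minimal sketch reproducing the two correct statements above:

```python
from statistics import NormalDist

z = NormalDist()  # the standard Normal model N(0, 1)

# normalcdf(-1, 999) on the calculator is P(z > -1)
p_right = 1 - z.cdf(-1)

# invNorm(.95) is the z-score for the 95th percentile
z_star = z.inv_cdf(0.95)

print(round(p_right, 4))  # 0.8413
print(round(z_star, 3))   # 1.645
```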

Any time you recognize that you have a mean and a standard deviation, and a Normal model applies,
you MUST denote the situation as Normal and clearly identify µ and σ. You can use the N(µ, σ)
notation. And don’t forget to declare any variables used.
Part II
Scatterplots (ch 7)
o Describe FORM, STRENGTH, DIRECTION (“it has a moderately strong non-linear association”)
and discuss scatterplot in context.
o Correlation coefficient is r; no units; -1 ≤ r ≤ 1; if |r| is close to 0, then little or no linear association; if |r|
is close to 1, then strong linear association → a line will fit well (but check that the data are actually linear)
o Correlation always refers to how “linear” something is: you can have a strong correlation with little
association (Bozo), and you can have a strong association with little correlation (brownie)
o Lurking variables affect both explanatory and response at the same time; “there is a positive linear
association between cell phone use and life expectancy” Should we use cell phones constantly in
order to live longer? No, there is a lurking variable: economic condition. General economic
condition will allow for ownership of technology as well as access to better health care.
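The correlation coefficient r can be computed directly from the definition; a minimal Python sketch with made-up, nearly linear data:

```python
import math

# Illustrative data (not from the notes)
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# r = sum((x - x_bar)(y - y_bar)) / sqrt(Sxx * Syy)
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
sxx = sum((xi - x_bar) ** 2 for xi in x)
syy = sum((yi - y_bar) ** 2 for yi in y)
r = sxy / math.sqrt(sxx * syy)

# r has no units and always falls between -1 and 1
print(round(r, 3))  # close to 1 for this nearly linear data
```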
Least squares regression line (ch 8)
o Regression: each predicted y tends to be closer to the mean than its associated x was
o Moving any number of standard deviations in x moves r times that number of standard deviations in
y
o When giving the equation of a regression line, you can use words or use x and y; if you use x-and-y,
be sure to declare your variables.
o When reading from a computer output, y-intercept could be “constant” or “intercept;”
o Slope: each additional x-unit of x-variable is associated with “slope#” additional y-units of y-variable
▪ Predicted fuel economy = 32 – .1(speed in mph): “each additional mph of speed is associated
with a loss of .1 mpg of fuel economy.”
o Residual: observed – expected or e = y – y-hat
o Your calculator creates a list of residuals, but only AFTER you run the regression (STAT, CALC,
8:LinReg)
o A residual plot should “have an equal amount of scatter throughout with no obvious bends or
outliers” if so, linear regression is appropriate
▪ residual plot with a bend/curve negates linearity → linear regression IS NOT appropriate
▪ residual plot with spread changing is inconclusive → linear regression may not be
appropriate
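The slope, intercept, and residuals (e = y – y-hat) can be computed by hand; a sketch in Python with made-up speed/fuel-economy data, not the values from the notes:

```python
# Least-squares regression by hand (illustrative data)
speeds = [20, 30, 40, 50, 60]        # mph
mpg = [30.1, 29.0, 28.2, 26.9, 26.0]  # fuel economy

n = len(speeds)
x_bar = sum(speeds) / n
y_bar = sum(mpg) / n

# slope = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2)
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(speeds, mpg)) \
     / sum((x - x_bar) ** 2 for x in speeds)
b0 = y_bar - b1 * x_bar  # the line passes through (x_bar, y_bar)

# residual = observed - predicted
residuals = [y - (b0 + b1 * x) for x, y in zip(speeds, mpg)]

print(round(b1, 3))                    # -0.103 mpg per mph
print(abs(sum(residuals)) < 1e-9)      # True: least-squares residuals sum to 0
```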
Analyzing scatterplots and linear regression (ch 9)
o Extrapolating is a prediction using an x-value beyond the range of x-values in our data set. It is
always dubious, especially when used to predict future trends
o Interpretation of R2 in context: “92% of the variation in fuel economy is accounted for by a linear
regression using speed (in mph) as the explanatory variable [to predict fuel economy (in mpg)].”
o An outlier with a high residual has an x-value close to x-bar, but is much higher (or lower) than the
rest of the data. Typically does NOT affect the slope (and is therefore not influential); likely affects the y-intercept
o An outlier has high leverage if it has an x-value far from x-bar:
▪ If “in line” with a linear pattern, it will strengthen the correlation
▪ If “not in line” with a linear pattern, it will weaken the correlation AND alter the slope,
making it influential
▪ Could also give a false sense of strong correlation (Bozo) and be influential
Re-expressing data (ch 10)
o Can’t re-express data that goes both up and down
▪ Too much? Try square root
▪ Too little? Try negative reciprocal of square root or negative reciprocal

Part III

Simulations (ch 11)

o Describe your simulation in a way that others could repeat the process
o Example: assign 300 dogs in equal numbers to control, glucosamine, or chondroitin: “Using a
random number generator on my calculator, I will assign each dog a unique integer from 0 – 299
(I will discard repeats.) Dogs with 0 – 99 will be the control group, dogs with 100 – 199 will be
the glucosamine group, and dogs with 200 – 299 will be the chondroitin group.”
o Be able to write a conclusion and interpret the result of your simulation. “Based on my
simulation, I would see 4 out of 10 people on a cell phone while driving about 4% of the time if
the true rate is actually 12%. Because 4% is so rare, I doubt the legislator’s claim that cell phone
usage while driving is only 12%, and I believe it must be higher than that.”
o Know when it is appropriate to return multiple integers at one time (when there is a designated
number to respond – often true with proportions) and when you should return one integer at a
time (always works, but may be inefficient in many cases.)
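The cell-phone scenario above can be sketched as a simulation in Python; the 12% claimed rate and samples of 10 drivers come from the example, while the trial count and seed are arbitrary choices:

```python
import random

# Simulate many samples of 10 drivers, assuming the claimed 12% rate is true
random.seed(1)          # arbitrary seed for reproducibility
TRIALS = 10_000         # arbitrary number of repetitions
claimed_rate = 0.12

at_least_4 = 0
for _ in range(TRIALS):
    on_phone = sum(1 for _ in range(10) if random.random() < claimed_rate)
    if on_phone >= 4:
        at_least_4 += 1

# Estimated P(4 or more of 10 drivers on a phone) if the true rate is 12%;
# a small result here is what casts doubt on the legislator's claim
print(at_least_4 / TRIALS)
```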

Survey design (ch 12)
o We want our sample to be representative. The best way to achieve this is randomization. A
simple random sample (SRS) means that each sample of size n has an equal chance of being
selected.
o When we believe there are different groups with potentially different opinions (men vs women or
seniors vs freshmen), we can use a stratified random sample. This can be done by choosing
the same number from each stratum (homogeneous group), but it is always correct to sample
proportionately within strata (e.g. 40% of each sub-group)
o Cluster samples are very practical – we do this when we believe that we have a heterogeneous
group (should be representative). We might sample a homeroom because it is easy to do, and
should represent a variety of opinions on, say, cafeteria food.
o Systematic sampling chooses every 5th person (for example.) If this is done, one must randomly
select which of the first 5 people will be surveyed
o Convenience sampling goes with what is easiest. This can appear similar to cluster sampling.
However, you know who you plan to ask with a cluster sample (homeroom, parish,
neighborhood) and can evaluate how likely it is to be representative; convenience (first table of
people you see, first 100 people in a mall) typically means less thought was given to the
prospective sample
o Volunteer sample: bad! No control over sampling frame; voluntary response bias. (basically a
form of undercoverage bias – some portion of population not sampled at all or has little
representation). Examples would be surveys that pop up on the internet or questionnaires
o Nonresponse bias: those who do not respond may differ in some important way from those who
do respond. It is not possible to tell what the respondents might have said.
o Response bias: anything in the survey design that influences the responses.
▪ Respondent tries to please the interviewer
▪ Respondent fears repercussions for answers
▪ Wording bias – worded in a way that may elicit more favorable (or unfavorable)
responses
o When identifying bias in a sample, remember that you cannot simply point out the bias, but
should also mention how the results will be affected

▪ “This would be undercoverage bias because the pollsters would miss all the people not at
home during the day. Those not home during the day are more likely to have jobs, and
may be more inclined to favor the candidate who is looking to reduce income taxes.”

Studies (ch 13)
o If no treatment is imposed on the participants, it is an observational study
▪ Retrospective: events already happened and researchers look up information
▪ Prospective: researchers collect data as it becomes available
o “An experiment randomly assigns treatments to subjects” (or subjects/participants/ experimental
units to treatments)
o replication means that treatments are given to a number of subjects
o the best experiments are usually: randomized, comparative, double-blind, placebo controlled
o comparative typically means including a control group in order to see if results are statistically
significant
o For example, participants are given acetaminophen or ibuprofen for pain. While the results
might show that ibuprofen is marginally more effective than acetaminophen, neither drug may be
significantly better than taking nothing at all. In that case, it would have helped to include a
placebo group.
o those who influence results or evaluate results should be blinded
o factor also known as the explanatory variable; this is what the researcher introduces to and/or
manipulates for the subject or experimental unit;
▪ a factor can have more than one level
▪ different combinations of factors at various levels are known as the treatments
o response variable: measured at the end of the experiment; in well-designed experiments,
observed changes in the response variable can be attributed to the treatment
o a placebo is a treatment known to have no effect
o placebo effect: tendency of human subjects to show a response even when administered a
placebo.
o Matching: pair subjects that are similar in ways not under study (similar socioeconomic
background, similar gender and age, etc)
o Blocking: grouping subjects together to isolate the variability attributable to the blocking variable;
the experiment is performed within each block, and results are compared within each block
▪ Block 1: women who take estrogen
▪ Block 2: women who do not take estrogen
o Confounding variable: when the levels of one factor are associated with the levels of another
factor, so their effects cannot be separated; researchers cannot determine if any observed
measurements of the response variables are due to the treatment or to the confounding variable.
Part IV

Probability (ch 14/15)
o If you can say “or” you add (then subtract any overlap); if you can say “and” you multiply (when
the events are independent)
o To find the probability that “at least one happens,” it is often easier to do “1 – probability that it
never happens”, e.g. roll a 6-sided die 5 times and find the probability of rolling at least one 4:
1 – (5/6)^5
o When finding conditional probability, it helps to use a tree diagram or contingency table
o Probability is a number between 0 and 1 inclusive

Probability models (ch 16)
o Random variable: based on a random event -- can be continuous (take on any value in a range) or
discrete (countable)
o Expected value: multiply each possible value by the probability that it occurs, and find the sum.
This is in your formula chart, as is standard deviation.
o For probability models, you can put values in L1 and associated probabilities in L2, then use
1VarStats L1, L2
o To find standard deviations, work with variance instead: (omit constants, square everything,
always add, take the square root). You will not find this in your formula chart.
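The expected-value and standard-deviation recipe above (the same calculation 1VarStats L1, L2 performs) can be sketched in Python with a made-up probability model:

```python
import math

# A made-up discrete probability model
values = [0, 1, 2, 5]
probs = [0.4, 0.3, 0.2, 0.1]   # must sum to 1

# Expected value: multiply each value by its probability and sum
mean = sum(v * p for v, p in zip(values, probs))

# Variance: probability-weighted squared deviations; SD is its square root
variance = sum((v - mean) ** 2 * p for v, p in zip(values, probs))
sd = math.sqrt(variance)

print(round(mean, 4))  # 1.2
print(round(sd, 4))    # 1.4697
```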

Bernoulli trials (ch 17)
o Bernoulli trials: success or failure, independent (probability never changes)
o Geometric: when you’re looking for how long it will take to achieve a success
▪ If you have a 20% chance of making a basketball shot, what is the probability of making your
first shot on the 8th try? (.8)^7(.2)
o Binomial: when you have a given number of trials and you want to know the probability of a
certain number of “successes”
▪ If you have a 20% chance of making a basketball shot, what is the probability of making
exactly 6 out of 10 shots? (10 C 6)(.2)^6(.8)^4 OR binompdf(10, .2, 6)
▪ If you have a 20% chance of making a basketball shot, what is the probability of making at
most 6 out of 10 shots? binomcdf(10, .2, 6)
▪ If you have a 20% chance of making a basketball shot, what is the probability of making at
least 6 out of 10 shots? 1 - binomcdf(10, .2, 5) OR binomcdf(10, .8, 4)
o You can approximate with a Normal model if np ≥ 10 and nq ≥ 10
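The four shooting calculations above can be reproduced in Python with `math.comb` standing in for the calculator's binompdf/binomcdf:

```python
import math

# Each shot is a Bernoulli trial with p = 0.2 (q = 0.8)
p, q = 0.2, 0.8

# Geometric: first success on the 8th try = (.8)^7 (.2)
p_first_on_8 = q**7 * p

# Binomial: exactly 6 of 10, i.e. binompdf(10, .2, 6)
p_exactly_6 = math.comb(10, 6) * p**6 * q**4

# At most 6 of 10, i.e. binomcdf(10, .2, 6)
p_at_most_6 = sum(math.comb(10, k) * p**k * q**(10 - k) for k in range(7))

# At least 6 of 10 = 1 - binomcdf(10, .2, 5)
p_at_least_6 = 1 - sum(math.comb(10, k) * p**k * q**(10 - k) for k in range(6))

print(round(p_first_on_8, 4))  # 0.0419
print(round(p_exactly_6, 4))   # 0.0055
```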

Part V

Chapter 18/17: Sampling Distributions

How many samples do you need for inference? ONE. You do not actually take a very large number of
samples.
Qualitative (categorical) data
If you were to take many samples, certain conditions must be met.
o Independence: The sampled values should be independent of each other. One [context] should
not affect another [context]
o Randomization: Ideally, SRS. If not, we need to be confident that the sample was not biased
and that it was representative of the population.
o 10% Condition: If the sampling has not been made with replacement, the sample size, n, must
be no larger than 10% of the population. (population > 10n)

o Success/Failure Condition: The sample size needs to be big enough so that the number of
successes and the number of failures are at least 10. np = ___ ≥ 10 and n(1 – p) = ___ ≥ 10
A sampling distribution for a single proportion (the imagined histogram of the proportions from all
possible samples) is shaped like a Normal model.

The mean of the sampling distribution of p̂ is p, the true proportion of [context] in the population.
The standard deviation of the sample proportion is SD(p̂) = √( p(1 − p) / n )
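A sketch of these two facts in Python, with an assumed true proportion p = 0.3 and sample size n = 100 (both values chosen for illustration):

```python
import math

# Mean and SD of the sampling distribution of p-hat
p, n = 0.3, 100

mean_p_hat = p                         # centered at the truth
sd_p_hat = math.sqrt(p * (1 - p) / n)  # sqrt(p(1-p)/n)

print(mean_p_hat)          # 0.3
print(round(sd_p_hat, 4))  # 0.0458
```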

Quantitative data

The sampling distribution of a mean has nearly the same conditions as that of a proportion; the
Success/Failure Condition is replaced by the Nearly Normal Condition.
o Independence: The sampled values should be independent of each other. One [context] should
not affect another [context]
o Randomization: Ideally, SRS. If not, we need to be confident that the sample was not biased
and that it was representative of the population.
o 10% Condition: If the sampling has not been made with replacement, the sample size, n, must
be no larger than 10% of the population. (population > 10n)
o Nearly Normal Condition: establish that the data comes from a population that is Normal OR n
≥ 40 (and cite the Central Limit Theorem)
The Central Limit Theorem (CLT) tells us that the mean of a random sample has a sampling distribution
whose shape can be approximated by a Normal model. The larger the sample, the better the
approximation will be.
The mean of the sampling distribution of x̄ is µ, the true mean of [context] in the population.

The standard deviation of the sampling distribution is σ/√n, written σ(x̄) = SD(x̄) = σ/√n
The random sample can come from a population whose distribution may be uniform, skewed, or
multimodal, but the shape of the sampling distribution is approximately Normal as long as the sample
size is large enough. “Large enough” is typically a size of n ≥ 40.
If the random sample comes from a population whose distribution is Normal (or approximately Normal),
then the sample size can be small.
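The σ/√n formula is a one-liner; a sketch with assumed values σ = 12 and n = 36:

```python
import math

# SD of the sampling distribution of the sample mean: sigma / sqrt(n),
# with an assumed population sigma = 12 and sample size n = 36
sigma, n = 12, 36
sd_x_bar = sigma / math.sqrt(n)
print(sd_x_bar)  # 2.0
```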

Chapter 19/18: Confidence Intervals for single samples of categorical data

A confidence interval is a range of proportion values that we believe, with some level of confidence C,
contain the true proportion of (context) in a population. Remember, we believe the population
proportion is captured in the interval, but it does not matter if it is in the center of our interval or at one
of the extreme values.
The standard error of the sample proportion is SE(p̂) = √( p̂(1 − p̂) / n )
Why error? In addition to natural variability, we are incurring additional variability by using the sample
proportion (rather than the true population proportion) in our calculation.

To find the critical value of z* for a C% confidence interval, find the z-score associated with the C + ½
(100 – C) percentile. Use z-table or invNorm on calculator.
For a fixed confidence level increasing the sample size decreases the margin of error. For a fixed margin
of error, increasing the sample size increases the level of confidence.
For a fixed confidence level, increasing the sample size by a factor of x results in a margin of error 1/√x
as large as the original.
There is nothing about a confidence interval that even hints at probability, so don’t ever use the word
probability in relation to a confidence interval. (A population proportion is either in the interval, or it’s
not.)
Creating and Reporting a Confidence Interval
Conditions and assumptions
o Data values should be independent of each other. One [context] should not affect another
[context]
o Randomization. Ideally, SRS. If not, assume it is representative.
o population > 10n
o np̂ = ___ ≥ 10 and n(1 – p̂) = ___ ≥ 10
Mechanics
o State which “test” you are using: one proportion z interval, or provide the formula
p̂ ± z*·SE(p̂), where SE(p̂) = √( p̂(1 − p̂) / n )
o Give values for n, z*, and p̂ (or, alternately, n and the number of “successes” in the sample)
o Give the interval: (low, high)
o Use of calculator is completely acceptable!
Conclusion
o “Based on the sample, I am _______% confident that the true (population) proportion of
[context] is between ____ and ______.”
Interpretation
o “If we took a very large number of random samples of size n, _____% of the samples would
produce _____% confidence intervals that contain the true (population) proportion of [context].”
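The mechanics above can be sketched in Python with made-up data (120 successes in 200); note how z* is found from the C + ½(100 – C) percentile described earlier:

```python
import math
from statistics import NormalDist

# One-proportion z-interval (made-up data)
successes, n = 120, 200
p_hat = successes / n

conf = 0.95
# z* is the z-score of the C + (1/2)(100 - C) percentile, the 97.5th here
z_star = NormalDist().inv_cdf(conf + (1 - conf) / 2)

se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - z_star * se, p_hat + z_star * se

print(round(z_star, 2))               # 1.96
print(round(low, 4), round(high, 4))  # the interval (low, high)
```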
Chapter 20/19: Hypothesis Tests for single samples of qualitative (categorical) data

The null hypothesis is always the presumed or widely accepted value; it is the “status quo” or “nothing-unusual-here” value, the “presumed innocent” status. Think null = “dull”
The alternate hypothesis is the “controversial” value, the “muckraker” theory, and is the one that
someone is challenged to prove.
Use a two-tailed test if you are asked to find out if the proportion of [context] differs from some alleged
value. (“ Is this evidence of a change?”)
Use a one-tailed test when you have reason to believe that the proportion of [context] is greater (or less
than) p0 .
The standard deviation of the sample proportion is SD(p̂) = √( p0(1 − p0) / n )
Testing a Hypothesis and Reporting a Conclusion

Give hypotheses, a null hypothesis H0, and an alternate hypothesis, HA. Use of notation is fine, but if
asked to state the hypotheses, then you should also write them out in words.
Conditions and assumptions (same ones used for the confidence interval)
o Should be independent of each other. One [context] should not affect another [context]
o Randomization. Ideally, SRS. If not, assume it is representative.
o population > 10n
o np0 = ___ ≥ 10 and n(1 – p0) = ___ ≥ 10
* remember: you are using p0 as the basis of your hypothesis, so be sure you use p0 and not p̂.
Mechanics
o State which “test” you are using: one proportion z test, or provide the formula
z = (p̂ − p0) / SD(p̂), where SD(p̂) = √( p0(1 − p0) / n )

o Give values for n and p̂ (or, alternately, n and the number of “successes” in the sample)
o Give P-value in format P(z > z-score) = P-value
o Use of calculator is completely acceptable, and is encouraged.
Conclusion
o With a P-value of P-value, I fail to reject the null hypothesis at the α = .05 significance level.
There is not enough evidence to
▪ support (state HA)
▪ suggest that the true population proportion of [context] is statistically different from (or
higher than, or lower than) hypothesized value of p0 .
o With a P-value of P-value, I reject the null hypothesis at the α = .05 significance level. The
evidence suggests that (state HA)
Interpretation
o If the null hypothesis is true and (state null in context), the P-value is the probability of observing a
sample proportion of p̂ or larger (or lower, or one more unlike p̂) due to natural sampling
variability.
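A sketch of the one-proportion z-test mechanics in Python with made-up data (H0: p = .5 versus HA: p > .5, with 60 successes in 100 trials):

```python
import math
from statistics import NormalDist

# One-proportion z-test (made-up data)
p0 = 0.5
successes, n = 60, 100
p_hat = successes / n

sd = math.sqrt(p0 * (1 - p0) / n)   # note: built from p0, not p-hat
z = (p_hat - p0) / sd
p_value = 1 - NormalDist().cdf(z)   # one-tailed: P(z > z-score)

print(round(z, 2))        # 2.0
print(round(p_value, 4))  # 0.0228
```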

Chapter 21: Errors, Power, etc

A Type I error occurs if the null hypothesis is true and we mistakenly reject it.
A Type II error occurs if the null hypothesis is false and we mistakenly fail to reject it.
If H0 is true:
▪ Reject H0: incorrect decision (Type I error, probability α)
▪ Fail to reject H0: correct decision (probability 1 – α)
If H0 is false:
▪ Reject H0: correct decision (probability 1 – β, the power of the test)
▪ Fail to reject H0: incorrect decision (Type II error, probability β)

We determine α, the probability of committing a Type I error (rejecting the null hypothesis when it was,
in fact, true). This is set prior to the actual test. The value is generally α = .05, but we could also
choose based on which type of error we would most like to avoid.

We are not able to assign β, the probability of committing a Type II error (failing to reject the null
hypothesis when it was, in fact, false).
The power of the test is the probability that the hypothesis test will correctly reject the null hypothesis
when it is false.
In order to find the power of the test, we would need to know the true proportion value of the
population. Since we usually do not know that, then we usually will not know the power.
Effect size is the difference between the null hypothesis value p0 and true population proportion. Again,
we usually do not know this since we usually do not know the true population proportion.

Increasing the sample size increases the power of the test and decreases β.
Increasing the sample size may or may not affect α. It depends on what you hold constant.
o If you adopt a fixed alpha level, then α remains the same regardless of sample size.
o If you adopt a fixed decision rule (I'll conclude the new type of clay is better if no more than 1 of
the pieces break when fired), then increasing sample size reduces α = P(Type I error).
You may be asked to state, in context, what committing Type I and Type II errors would be. This
statement is based on which hypothesis is actually true, and what decision you mistakenly made.
(Remember, this is not a mistake due to carelessness, but a mistake due to drawing an unusual
sample.)
A separate concern of Type I and Type II errors would be the consequences the incorrect decision
carries.
The severity of a Type of error depends on the context of the scenario – we cannot always assume that
Type I is more serious than Type II (or vice versa)
Once you have made the decision to reject (or fail to reject), you are either right, or you have committed
an error. However, there is only one possible type of error for each decision – it would be wise to know
which is which!
Consistency is key: Once you have said that you reject the null hypothesis, you are suggesting that there
IS evidence of the alternative hypothesis, and should state what that is. “Reject” is not consistent with
phrases such as “there is no evidence.”
We may find evidence to support the alternate hypothesis, but we never ever prove that the null
hypothesis is true. When we fail to reject the null hypothesis, we have retained it as plausible.

Chapter 22: Confidence Intervals for two samples of qualitative (categorical) data

A confidence interval for two proportions is always about the difference between two proportions. If
two homogeneous sample populations had equal proportions of [context], then their difference would be
zero.
If zero is included in a confidence interval for differences, then one group could have a higher OR lower
proportion of [context] than the other; if zero is not included, it would appear that one group has a higher
(if positive values) or lower (if negative values) proportion of [context] than the other.

The standard error of the difference in sample proportions is
SE(p̂1 − p̂2) = √( p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2 )
Creating and Reporting a Confidence Interval

Conditions and assumptions
o [parameter] [context] of group 1 is independent of [parameter] [context] of group 2
o Should be independent of each other. One [context] should not affect another [context]
o Randomization. Ideally, SRS. If not, assume it is representative.
o population 1 > 10n1 and population 2 > 10n2
o n1p̂1 = ___ ≥ 10 and n1(1 − p̂1) = ___ ≥ 10; n2p̂2 = ___ ≥ 10 and n2(1 − p̂2) = ___ ≥ 10
Mechanics
o State which “test” you are using: two proportion z interval, or provide the formula
(p̂1 − p̂2) ± z*·SE(p̂1 − p̂2), where SE(p̂1 − p̂2) = √( p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2 )
o Give values for z*, n1, p̂1, n2, p̂2 (or, alternately, n1 and the number of “successes” in the sample
of group 1 and n2 and the number of “successes” in the sample of group 2)
o Give the interval. (low, high)
o Use of calculator is completely acceptable, and is strongly encouraged for two samples.
Conclusion
o “I am _______% confident that the true difference in proportion of [context] between [group 1
and group 2] is between ____ and ______.”
o “I am _______% confident that the true (population) proportion of [context] [group 1] is
between ____% and ______% higher than the true (population) proportion of [context] (group
2)”
Interpretation
o If we took a very large number of samples of size n1 for group 1 and n2 for group 2, ______%
of the samples would produce intervals that contained the true difference in proportion of
[context] for group 1 and group 2.
Testing a hypothesis using a confidence interval
o Since zero is included in my C % confidence interval, there does not appear to be a difference
in the population proportion of [context] for group 1 and group 2.
o Since zero is not included in my C % confidence interval, there does appear to be a difference
in the population proportion of [context] for group 1 and group 2. It appears that group 1 has a
higher (or lower) proportion of [context] than group 2.
*As with hypothesis testing, it is important to note what value you deemed statistically significant.
When you mention that zero is or is not included in your confidence interval, be sure to give the
confidence level, as done above.
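A sketch of the two-proportion z-interval in Python with made-up counts; checking whether zero falls inside the interval mirrors the interpretation above:

```python
import math
from statistics import NormalDist

# Two-proportion z-interval (made-up counts)
x1, n1 = 45, 100   # group 1: successes, sample size
x2, n2 = 30, 100   # group 2
p1, p2 = x1 / n1, x2 / n2

z_star = NormalDist().inv_cdf(0.975)  # 95% confidence
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
low = (p1 - p2) - z_star * se
high = (p1 - p2) + z_star * se

# Zero is outside this interval, so the groups' proportions appear to differ
print(round(low, 4), round(high, 4))
```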

Chapter 22: Hypothesis Test for two samples of qualitative (categorical) data
A hypothesis test for two proportions is always about the difference between two proportions. If two
homogeneous sample populations had equal proportions of [context], then their difference would be
zero.
Creating and Reporting a Test of Significance

Hypotheses
o The null hypothesis is almost always H0: p1 – p2 = 0
o The alternative hypothesis is that the difference is >0, <0 or ≠ 0.

Conditions and assumptions
o [parameter] [context] of group 1 is independent of [parameter] [context] of group 2
o Should be independent of each other. One [context] should not affect another [context]
o Randomization. Ideally, SRS. If not, assume it is representative.
o population 1 > 10n1 and population 2 > 10n2
o n1p̂1 = ___ ≥ 10 and n1(1 − p̂1) = ___ ≥ 10; n2p̂2 = ___ ≥ 10 and n2(1 − p̂2) = ___ ≥ 10
* It’s ok to use p̂1 and p̂2 in this instance. If those values are not ≥ 10, try using p̂pooled.

Mechanics
o State which “test” you are using: two proportion z test, or provide the formula
z = ( (p̂1 − p̂2) − 0 ) / SEpooled(p̂1 − p̂2),
where SEpooled(p̂1 − p̂2) = √( p̂pooled(1 − p̂pooled)/n1 + p̂pooled(1 − p̂pooled)/n2 )
and p̂pooled = (success1 + success2) / (n1 + n2)

o Identify which is group 1 and which is group 2; give n1, p̂1, n2, p̂2
o Give the z-score (test statistic)
o Give P-value in format P(z > z-score) = P-value
o Use of calculator is completely acceptable, and is strongly encouraged for two samples.
Conclusion
o With a P-value of P-value, I fail to reject the null hypothesis at the α = .05 significance level.
There is not enough evidence to
▪ support (state HA)
▪ suggest that the true population proportion of [context] for group 1 is statistically different
from (or higher than, or lower than) that of group 2.
o With a P-value of P-value, I reject the null hypothesis at the α = .05 significance level. The
evidence suggests that (state HA)
Interpretation
o If the null hypothesis is true and (state null in context), the P-value is the probability of observing a
difference in sample proportions of p̂1 − p̂2 or larger (or lower, or one more unlike p̂1 − p̂2) due to
natural sampling variability.

Why the pooling? Our null hypothesis is that the difference in proportion between the two groups is zero, which
essentially means that the proportion of (whatever) is equivalent for both. If that is the case, then there is no
need for two separate groups. For example, not men and women shoppers, just “shoppers”; not women who
have mammograms and women who do not, just “women with breast cancer.”
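A sketch of the pooled two-proportion z-test in Python with made-up counts:

```python
import math
from statistics import NormalDist

# Pooled two-proportion z-test (made-up counts): H0: p1 - p2 = 0, HA: p1 - p2 != 0
x1, n1 = 45, 100
x2, n2 = 30, 100
p1, p2 = x1 / n1, x2 / n2

p_pool = (x1 + x2) / (n1 + n2)   # (success1 + success2) / (n1 + n2)
se_pool = math.sqrt(p_pool * (1 - p_pool) / n1 + p_pool * (1 - p_pool) / n2)

z = ((p1 - p2) - 0) / se_pool
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed

print(round(z, 2))  # 2.19
print(round(p_value, 3))
```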
Part VI
Ch 18 part 2 and 23 – 25 concepts
1. Make sure you’ve looked at the suggested examples. The evens are all posted on Blackboard. Even if you don’t
do a problem in its entirety, just reading through the solution may be helpful. When possible, I tried to include
multiple ways to do something, such as different acceptable versions of the conclusion statement, or different plots
besides a histogram (Normal probability plot or boxplot). Tip: know what “Normal” looks like in displays besides histograms.

2. Remember that this unit is about MEANS, not proportions. It is all about µ and x̄. The only “p”
that shows up is the P-value. This also implies that all values have units (grams, minutes, degrees, etc.)
3. You do not need to memorize the formulas, but you should be able to derive them from the formula chart.
Keep in mind that µ0 is the proposed “true” or population mean. For two sample and paired data, this is
almost always 0 (as in “there is no difference”), but it doesn’t have to be.
One-sample
o Confidence interval: x̄ ± t*n–1 · (sx/√n)
o Hypothesis test: tn–1 = (x̄ − µ0) / (sx/√n)
o Sampling distribution centered around: µ
Two-sample
o Confidence interval: (x̄1 − x̄2) ± t*df · √( s1²/n1 + s2²/n2 )
o Hypothesis test: tdf = ( (x̄1 − x̄2) − 0 ) / √( s1²/n1 + s2²/n2 )
o Sampling distribution centered around: µ1 − µ2
Paired
o Confidence interval: d̄ ± t*n–1 · (sd/√n)
o Hypothesis test: tn–1 = (d̄ − 0) / (sd/√n)
o Sampling distribution centered around: µd

4. Be familiar with the concept of the CLT (Central Limit Theorem). For data whose population distribution is not
unimodal and symmetric (Think: commute time for a Mount student, which is skewed right), the sampling distribution
will be approximately Normal as long as the sample size is large enough (and the observations are independent).
If you do not have a large sample (n ≥ 30), then the population distribution must be unimodal and symmetric. If
you are not told that “the assumption of Normality is reasonable,” then you will likely be given data and you will
have to make your own histogram or Normal probability plot.
4. Understand why we use Student’s t (or, the t-distribution) rather than the Normal model. In order to use the
Normal model, we would need to know the population standard deviation, which is ! σ . Since it is unlikely that you
will know this, you will have to estimate the standard error using the sample standard deviation sx. This means that
you are using the t-distribution, which is a family of unimodal and symmetric models.
5. Understand why there is more than one t-distribution. Student’s t is based on degrees of freedom. As the
degrees of freedom increase, the t-distribution approaches a Normal model. So, the larger the sample size, the
closer the t-distribution is to the Normal model.
6. Remember that sampling distributions are all about the truth. For one proportion, they were centered around p,
and for two-proportions they are centered around p1 – p2. See the chart in #3 above for the “truth” or population
parameter.
7. Make sure you understand the following concepts from unit 5
a. Interpretation of a confidence level (“If one were to repeatedly take random samples . . .)
b. Interpretation of a confidence interval (the conclusion: “Based on my data, I am 95% confident that . . .)
c. Interpretation of a P-value (If the null hypothesis is true . . .)
d. Conclusion of test (“With a P-value of . . .)
e. Type I and Type II errors
Ch 18 part 2 and 23 – 25 sample calculations

1. From ch 23 evens: Doritos. The 95% confidence interval calculated for the average weight of a bag of Doritos
is (28.6055, 29.3611) grams.
a. If this is the only information you are given, you should be able to find x̄. → 28.9833 grams
b. If this is the only information you are given, you should be able to find ME(x̄). → 0.3778 grams
c. If I tell you that the sample size is 6, you should be able to find s_x. → 0.36 grams
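All three answers follow from the interval endpoints; here is a quick Python sketch (the t* value is read from a t-table or invT, with df = 5):

```python
import math

# 95% CI for the mean weight of a bag of Doritos, from the example above
lo, hi = 28.6055, 29.3611

xbar = (lo + hi) / 2    # the sample mean is the center of the interval
me = (hi - lo) / 2      # the margin of error is half the width

# With n = 6, df = 5; t* for 95% confidence is about 2.571 (t-table / invT)
n, t_star = 6, 2.571
s_x = me * math.sqrt(n) / t_star   # invert ME = t* * s_x / sqrt(n)
```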

2. From ch 25: Gasoline. You are asked to do a 90% confidence interval on a sample of size n = 10. While this
can be done on the calculator, you should be able to figure out the critical value (t*). → 1.833
a. Be able to do this using “invT” on your calculator.
b. Be able to do this using your formula chart.

3. From ch 23: “Doing sample size calculations by hand.” Try the one from the notes sheet. Example: If s = 7.8,
how large a sample would you need to estimate a mean to within a 2-point margin of error with 94%
confidence? → n = 57
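One common way to get there (a sketch, assuming a z-based first pass that is then refined with a t critical value; the invT value is read from a calculator):

```python
import math
from statistics import NormalDist

s, me_target, conf = 7.8, 2, 0.94

# First pass with z*: invNorm at the 97th percentile for 94% confidence
z_star = NormalDist().inv_cdf((1 + conf) / 2)     # about 1.8808
n0 = math.ceil((z_star * s / me_target) ** 2)     # z-based starting value

# Refine with a t critical value using df = n0 - 1 = 53
t_star = 1.921   # invT(0.97, 53), from a calculator or t-table
n = math.ceil((t_star * s / me_target) ** 2)
```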

4. Practice writing the Independent Groups condition. Technically, you don’t have to write the “linked” statement;
you just need to think about it. It would be acceptable to say “Independent Groups, because you can be a
junior or a senior but not both.” For paired data, make sure you explain why/how the data are paired.
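For paired data, the whole analysis runs on the differences; a minimal Python sketch with made-up pre/post scores:

```python
import math
from statistics import mean, stdev

# Made-up paired data: each subject measured twice
pre  = [72, 80, 65, 90, 70]
post = [75, 83, 64, 95, 74]

d = [b - a for a, b in zip(pre, post)]  # one difference per subject
d_bar = mean(d)
s_d = stdev(d)

# One-sample t on the differences, H0: mu_d = 0, with df = n - 1
t = (d_bar - 0) / (s_d / math.sqrt(len(d)))
```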
Part VII
Chapter 26:

Know whether a situation calls for a chi-square test of goodness-of-fit, homogeneity, or independence
(or, none of the above)

o SRS of Canadians asked to report their eye color. Is this what we would have expected based on
what statistics over the past 75 years indicate about the percentages of people with certain eye
color? Chi-squared test for goodness of fit.
o SRS of Canadians asked to report their eye color; SRS of US residents asked to report their eye
color. Is there a significant difference in the distribution of eye color between the two countries?
Chi-squared test for homogeneity.
o SRS of Canadians asked to report their eye color and hair color. Is there an association between
eye color and hair color? Chi-squared test for independence (or, association.)

Conditions/assumptions necessary for a chi-square test
o Independent data (one person’s eye color should not affect another’s)
o Random sample
o Data are counts
o All expected values are ≥ 5. (Make sure that you clearly give and label expected values, or
indicate that the smallest expected value is ≥ 5.)

 How to find the degrees of freedom for a chi-square test of goodness-of-fit: (# of categories – 1)

 How to find the df for a chi-square test of homogeneity (or independence): (rows – 1) × (columns – 1)

 Describe what happens as n increases: because small deviations are more noticeable, larger samples
make it easier to reject the null hypothesis.

 Describe what happens as degrees of freedom increase: the chi-square distribution is always skewed right
but becomes less skewed as degrees of freedom increase.

 Find the expected values for a set of data with a given null hypothesis
o Goodness-of-fit → you compute them by hand (sample size × each hypothesized proportion)
o Homogeneity or independence → enter the observed counts into matrix A, run the χ²-test, and the
calculator populates matrix B with the expected values.
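Matrix B’s contents are just (row total × column total) / grand total for each cell; a small Python sketch with invented counts:

```python
# Invented observed counts (say rows = eye colors, columns = countries)
observed = [[20, 30],
            [25, 25]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected count for each cell: (row total * column total) / grand total
expected = [[r * c / grand for c in col_totals] for r in row_totals]
```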

 Find the chi-square statistic for a goodness-of-fit test by hand: L1: obs, L2: exp, L3: (obs – exp)²/exp
[this is from the formula chart]; use 1-Var Stats to get the sum of L3 (Σx).
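The same L1/L2/L3 recipe in Python (the counts here are invented, not from the eye-color example):

```python
# L1: observed, L2: expected, L3: (obs - exp)^2 / exp
observed = [40, 30, 20, 10]
expected = [35, 35, 18, 12]   # invented expected counts

components = [(o - e) ** 2 / e for o, e in zip(observed, expected)]
chi_sq = sum(components)      # same as 1-Var Stats' sum of L3
df = len(observed) - 1        # goodness-of-fit: categories - 1
```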

 Find the P-value for any of the three tests: χ²cdf(test statistic, 999, df) → P(χ² > 8.6) = .074

 Write hypotheses for a chi-square test

o Goodness of fit
▪ H0: the distribution of Canadians’ eye color is the same as it has been for 75 years.
▪ HA: the distribution of Canadians’ eye color is different than it has been for 75 years.
o Homogeneity
▪ H0: the distribution of eye color in Can. is the same as the distrib. of eye color in the US
▪ HA: the distribution of eye color in Can. differs from the distrib. of eye color in the US
o Independence
▪ H0: There is no association between hair color and eye color.
▪ HA: There is an association between hair color and eye color.
 Given a set of hypotheses and a P-value, formulate a conclusion (analyze standardized residuals)
o Goodness of fit
▪ With a high P-value, I fail to reject . . . There is not sufficient evidence to suggest that the
distribution of Canadians’ eye color is different than it has been for 75 years.
▪ With a low P-value I reject . . . There is sufficient evidence to suggest that the distribution
of Canadians’ eye color is different than it has been for 75 years. It appears . . . (don’t
need to do standardized residuals, but you should be able to see where there was a
difference.)
o Homogeneity
▪ With a high P-value, I fail to reject . . . There is not sufficient evidence to suggest that the
distribution of eye color in Canada differs from the distribution of eye color in the US.
▪ With a low P-value, I reject . . . There is sufficient evidence to suggest that the
distribution of eye color in Canada differs from the distribution of eye color in the US.
(Generate standardized residuals and analyze them so that you can explain the difference.) It
appears that Canadians have more instances of green and blue eyes and fewer instances of
brown eyes than US residents.
o Independence
▪ With a high P-value, I fail to reject . . . There is not sufficient evidence to suggest that
there is an association between hair color and eye color.
▪ With a low P-value, I reject . . . There is sufficient evidence to suggest that there is an
association between hair color and eye color. (Generate standardized residuals and
analyze them so that you can explain the difference.) It appears that people with blond hair
are more likely to have blue eyes and less likely to have brown eyes. (See the
“independence&homogeneity” example with NYPD for analysis of standardized
residuals.)

Chapter 27:

How to write the regression equation

How to find the degrees of freedom for a confidence interval of the slope

 Conditions/assumptions necessary for inference: “The scatterplot shows a moderately strong, positive
linear association with an equal amount of scatter; the residual plot shows no obvious bends or outliers
and an equal amount of scatter; one cereal’s sodium content does not affect another’s; a histogram of
the residuals is reasonably unimodal and symmetric” or “with a large sample size of n = 50, the CLT
guarantees that the sampling distribution of the residuals can be approximated with a Normal model.”

Write hypotheses for a test of the slope
o H0: β1 = 0 (There is no linear relationship between y and x.)
o HA: β1 ≠ 0 (There is a linear relationship between y and x.)

 Find the t-ratio of the slope, given the slope and the standard error of the slope: slope / SE(slope)

 Find the standard error of the slope, given the slope and the t-ratio of the slope: slope / t-ratio

 Find the P-value of the slope, given the slope and the t-ratio of the slope:
o For positive t-ratio: tcdf(t-ratio, 999, df) × 2 → 2·P(t48 > 12.2) < .0001
o For negative t-ratio: tcdf(–999, t-ratio, df) × 2 → 2·P(t48 < –12.2) < .0001

Given a set of hypotheses and a p-value, formulate a conclusion
o With a high P-value of ____, I fail to reject H0 at α = .05. There is not sufficient evidence of a linear
relationship between y and x.
o With a low P-value of ____, I reject H0 at α = .05. There is sufficient evidence of a linear
relationship between y and x. It appears that y increases as x increases.

 Find the margin of error for a confidence interval of the slope: ME(b1) = (t* for C% confidence and df) × SE(b1)

 Find the confidence interval of the slope: (b1 – ME(b1), b1 + ME(b1))

o Based on the regression, I am 98% confident that fuel economy decreases by 6.592 MPG to
9.835 MPG as the vehicle’s weight increases by 1000 pounds.
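A sketch of the slope computations in Python; the slope, standard error, and t* below are illustrative stand-ins, not values from a specific problem:

```python
# Illustrative regression-slope summary values
b1 = -8.21       # estimated slope (e.g., MPG per 1000 lb)
se_b1 = 0.67     # standard error of the slope
t_star = 2.407   # t* for 98% confidence with df = 48 (t-table / invT)

t_ratio = b1 / se_b1            # test statistic for H0: beta1 = 0
me = t_star * se_b1             # margin of error for the slope
ci = (b1 - me, b1 + me)         # confidence interval for the slope
```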

 Identify and interpret the standard deviation of the residuals: 80.49 mg measures the typical amount of
variability in the difference between observed sodium and the amount of sodium predicted by the linear
regression using calories of cereal as an explanatory variable.

 Identify and interpret the standard error of the slope: The estimated slope of 1.3 mg of sodium per
calorie varies by about .47 mg/cal from sample to sample.

Interpret the slope and y-intercept in the context of a given situation