
NOTATION  The following notation is used on this card:

n = sample size
σ = population stdev
x̄ = sample mean
d = paired difference
s = sample stdev
p̂ = sample proportion
Q_j = jth quartile
p = population proportion
N = population size
O = observed frequency
µ = population mean
E = expected frequency

CHAPTER 3 Descriptive Measures

Sample mean: x̄ = Σx/n

Range: Range = Max − Min

Sample standard deviation:

s = √[Σ(x − x̄)² / (n − 1)]   or   s = √[(Σx² − (Σx)²/n) / (n − 1)]
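As a quick numerical check, both standard-deviation formulas above give the same result. A minimal Python sketch with made-up data (not from the card):

```python
# Numerical check of the sample mean and both standard-deviation formulas.
import math
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # made-up sample
n = len(data)

xbar = sum(data) / n                                   # sample mean

# Defining formula: s = sqrt( sum (x - xbar)^2 / (n - 1) )
s_def = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))

# Computing formula: s = sqrt( (sum x^2 - (sum x)^2 / n) / (n - 1) )
s_comp = math.sqrt((sum(x * x for x in data) - sum(data) ** 2 / n) / (n - 1))
```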

Interquartile range: IQR = Q₃ − Q₁

Lower limit = Q₁ − 1.5·IQR,   Upper limit = Q₃ + 1.5·IQR

Population mean (mean of a variable): µ = Σx/N

Population standard deviation (standard deviation of a variable):

σ = √[Σ(x − µ)² / N]   or   σ = √[Σx²/N − µ²]

Standardized variable: z = (x − µ)/σ

CHAPTER 4 Descriptive Methods in Regression and Correlation

S_xx, S_xy, and S_yy:

S_xx = Σ(x − x̄)² = Σx² − (Σx)²/n
S_xy = Σ(x − x̄)(y − ȳ) = Σxy − (Σx)(Σy)/n
S_yy = Σ(y − ȳ)² = Σy² − (Σy)²/n

Regression equation: ŷ = b₀ + b₁x, where

b₁ = S_xy / S_xx   and   b₀ = (1/n)(Σy − b₁Σx) = ȳ − b₁x̄
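A minimal sketch of the slope and intercept computations, using made-up data points:

```python
# Least-squares slope and intercept from S_xy and S_xx (made-up data).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(x)

# Computing formulas for S_xx and S_xy
Sxx = sum(xi * xi for xi in x) - sum(x) ** 2 / n
Sxy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / n

b1 = Sxy / Sxx                         # slope
b0 = sum(y) / n - b1 * sum(x) / n      # intercept: ybar - b1 * xbar
```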

Total sum of squares: SST = Σ(y − ȳ)² = S_yy

Regression sum of squares: SSR = Σ(ŷ − ȳ)² = S_xy²/S_xx

Error sum of squares: SSE = Σ(y − ŷ)² = S_yy − S_xy²/S_xx

Regression identity: SST = SSR + SSE

Coefficient of determination: r² = SSR/SST

Linear correlation coefficient:

r = [1/(n − 1)] Σ(x − x̄)(y − ȳ) / (s_x s_y)   or   r = S_xy / √(S_xx S_yy)
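The two forms of r agree; a sketch with made-up data, using the stdlib statistics module:

```python
# Check that the defining and computing forms of r coincide (made-up data).
import math
import statistics

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

Sxx = sum((xi - xbar) ** 2 for xi in x)
Syy = sum((yi - ybar) ** 2 for yi in y)
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

r_short = Sxy / math.sqrt(Sxx * Syy)

# Defining form: (1/(n-1)) * sum of cross-products, divided by s_x * s_y
r_def = (Sxy / (n - 1)) / (statistics.stdev(x) * statistics.stdev(y))
```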

CHAPTER 5 Probability and Random Variables

Probability for equally likely outcomes:

P(E) = f/N,

where f denotes the number of ways event E can occur and N denotes the total number of outcomes possible.

Special addition rule: P(A or B or C or ···) = P(A) + P(B) + P(C) + ···
(A, B, C, . . . mutually exclusive)

Complementation rule: P(E) = 1 − P(not E)

General addition rule: P(A or B) = P(A) + P(B) − P(A & B)

Mean of a discrete random variable X: µ = Σ x P(X = x)

Standard deviation of a discrete random variable X:

σ = √[Σ(x − µ)² P(X = x)]   or   σ = √[Σx² P(X = x) − µ²]

Factorial: k! = k(k − 1) ··· 2 · 1

Binomial coefficient: C(n, x) = n! / [x!(n − x)!]

Binomial probability formula:

P(X = x) = C(n, x) p^x (1 − p)^(n−x),

where n denotes the number of trials and p denotes the success probability.

Mean of a binomial random variable: µ = np

Standard deviation of a binomial random variable: σ = √[np(1 − p)]
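A sketch of the binomial formulas, with illustrative values for n and p (math.comb supplies the binomial coefficient):

```python
# Binomial coefficient, pmf, mean, and standard deviation (made-up n, p).
import math

n, p = 10, 0.3

def binom_pmf(x: int) -> float:
    # P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

mu = n * p                          # mean of a binomial random variable
sigma = math.sqrt(n * p * (1 - p))  # standard deviation

total = sum(binom_pmf(x) for x in range(n + 1))  # pmf sums to 1
```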

CHAPTER 7 The Sampling Distribution of the Sample Mean

Mean of the variable x̄: µ_x̄ = µ

Standard deviation of the variable x̄: σ_x̄ = σ/√n

CHAPTER 8 Confidence Intervals for One Population Mean

Standardized version of the variable x̄:

z = (x̄ − µ) / (σ/√n)

z-interval for µ (σ known, normal population or large sample):

x̄ ± z_{α/2} · σ/√n

Margin of error for the estimate of µ: E = z_{α/2} · σ/√n

Sample size for estimating µ:

n = (z_{α/2} · σ / E)²,

rounded up to the nearest whole number.
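A sketch of the z-interval, margin of error, and sample-size formulas; the value 1.96 for z_{α/2} (95% level) is assumed rather than computed, and the summary statistics are made up:

```python
# z-interval, margin of error, and sample-size computation for µ.
import math

xbar, sigma, n = 50.0, 8.0, 64   # made-up summary statistics
z = 1.96                         # z_{0.025}, assumed (95% confidence)

E = z * sigma / math.sqrt(n)           # margin of error
interval = (xbar - E, xbar + E)        # xbar ± z_{α/2} · σ/√n

# Sample size needed for a target margin of error, rounded up:
E_target = 1.5
n_needed = math.ceil((z * sigma / E_target) ** 2)
```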

Studentized version of the variable x̄:

t = (x̄ − µ) / (s/√n)

t-interval for µ (σ unknown, normal population or large sample):

x̄ ± t_{α/2} · s/√n

with df = n − 1.

CHAPTER 9 Hypothesis Tests for One Population Mean

z-test statistic for H₀: µ = µ₀ (σ known, normal population or large sample):

z = (x̄ − µ₀) / (σ/√n)

t-test statistic for H₀: µ = µ₀ (σ unknown, normal population or large sample):

t = (x̄ − µ₀) / (s/√n)

with df = n − 1.

CHAPTER 10 Inferences for Two Population Means

Pooled sample standard deviation:

s_p = √[ ((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2) ]

Pooled t-test statistic for H₀: µ₁ = µ₂ (independent samples, normal populations or large samples, and equal population standard deviations):

t = (x̄₁ − x̄₂) / [s_p √((1/n₁) + (1/n₂))]

with df = n₁ + n₂ − 2.

Pooled t-interval for µ₁ − µ₂ (independent samples, normal populations or large samples, and equal population standard deviations):

(x̄₁ − x̄₂) ± t_{α/2} · s_p √((1/n₁) + (1/n₂))

with df = n₁ + n₂ − 2.
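A sketch of the pooled procedures with made-up summary statistics:

```python
# Pooled standard deviation and pooled t statistic for two independent samples.
import math

n1, xbar1, s1 = 12, 25.0, 4.0   # made-up summary statistics
n2, xbar2, s2 = 15, 22.0, 5.0

sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
t = (xbar1 - xbar2) / (sp * math.sqrt(1 / n1 + 1 / n2))
df = n1 + n2 - 2
```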

Degrees of freedom for nonpooled-t procedures:

Δ = (s₁²/n₁ + s₂²/n₂)² / [ (s₁²/n₁)²/(n₁ − 1) + (s₂²/n₂)²/(n₂ − 1) ],

rounded down to the nearest integer.
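A sketch of the degrees-of-freedom computation, with made-up sample sizes and standard deviations:

```python
# Welch (nonpooled) degrees of freedom, rounded down to the nearest integer.
import math

n1, s1 = 12, 4.0   # made-up values
n2, s2 = 15, 5.0

v1 = s1**2 / n1    # s1²/n1
v2 = s2**2 / n2    # s2²/n2

df = math.floor((v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1)))
```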

Nonpooled t-test statistic for H₀: µ₁ = µ₂ (independent samples, and normal populations or large samples):

t = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂)

with df = Δ.

Nonpooled t-interval for µ₁ − µ₂ (independent samples, and normal populations or large samples):

(x̄₁ − x̄₂) ± t_{α/2} · √(s₁²/n₁ + s₂²/n₂)

with df = Δ.

Paired t-test statistic for H₀: µ₁ = µ₂ (paired sample, and normal differences or large sample):

t = d̄ / (s_d/√n)

with df = n − 1.

Paired t-interval for µ₁ − µ₂ (paired sample, and normal differences or large sample):

d̄ ± t_{α/2} · s_d/√n

with df = n − 1.
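A sketch of the paired t statistic from made-up before/after measurements:

```python
# Paired t statistic from within-pair differences (made-up data).
import math
import statistics

before = [12.0, 15.0, 11.0, 14.0, 13.0]
after  = [10.0, 14.0, 10.0, 11.0, 12.0]

d = [b - a for b, a in zip(before, after)]   # paired differences
n = len(d)
dbar = sum(d) / n                            # mean difference
sd = statistics.stdev(d)                     # s_d

t = dbar / (sd / math.sqrt(n))               # with df = n - 1
df = n - 1
```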

CHAPTER 11 Inferences for Population Proportions

Sample proportion:

p̂ = x/n,

where x denotes the number of members in the sample that have the specified attribute.

One-sample z-interval for p:

p̂ ± z_{α/2} · √[p̂(1 − p̂)/n]

(Assumption: both x and n − x are 5 or greater)

Margin of error for the estimate of p:

E = z_{α/2} · √[p̂(1 − p̂)/n]

Sample size for estimating p:

n = 0.25 (z_{α/2}/E)²   or   n = p̂_g(1 − p̂_g) (z_{α/2}/E)²,

rounded up to the nearest whole number (g = "educated guess")

One-sample z-test statistic for H₀: p = p₀:

z = (p̂ − p₀) / √[p₀(1 − p₀)/n]

(Assumption: both np₀ and n(1 − p₀) are 5 or greater)
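A sketch of the one-sample z test for a proportion, with made-up counts; it also checks the stated assumption first:

```python
# One-sample z statistic for H0: p = p0 (made-up counts).
import math

x, n = 56, 100     # made-up data
p0 = 0.5
phat = x / n

# Check the np0 >= 5 and n(1 - p0) >= 5 assumption before using the test.
ok = n * p0 >= 5 and n * (1 - p0) >= 5

z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)
```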

Pooled sample proportion: p̂_p = (x₁ + x₂)/(n₁ + n₂)

Two-sample z-test statistic for H₀: p₁ = p₂:

z = (p̂₁ − p̂₂) / √[p̂_p(1 − p̂_p)((1/n₁) + (1/n₂))]

(Assumptions: independent samples; x₁, n₁ − x₁, x₂, n₂ − x₂ are all 5 or greater)
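A sketch of the pooled proportion and two-sample z statistic, with made-up counts:

```python
# Pooled sample proportion and two-sample z statistic (made-up counts).
import math

x1, n1 = 40, 100
x2, n2 = 30, 100

p1, p2 = x1 / n1, x2 / n2
pp = (x1 + x2) / (n1 + n2)   # pooled sample proportion

z = (p1 - p2) / math.sqrt(pp * (1 - pp) * (1 / n1 + 1 / n2))
```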

Two-sample z-interval for p₁ − p₂:

(p̂₁ − p̂₂) ± z_{α/2} · √[p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂]

(Assumptions: independent samples; x₁, n₁ − x₁, x₂, n₂ − x₂ are all 5 or greater)

Margin of error for the estimate of p₁ − p₂:

E = z_{α/2} · √[p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂]

Sample size for estimating p₁ − p₂:

n₁ = n₂ = 0.5 (z_{α/2}/E)²   or   n₁ = n₂ = [p̂₁g(1 − p̂₁g) + p̂₂g(1 − p̂₂g)] (z_{α/2}/E)²,

rounded up to the nearest whole number (g = "educated guess")

CHAPTER 12 Chi-Square Procedures

Expected frequencies for a chi-square goodness-of-fit test:

E = np

Test statistic for a chi-square goodness-of-fit test:

χ² = Σ(O − E)²/E

with df = k − 1, where k is the number of possible values for the variable under consideration.

Expected frequencies for a chi-square independence test:

E = R·C/n,

where R = row total and C = column total.

Test statistic for a chi-square independence test:

χ² = Σ(O − E)²/E

with df = (r − 1)(c − 1), where r and c are the number of possible values for the two variables under consideration.
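A sketch of the chi-square independence test on a made-up 2×2 table, using E = R·C/n for each cell:

```python
# Chi-square independence test on a made-up 2x2 contingency table.
obs = [[20, 30],
       [30, 20]]

row_totals = [sum(row) for row in obs]
col_totals = [sum(col) for col in zip(*obs)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(obs):
    for j, O in enumerate(row):
        E = row_totals[i] * col_totals[j] / n   # E = R*C/n
        chi2 += (O - E) ** 2 / E

df = (len(obs) - 1) * (len(obs[0]) - 1)         # (r-1)(c-1)
```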

CHAPTER 13 Analysis of Variance (ANOVA)

Notation in one-way ANOVA:

k = number of populations
n = total number of observations
x̄ = mean of all n observations
n_j = size of sample from Population j
x̄_j = mean of sample from Population j
s_j² = variance of sample from Population j
T_j = sum of sample data from Population j

Defining formulas for sums of squares in one-way ANOVA:

SST = Σ(x − x̄)²
SSTR = Σ n_j(x̄_j − x̄)²
SSE = Σ(n_j − 1)s_j²

One-way ANOVA identity: SST = SSTR + SSE

Computing formulas for sums of squares in one-way ANOVA:

SST = Σx² − (Σx)²/n
SSTR = Σ(T_j²/n_j) − (Σx)²/n
SSE = SST − SSTR

Mean squares in one-way ANOVA:

MSTR = SSTR/(k − 1),   MSE = SSE/(n − k)

Test statistic for one-way ANOVA (independent samples, normal populations, and equal population standard deviations):

F = MSTR/MSE

with df = (k − 1, n − k).
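A sketch of one-way ANOVA via the computing formulas, on three made-up samples:

```python
# One-way ANOVA F statistic via the computing formulas (made-up samples).
samples = [
    [4.0, 5.0, 6.0],
    [6.0, 7.0, 8.0],
    [8.0, 9.0, 10.0],
]

k = len(samples)                           # number of populations
all_x = [x for s in samples for x in s]
n = len(all_x)                             # total number of observations

total = sum(all_x)
SST = sum(x * x for x in all_x) - total**2 / n
SSTR = sum(sum(s) ** 2 / len(s) for s in samples) - total**2 / n
SSE = SST - SSTR                           # one-way ANOVA identity

MSTR = SSTR / (k - 1)
MSE = SSE / (n - k)
F = MSTR / MSE                             # with df = (k - 1, n - k)
```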

CHAPTER 14 Inferential Methods in Regression and Correlation

Population regression equation: y = β₀ + β₁x

Standard error of the estimate: s_e = √[SSE/(n − 2)]

Test statistic for H₀: β₁ = 0:

t = b₁ / (s_e/√S_xx)

with df = n − 2.

Confidence interval for β₁:

b₁ ± t_{α/2} · s_e/√S_xx

with df = n − 2.

Confidence interval for the conditional mean of the response variable corresponding to x_p:

ŷ_p ± t_{α/2} · s_e √[ 1/n + (x_p − Σx/n)² / S_xx ]

with df = n − 2.

Prediction interval for an observed value of the response variable corresponding to x_p:

ŷ_p ± t_{α/2} · s_e √[ 1 + 1/n + (x_p − Σx/n)² / S_xx ]

with df = n − 2.

Test statistic for H₀: ρ = 0:

t = r / √[(1 − r²)/(n − 2)]

with df = n − 2.
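A sketch of the t statistic for H₀: ρ = 0, with made-up values of r and n:

```python
# t statistic for testing H0: rho = 0 (made-up r and n).
import math

r, n = 0.6, 27

t = r / math.sqrt((1 - r**2) / (n - 2))   # with df = n - 2
df = n - 2
```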