PROBABILITY RULES
"and" = Multiply; "or" = Sum.

Conditional probability: P(E|F) = P(E and F) / P(F), with P(F) <> 0.

Are the events mutually exclusive?
P(A and B) = 0
P(A or B) = P(A) + P(B)

Are the events independent?
P(A and B) = P(A) x P(B)
Check: does P(A|B) = P(A)?
P(A or B) = P(A) + P(B) - [P(A) x P(B)]

Are the events conditional? (Is one event dependent on the other?)
P(A and B) = P(B|A) x P(A) = P(A|B) x P(B)

For three events:
P(A or B or C) = P(A) + P(B) + P(C) - P(A and B) - P(A and C) - P(B and C) + P(A and B and C)

Sampling WITH replacement:
1) BINOMIAL
2) When n is large and p is small, use the Poisson formula or tables.
Sampling WITHOUT replacement: HYPERGEOMETRIC
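A quick numeric sketch of the union rules above; the event probabilities are made-up values, and A, B, C are assumed independent so that joint probabilities can be taken as products.

```python
# Sketch: union of events using the rules above. P(A), P(B), P(C) are
# invented, and the events are assumed independent so joints are products.
pA, pB, pC = 0.20, 0.30, 0.10

# Two independent events: P(A or B) = P(A) + P(B) - P(A)P(B)
p_a_or_b = pA + pB - pA * pB

# Three independent events (inclusion-exclusion):
p_union3 = (pA + pB + pC
            - pA * pB - pA * pC - pB * pC
            + pA * pB * pC)

# Mutually exclusive events: P(A or B) = P(A) + P(B)
p_exclusive = pA + pB

print(p_a_or_b, p_union3, p_exclusive)
```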

n! = n x (n-1) x (n-2) x ... x 2 x 1

Combinations: $_{n}C_{r} = \binom{n}{r} = \dfrac{n!}{r!\,(n-r)!}$

HYPERGEOMETRIC DISTRIBUTION (sampling without replacement)
$p(x) = \dfrac{\binom{D}{x}\binom{N-D}{n-x}}{\binom{N}{n}}, \quad x = 0, 1, \ldots, \min(n, D)$
$\mu = \dfrac{nD}{N}, \qquad \sigma^2 = \dfrac{nD}{N}\left(1 - \dfrac{D}{N}\right)\left(\dfrac{N-n}{N-1}\right)$

POISSON DISTRIBUTION
$p(x) = \dfrac{e^{-\lambda}\lambda^{x}}{x!}, \quad x = 0, 1, 2, \ldots$
$\mu = \lambda, \qquad \sigma^2 = \lambda$

BINOMIAL DISTRIBUTION
$p(x) = \binom{n}{x} p^{x}(1-p)^{n-x}, \quad x = 0, 1, \ldots, n$
$\mu = np, \qquad \sigma^2 = np(1-p)$

Where:
N = population size
n = size of sample
D = class of interest
x = number of samples that fall into the class of interest
Order of mode, median, and mean (X-bar) indicates skew: Mode - Median - X-bar (positively skewed data); X-bar - Median - Mode (negatively skewed data).
DISTRIBUTION APPROXIMATIONS
Hypergeometric (H) -> Binomial (B): when n ≤ 0.1 N.
Binomial (B) -> Poisson (P): when p ≤ 0.1 (the smaller p and the larger n, the better); if p ≥ 0.9, let p' = 1 - p (the smaller p' and the larger n, the better).
Binomial (B) -> Normal (N): when 0.1 ≤ p ≤ 0.9 and np ≥ 10.
Poisson (P) -> Normal (N): when λ ≥ 15 (the larger, the better).

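As a quick numeric check of the discrete formulas and the approximation rules above, the sketch below evaluates the hypergeometric, binomial, and Poisson pmfs with scipy.stats; the sample values (N = 200, D = 10, n = 15) are illustrative only, not from the sheet.

```python
# Sketch: evaluate the hypergeometric, binomial, and Poisson pmfs and
# compare them where the approximation rules above say they should agree.
# The numbers (N=200, D=10, n=15) are made up for illustration.
from scipy.stats import hypergeom, binom, poisson

N, D, n = 200, 10, 15          # population size, class of interest, sample size
p = D / N                      # success fraction in the population
lam = n * p                    # Poisson mean (lambda = np)

for x in range(0, 4):
    ph = hypergeom.pmf(x, N, D, n)   # sampling without replacement
    pb = binom.pmf(x, n, p)          # sampling with replacement (n <= 0.1 N here)
    pp = poisson.pmf(x, lam)         # p <= 0.1, so Poisson is also close
    print(f"x={x}: hypergeom={ph:.4f}  binom={pb:.4f}  poisson={pp:.4f}")
```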
EXPONENTIAL DISTRIBUTION
$f(x) = \lambda e^{-\lambda x}, \quad x \ge 0$
$\mu = \dfrac{1}{\lambda}, \qquad \sigma^2 = \dfrac{1}{\lambda^2}$

GAMMA DISTRIBUTION
$f(x) = \dfrac{\lambda(\lambda x)^{r-1} e^{-\lambda x}}{\Gamma(r)}, \quad x \ge 0$
$\mu = \dfrac{r}{\lambda}, \qquad \sigma^2 = \dfrac{r}{\lambda^2}$

WEIBULL DISTRIBUTION
$f(x) = \dfrac{\beta}{\delta}\left(\dfrac{x}{\delta}\right)^{\beta-1} \exp\!\left[-\left(\dfrac{x}{\delta}\right)^{\beta}\right], \quad x \ge 0$
$\mu = \delta\,\Gamma\!\left(1 + \dfrac{1}{\beta}\right), \qquad \sigma^2 = \delta^2\left[\Gamma\!\left(1 + \dfrac{2}{\beta}\right) - \Gamma\!\left(1 + \dfrac{1}{\beta}\right)^{2}\right]$

NORMAL APPROXIMATION TO THE BINOMIAL
With $\mu = np$ and $\sigma^2 = np(1-p)$:
$P(x \le a) \approx \Phi\!\left(\dfrac{a + \tfrac{1}{2} - np}{\sqrt{np(1-p)}}\right)$
$P(a \le x \le b) \approx \Phi\!\left(\dfrac{b + \tfrac{1}{2} - np}{\sqrt{np(1-p)}}\right) - \Phi\!\left(\dfrac{a - \tfrac{1}{2} - np}{\sqrt{np(1-p)}}\right)$
For the sample proportion $\hat{p} = x/n$:
$P(u \le \hat{p} \le v) \approx \Phi\!\left(\dfrac{v - p}{\sqrt{p(1-p)/n}}\right) - \Phi\!\left(\dfrac{u - p}{\sqrt{p(1-p)/n}}\right)$

Sample Size, Variable Data:
$n = \left(\dfrac{Z s}{E}\right)^{2}, \qquad E = \dfrac{Z s}{\sqrt{n}}$

Sample Size, Discrete Data (Binomial):
$n = \bar{p}\,\bar{q}\left(\dfrac{Z}{E}\right)^{2}$

Sample Size, Discrete Data (Poisson):
$n = \bar{p}\left(\dfrac{Z}{E}\right)^{2}$
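A small sketch of the three sample-size formulas above; the precision E, standard deviation s, proportion, and confidence level used below are placeholder values.

```python
# Sketch: sample-size formulas from the sheet, with made-up inputs.
from math import ceil
from scipy.stats import norm

Z = norm.ppf(0.975)      # two-sided 95% -> Z = 1.96

# Variable data: n = (Z*s/E)^2
s, E = 4.0, 1.0
n_var = ceil((Z * s / E) ** 2)

# Binomial (proportion) data: n = p*q*(Z/E)^2
p_bar, E_p = 0.2, 0.05
n_bin = ceil(p_bar * (1 - p_bar) * (Z / E_p) ** 2)

# Poisson (count) data: n = p*(Z/E)^2
n_poi = ceil(p_bar * (Z / E_p) ** 2)

print(n_var, n_bin, n_poi)
```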
$P(x > a) = 1 - P(x \le a), \qquad P(x \ge a) = 1 - P(x \le a) + P(x = a)$
Variance of the population:
$\sigma^2 = \dfrac{\sum (x_i - \mu)^2}{N}$

Variance of a set of data (sigma squared):
$\sigma^2 = \dfrac{\sum (x_i - \bar{x})^2}{n}$

Estimate of the variance of an infinite population:
$s^2 = (\sigma')^2 = \dfrac{\sum (x_i - \bar{x})^2}{n - 1}$

Estimate of the variance of a finite population:
$s^2 = (\sigma')^2 = \dfrac{\sum (x_i - \bar{x})^2}{n - 1} \cdot \dfrac{N - 1}{N}$
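The population vs. sample-estimate distinction above maps directly onto the Python statistics module; the data list below is a placeholder.

```python
# Sketch: population variance (divide by n) vs. sample estimate (divide by n-1).
import statistics

data = [14.2, 15.1, 14.8, 15.6, 14.9, 15.3]   # illustrative measurements

pop_var = statistics.pvariance(data)   # sum((x - mean)^2) / n
est_var = statistics.variance(data)    # sum((x - xbar)^2) / (n - 1)
print(pop_var, est_var)
```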

Standard Deviation for a Distribution of Averages (Standard Error), with Ns = number of samples and n = sample size:
$\sigma_{\bar{x}} = \sqrt{\dfrac{\sum (\bar{x}_i - \bar{\bar{x}})^2}{N_s}} \qquad \text{or} \qquad \sigma_{\bar{x}} = \dfrac{s}{\sqrt{n}}$
Standardized statistics:
$Z = \dfrac{x - \bar{x}}{s}, \qquad t = \dfrac{\bar{x} - \mu}{s/\sqrt{n}}, \qquad F = \dfrac{s_1^2}{s_2^2}$

Chi-square statistic:
$\chi^2 = \dfrac{(n-1)s^2}{\sigma^2}$, which follows the chi-square distribution $f(\chi^2)$ with $\nu = n - 1$ degrees of freedom (mean $\nu$, variance $2\nu$):
$f(\chi^2) = \dfrac{(\chi^2)^{(\nu-2)/2}\, e^{-\chi^2/2}}{2^{\nu/2}\,(\nu/2 - 1)!}$
95% Confidence Interval with known Sigma:
$\bar{X} - 1.96\,\dfrac{\sigma}{\sqrt{n}} \;\le\; \mu \;\le\; \bar{X} + 1.96\,\dfrac{\sigma}{\sqrt{n}}$

Confidence Interval with unknown Sigma:
$\bar{X} - t_{\alpha/2}\,\dfrac{s}{\sqrt{n}} \;\le\; \mu \;\le\; \bar{X} + t_{\alpha/2}\,\dfrac{s}{\sqrt{n}}$
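A sketch of both intervals above; the data set, the "known" sigma, and the 95% level are assumed values.

```python
# Sketch: confidence interval for the mean with known sigma (z) and
# unknown sigma (t). Data and sigma are invented for illustration.
from math import sqrt
from statistics import mean, stdev
from scipy.stats import norm, t

data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3]
n, xbar = len(data), mean(data)

# Known sigma
sigma = 0.25
z = norm.ppf(0.975)
ci_z = (xbar - z * sigma / sqrt(n), xbar + z * sigma / sqrt(n))

# Unknown sigma
s = stdev(data)
tcrit = t.ppf(0.975, df=n - 1)
ci_t = (xbar - tcrit * s / sqrt(n), xbar + tcrit * s / sqrt(n))

print(ci_z, ci_t)
```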

Null Hypothesis decision table:

Decision made      | Null hypothesis (Ho) TRUE | Null hypothesis (Ho) FALSE
Do not reject Ho   | No error (1 - α)          | Type II error (β)
Reject Ho          | Type I error (α)          | No error (1 - β)

HYPOTHESIS TEST ON THE MEAN

VARIANCE KNOWN:
$H_0\!: \mu = \mu_0, \qquad H_1\!: \mu \ne \mu_0$
$Z_0 = \dfrac{\bar{x} - \mu_0}{\sigma/\sqrt{n}}$
Reject Ho if $Z_0 > Z_{\alpha/2}$ or $Z_0 < -Z_{\alpha/2}$; for a one-sided alternative, reject Ho if $Z_0 > Z_{\alpha}$ (or $Z_0 < -Z_{\alpha}$).

VARIANCE UNKNOWN:
$H_0\!: \mu = \mu_0, \qquad H_1\!: \mu \ne \mu_0$
$t_0 = \dfrac{\bar{x} - \mu_0}{S/\sqrt{n}}$
Reject Ho if $t_0 > t_{\alpha/2,\,n-1}$ or $t_0 < -t_{\alpha/2,\,n-1}$.

$\alpha = P(\text{type I error}) = P(\text{reject } H_0 \mid H_0 \text{ is true})$
$\beta = P(\text{type II error}) = P(\text{fail to reject } H_0 \mid H_0 \text{ is false})$
TEST ON THE VARIANCE OF A NORMAL DISTRIBUTION
$H_0\!: \sigma^2 = \sigma_0^2, \qquad H_1\!: \sigma^2 \ne \sigma_0^2$
$\chi_0^2 = \dfrac{(n-1)S^2}{\sigma_0^2}$
The null hypothesis is rejected if $\chi_0^2 > \chi^2_{\alpha/2,\,n-1}$ or if $\chi_0^2 < \chi^2_{1-\alpha/2,\,n-1}$, corresponding to the upper α/2 and lower 1 - (α/2) percentage points of the chi-square distribution with n - 1 df.
For $H_1\!: \sigma^2 > \sigma_0^2$, the null hypothesis is rejected if $\chi_0^2 > \chi^2_{\alpha,\,n-1}$.
For $H_1\!: \sigma^2 < \sigma_0^2$, the null hypothesis is rejected if $\chi_0^2 < \chi^2_{1-\alpha,\,n-1}$.
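A sketch of the chi-square test on a single variance described above; the sample data, σ0², and α are assumed.

```python
# Sketch: chi-square test of H0: sigma^2 = sigma0^2. Inputs are invented.
from statistics import variance
from scipy.stats import chi2

data = [4.9, 5.2, 4.7, 5.4, 5.1, 4.8, 5.3, 5.0]
sigma0_sq, alpha = 0.04, 0.05
n, s_sq = len(data), variance(data)

chi2_0 = (n - 1) * s_sq / sigma0_sq
lower = chi2.ppf(alpha / 2, df=n - 1)
upper = chi2.ppf(1 - alpha / 2, df=n - 1)
reject = chi2_0 > upper or chi2_0 < lower
print(chi2_0, (lower, upper), reject)
```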
  
THE PROBABILITY OF TYPE II ERROR
[Figure: distribution of the test statistic under H0 (centered at μ0) and under H1 (centered at μ1), with the acceptance region between -Z_{α/2} and Z_{α/2}.]
With $Z_0 = \dfrac{\bar{x} - \mu_0}{\sigma/\sqrt{n}}$ and $\delta = \mu_1 - \mu_0$:
$\beta = \Phi\!\left(Z_{\alpha/2} - \dfrac{\delta\sqrt{n}}{\sigma}\right) - \Phi\!\left(-Z_{\alpha/2} - \dfrac{\delta\sqrt{n}}{\sigma}\right)$

Classification  | Confidence level | Z
Maradona        | 90%              | 1.645
Michel Jordan   | 95%              | 1.96
Yao Li          | 99%              | 2.58

Sigma | Percentage | PPM Defective
1     | 68.26      | 317,300.0
2     | 95.44      | 45,500.0
3     | 99.73      | 2,700.0
4     | 99.9937    | 63.0
5     | 99.999943  | 0.57
6     | 99.9999998 | 0.002

   21  Z for a two- tailed test: H 0 : 
 21 
Z
for
a two- tailed test:
H
0 :
 
H
:
0
0
1
0
P   1

(
Z
)
for
an upper- tailed test:
H
0 :
 
H
:
0
0
1
0
( Z
)
for
a lower - tailed test:
H
 
H
:
0 :
0
0
1
0
 
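A sketch of the two-tailed β (and power = 1 - β) formula above; δ, σ, n, and α are placeholder values.

```python
# Sketch: probability of a Type II error for the two-tailed z test.
# delta, sigma, n, and alpha are invented for illustration.
from math import sqrt
from scipy.stats import norm

delta, sigma, n, alpha = 0.5, 1.0, 25, 0.05
z_half = norm.ppf(1 - alpha / 2)
shift = delta * sqrt(n) / sigma

beta = norm.cdf(z_half - shift) - norm.cdf(-z_half - shift)
power = 1 - beta
print(beta, power)
```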

Confidence Interval on the Variance (Normal Distribution):
$\dfrac{(n-1)S^2}{\chi^2_{\alpha/2,\,n-1}} \;\le\; \sigma^2 \;\le\; \dfrac{(n-1)S^2}{\chi^2_{1-\alpha/2,\,n-1}}$

One-sided Confidence Intervals on the Variance:
$\dfrac{(n-1)S^2}{\chi^2_{\alpha,\,n-1}} \;\le\; \sigma^2 \qquad\text{and}\qquad \sigma^2 \;\le\; \dfrac{(n-1)S^2}{\chi^2_{1-\alpha,\,n-1}}$

INFERENCE ON A POPULATION PROPORTION
$H_0\!: p = p_0, \qquad H_1\!: p \ne p_0$
$Z_0 = \dfrac{(x + 0.5) - np_0}{\sqrt{np_0(1-p_0)}} \quad \text{if } x < np_0, \qquad Z_0 = \dfrac{(x - 0.5) - np_0}{\sqrt{np_0(1-p_0)}} \quad \text{if } x > np_0$
Reject if $Z_0 > Z_{\alpha/2}$ or $Z_0 < -Z_{\alpha/2}$.

Confidence Interval on a Population Proportion:
$\hat{p} - Z_{\alpha/2}\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}} \;\le\; p \;\le\; \hat{p} + Z_{\alpha/2}\sqrt{\dfrac{\hat{p}(1-\hat{p})}{n}}$
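A sketch of the one-proportion z test with continuity correction and the interval above; x, n, p0, and α are placeholders.

```python
# Sketch: test of H0: p = p0 (continuity-corrected z) and the
# confidence interval on a proportion. x, n, p0, alpha are invented.
from math import sqrt
from scipy.stats import norm

x, n, p0, alpha = 12, 200, 0.10, 0.05
z_half = norm.ppf(1 - alpha / 2)

se0 = sqrt(n * p0 * (1 - p0))
if x < n * p0:
    z0 = ((x + 0.5) - n * p0) / se0
else:
    z0 = ((x - 0.5) - n * p0) / se0
reject = abs(z0) > z_half

p_hat = x / n
half_width = z_half * sqrt(p_hat * (1 - p_hat) / n)
print(z0, reject, (p_hat - half_width, p_hat + half_width))
```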

Inference for a Difference in Means, Variances Known
$H_0\!: \mu_1 - \mu_2 = \Delta_0$
$Z_0 = \dfrac{\bar{x}_1 - \bar{x}_2 - \Delta_0}{\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}}$

Alternative Hypothesis                 Rejection Criterion
$H_1\!: \mu_1 - \mu_2 \ne \Delta_0$    $Z_0 > Z_{\alpha/2}$ or $Z_0 < -Z_{\alpha/2}$
$H_1\!: \mu_1 - \mu_2 > \Delta_0$      $Z_0 > Z_{\alpha}$
$H_1\!: \mu_1 - \mu_2 < \Delta_0$      $Z_0 < -Z_{\alpha}$

Confidence Interval on a Difference in Means, Variances Known

x

1

x

2

Z

/2

 2  2 1  2 n n 1 2
2
2
1
2
n
n
1
2

1

2

x

1

x

2

Z

/2

 2  2 1  2 n n 1 2
2
2
1
2
n
n
1
2
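A sketch of the two-sample z statistic and interval when σ1 and σ2 are treated as known; all inputs are placeholders.

```python
# Sketch: difference in means with known variances (z statistic and CI).
# Sample summaries and sigmas are invented for illustration.
from math import sqrt
from scipy.stats import norm

xbar1, xbar2, n1, n2 = 52.3, 50.1, 30, 35
sigma1, sigma2, delta0, alpha = 2.0, 2.5, 0.0, 0.05

se = sqrt(sigma1**2 / n1 + sigma2**2 / n2)
z0 = (xbar1 - xbar2 - delta0) / se
z_half = norm.ppf(1 - alpha / 2)

reject = abs(z0) > z_half
ci = (xbar1 - xbar2 - z_half * se, xbar1 - xbar2 + z_half * se)
print(z0, reject, ci)
```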

Inference for a Difference in Means, Variances Unknown
$H_0\!: \mu_1 - \mu_2 = \Delta_0$

Case 1: $\sigma_1^2 = \sigma_2^2 = \sigma^2$
$t_0 = \dfrac{\bar{x}_1 - \bar{x}_2 - \Delta_0}{S_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}, \qquad S_p^2 = \dfrac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2}$

Alternative Hypothesis                 Rejection Criterion
$H_1\!: \mu_1 - \mu_2 \ne \Delta_0$    $t_0 > t_{\alpha/2,\,n_1+n_2-2}$ or $t_0 < -t_{\alpha/2,\,n_1+n_2-2}$
$H_1\!: \mu_1 - \mu_2 > \Delta_0$      $t_0 > t_{\alpha,\,n_1+n_2-2}$
$H_1\!: \mu_1 - \mu_2 < \Delta_0$      $t_0 < -t_{\alpha,\,n_1+n_2-2}$

Case 2: $\sigma_1^2 \ne \sigma_2^2$
$t_0^{*} = \dfrac{\bar{x}_1 - \bar{x}_2 - \Delta_0}{\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}}}, \qquad \nu = \dfrac{\left(\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}\right)^{2}}{\dfrac{(S_1^2/n_1)^2}{n_1 - 1} + \dfrac{(S_2^2/n_2)^2}{n_2 - 1}}$
(The same rejection criteria apply, with $\nu$ degrees of freedom.)
Confidence Interval on a Difference in Means, Variances Unknown
Case 1 ($\sigma_1^2 = \sigma_2^2$):
$\bar{x}_1 - \bar{x}_2 - t_{\alpha/2,\,n_1+n_2-2}\,S_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}} \;\le\; \mu_1 - \mu_2 \;\le\; \bar{x}_1 - \bar{x}_2 + t_{\alpha/2,\,n_1+n_2-2}\,S_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$
Case 2 ($\sigma_1^2 \ne \sigma_2^2$):
$\bar{x}_1 - \bar{x}_2 - t_{\alpha/2,\,\nu}\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}} \;\le\; \mu_1 - \mu_2 \;\le\; \bar{x}_1 - \bar{x}_2 + t_{\alpha/2,\,\nu}\sqrt{\dfrac{S_1^2}{n_1} + \dfrac{S_2^2}{n_2}}$
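Both cases above correspond to scipy's ttest_ind (equal_var=True for the pooled Case 1, equal_var=False for the unequal-variance Case 2); the two samples below are invented.

```python
# Sketch: two-sample t tests matching Case 1 (pooled) and Case 2 (unequal
# variances). The two data sets are invented for illustration.
from scipy.stats import ttest_ind

sample1 = [20.1, 19.8, 20.4, 20.0, 19.9, 20.3, 20.2]
sample2 = [19.2, 19.6, 19.0, 19.5, 19.3, 19.7]

t_pooled, p_pooled = ttest_ind(sample1, sample2, equal_var=True)   # Case 1
t_welch, p_welch = ttest_ind(sample1, sample2, equal_var=False)    # Case 2
print(t_pooled, p_pooled, t_welch, p_welch)
```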

Inference for Paired Data
$t_0 = \dfrac{\bar{d}}{S_d/\sqrt{n}}, \qquad H_0\!: \mu_d = 0$
Reject if $t_0 > t_{\alpha/2,\,n-1}$ or $t_0 < -t_{\alpha/2,\,n-1}$.
$\bar{d}$ is the average difference between the paired data; $S_d$ is the standard deviation of the differences.
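A sketch of the paired t statistic, both from the formula above and via scipy's ttest_rel; the before/after readings are placeholders.

```python
# Sketch: paired t test, first from the formula d_bar / (S_d / sqrt(n)),
# then with scipy for comparison. The paired readings are invented.
from math import sqrt
from statistics import mean, stdev
from scipy.stats import ttest_rel

before = [7.1, 6.8, 7.4, 7.0, 6.9, 7.3]
after  = [6.8, 6.7, 7.1, 6.8, 6.9, 7.0]

diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
t0 = mean(diffs) / (stdev(diffs) / sqrt(n))

t_scipy, p_value = ttest_rel(before, after)
print(t0, t_scipy, p_value)
```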
 

Inference on the Variances of Two Normal Distributions
$H_0\!: \sigma_1^2 = \sigma_2^2$

Alternative Hypothesis            Test Statistic                    Rejection Criterion
$H_1\!: \sigma_1^2 \ne \sigma_2^2$   $F_0 = \dfrac{S_1^2}{S_2^2}$   $F_0 > F_{\alpha/2,\,n_1-1,\,n_2-1}$ or $F_0 < F_{1-(\alpha/2),\,n_1-1,\,n_2-1}$
$H_1\!: \sigma_1^2 > \sigma_2^2$     $F_0 = \dfrac{S_1^2}{S_2^2}$   $F_0 > F_{\alpha,\,n_1-1,\,n_2-1}$
$H_1\!: \sigma_1^2 < \sigma_2^2$     $F_0 = \dfrac{S_2^2}{S_1^2}$   $F_0 > F_{\alpha,\,n_2-1,\,n_1-1}$
Confidence Interval on the Ratio of the Variances of Two Normal Distributions:
$\dfrac{S_1^2}{S_2^2}\,F_{1-\alpha/2,\,n_2-1,\,n_1-1} \;\le\; \dfrac{\sigma_1^2}{\sigma_2^2} \;\le\; \dfrac{S_1^2}{S_2^2}\,F_{\alpha/2,\,n_2-1,\,n_1-1}$
One-sided bounds:
$\dfrac{S_1^2}{S_2^2}\,F_{1-\alpha,\,n_2-1,\,n_1-1} \;\le\; \dfrac{\sigma_1^2}{\sigma_2^2} \qquad\text{and}\qquad \dfrac{\sigma_1^2}{\sigma_2^2} \;\le\; \dfrac{S_1^2}{S_2^2}\,F_{\alpha,\,n_2-1,\,n_1-1}$
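A sketch of the two-sided F test and the variance-ratio interval above using scipy.stats.f; the two samples and α are invented. Note that the sheet's F_{α} denotes an upper-tail percentage point, so it maps to scipy's f.ppf(1 - α, ...).

```python
# Sketch: F test of H0: sigma1^2 = sigma2^2 and the CI on sigma1^2/sigma2^2.
# The two samples and alpha are invented for illustration.
from statistics import variance
from scipy.stats import f

sample1 = [20.1, 19.8, 20.4, 20.0, 19.9, 20.3, 20.2]
sample2 = [19.2, 19.6, 19.0, 19.5, 19.3, 19.7]
alpha = 0.05

n1, n2 = len(sample1), len(sample2)
s1_sq, s2_sq = variance(sample1), variance(sample2)

f0 = s1_sq / s2_sq
upper_crit = f.ppf(1 - alpha / 2, n1 - 1, n2 - 1)   # F_{alpha/2, n1-1, n2-1}
lower_crit = f.ppf(alpha / 2, n1 - 1, n2 - 1)       # F_{1-alpha/2, n1-1, n2-1}
reject = f0 > upper_crit or f0 < lower_crit

# CI on the variance ratio (note the swapped degrees of freedom)
ci = (f0 * f.ppf(alpha / 2, n2 - 1, n1 - 1),
      f0 * f.ppf(1 - alpha / 2, n2 - 1, n1 - 1))
print(f0, reject, ci)
```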

Inference on Two Population Proportions
$H_0\!: p_1 = p_2$
$Z_0 = \dfrac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}, \qquad \hat{p} = \dfrac{x_1 + x_2}{n_1 + n_2}$

Alternate Hypothesis        Rejection criteria
$H_1\!: p_1 \ne p_2$        $Z_0 > Z_{\alpha/2}$ or $Z_0 < -Z_{\alpha/2}$
$H_1\!: p_1 > p_2$          $Z_0 > Z_{\alpha}$
$H_1\!: p_1 < p_2$          $Z_0 < -Z_{\alpha}$

GOODNESS OF FIT
$H_0$: The shape of the distribution is the proposed one.
$\chi_0^2 = \sum_{i=1}^{k} \dfrac{(O_i - E_i)^2}{E_i}$
Reject Ho if $\chi_0^2 > \chi^2_{\alpha,\,k-p-1}$, where k = number of class intervals and p = number of estimated parameters.
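A sketch of the pooled two-proportion z test and a chi-square goodness-of-fit check with scipy.stats.chisquare; all counts and expected frequencies are placeholders.

```python
# Sketch: pooled two-proportion z test, plus a goodness-of-fit test.
# All counts and expected frequencies are invented for illustration.
from math import sqrt
from scipy.stats import norm, chisquare

# Two proportions
x1, n1, x2, n2, alpha = 24, 300, 15, 280, 0.05
p1_hat, p2_hat = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
z0 = (p1_hat - p2_hat) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
print(z0, abs(z0) > norm.ppf(1 - alpha / 2))

# Goodness of fit: observed counts vs. counts expected under the proposed shape
observed = [18, 25, 32, 15, 10]
expected = [20, 24, 30, 16, 10]   # sums must match the observed total
chi2_0, p_value = chisquare(observed, f_exp=expected)
print(chi2_0, p_value)
```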

Confidence Interval on the Difference in Two Population Proportions:
$\hat{p}_1 - \hat{p}_2 - Z_{\alpha/2}\sqrt{\dfrac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \;\le\; p_1 - p_2 \;\le\; \hat{p}_1 - \hat{p}_2 + Z_{\alpha/2}\sqrt{\dfrac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$
If n differs between cells, then the error degrees of freedom are: a*b*(n-1)*(n-1)*(n-1)…