
Formula Sheet for CPIT 603 (Quantitative Analysis)

PROBABILITY
Probability of any event: 0 ≤ P(event) ≤ 1

Permutations: ordered subsets of r elements from n different elements
  nPr = n(n−1)(n−2)⋯(n−r+1) = n!/(n−r)!

Permutations of similar objects: n1 of one type, n2 of a second type, …, among n = n1 + n2 + ⋯ + nr elements
  n!/(n1! n2! n3! ⋯ nr!)

Combinations: subsets of size r from a set of n elements
  nCr = C(n, r) = n!/(r!(n−r)!)

Addition rules:
  P(A ∪ B) = P(A or B) = P(A) + P(B) − P(A and B)
  P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C)

Independent events:
  P(A ∩ B) = P(A and B) = P(A)P(B)
  P(A | B) = P(A) and P(B | A) = P(B)

Total probability (A′ = complement of A):
  P(B) = P(B ∩ A) + P(B ∩ A′) = P(B | A)P(A) + P(B | A′)P(A′)

Mutually exclusive events (A ∩ B = ∅):
  P(A ∪ B) = P(A or B) = P(A) + P(B)

Dependent events:
  P(A and B) = P(A ∩ B) = P(A | B)P(B) = P(B | A)P(A) = P(A) × P(B given A)
  P(A and B and C) = P(A) × P(B | A) × P(C | A and B)

Bayes' Theorem (A, B = any two events; A′ = complement of A):
  Conditional probability: P(A | B) = P(A ∩ B)/P(B)
  P(A | B) = P(B | A)P(A)/P(B) = P(B | A)P(A) / [P(B | A)P(A) + P(B | A′)P(A′)]

Markov's Inequality: if X is a non-negative random variable with mean μ, then for any constant a > 0,
  P(X ≥ a) ≤ μ/a

Chebyshev's Inequality: if X is a random variable with finite mean μ and variance σ², then for any constant a > 0,
  P(|X − μ| ≥ a) ≤ σ²/a²
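As a quick numerical check, Bayes' theorem with the total-probability expansion of P(B) can be evaluated directly. The numbers below are hypothetical (a 90% sensitive, 95% specific test for a condition with 8% prevalence):

```python
# Bayes' theorem: P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|A')P(A')]
def bayes(p_b_given_a, p_a, p_b_given_not_a):
    p_not_a = 1 - p_a
    # total probability: P(B) = P(B|A)P(A) + P(B|A')P(A')
    p_b = p_b_given_a * p_a + p_b_given_not_a * p_not_a
    return p_b_given_a * p_a / p_b

posterior = bayes(p_b_given_a=0.90, p_a=0.08, p_b_given_not_a=0.05)
print(round(posterior, 4))  # → 0.6102
```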

DECISION ANALYSIS
Criterion of Realism (Hurwicz):
  Weighted average = α(best in row) + (1 − α)(worst in row)
  For minimization problems, the "best" payoff in a row is the smallest value and the "worst" is the largest.

Expected Monetary Value:
  EMV(alternative) = Σ Xi P(Xi)
  Xi = payoff for the alternative in state of nature i
  P(Xi) = probability of achieving payoff Xi (i.e., probability of state of nature i)
  Σ = summation symbol
  EMV(alternative i) = (payoff of first state of nature) × (probability of first state of nature)
    + (payoff of second state of nature) × (probability of second state of nature)
    + ⋯ + (payoff of last state of nature) × (probability of last state of nature)

Expected Value with Perfect Information:
  EVwPI = Σ (best payoff in state of nature i) × (probability of state of nature i)
  = (best payoff for first state of nature) × (probability of first state of nature)
    + ⋯ + (best payoff for last state of nature) × (probability of last state of nature)

Expected Value of Perfect Information:
  EVPI = EVwPI − Best EMV

Expected Value of Sample Information:
  EVSI = (EV with SI + cost of obtaining SI) − (EV without SI)

Utility assessment (standard gamble):
  Utility of other outcome = (p)(utility of best outcome, which is 1) + (1 − p)(utility of worst outcome, which is 0) = p

Efficiency of sample information = (EVSI/EVPI) × 100%
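The EMV, EVwPI, and EVPI definitions above can be sketched for a small payoff table; the payoffs and probabilities here are hypothetical:

```python
# Rows = alternatives, columns = states of nature (hypothetical payoffs).
payoffs = [[200000, -180000],   # e.g., build large plant
           [100000, -20000],    # build small plant
           [0, 0]]              # do nothing
probs = [0.5, 0.5]

# EMV(alternative) = sum of Xi * P(Xi)
emvs = [sum(x * p for x, p in zip(row, probs)) for row in payoffs]
best_emv = max(emvs)
# EVwPI = sum over states of (best payoff in that state) * P(state)
evwpi = sum(max(col) * p for col, p in zip(zip(*payoffs), probs))
evpi = evwpi - best_emv
print(emvs, best_emv, evwpi, evpi)
```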

REGRESSION MODELS
Simple linear regression model:
  Y = β0 + β1X + ε
  Y = dependent variable (response)
  X = independent variable (predictor or explanatory)
  β0 = intercept (value of Y when X = 0)
  β1 = slope of the regression line
  ε = random error

Fitted line: Ŷ = b0 + b1X
  Ŷ = predicted value of Y
  b0 = estimate of β0, based on sample results
  b1 = estimate of β1, based on sample results
  Error = (Actual value) − (Predicted value): e = Y − Ŷ

Least-squares estimates:
  X̄ = ΣX/n, Ȳ = ΣY/n
  b1 = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²
  b0 = Ȳ − b1X̄

Sums of squares:
  Sum of Squares Total: SST = Σ(Y − Ȳ)²
  Sum of Squares Error: SSE = Σe² = Σ(Y − Ŷ)²
  Sum of Squares Regression: SSR = Σ(Ŷ − Ȳ)²

Coefficient of determination: r² = SSR/SST = 1 − SSE/SST
Correlation coefficient: r = ±√r² (same sign as b1)
Mean squared error: s² = MSE = SSE/(n − k − 1)
Standard error of the estimate: s = √MSE
MSR = SSR/k, where k = number of independent variables in the model
Adjusted r² = 1 − [SSE/(n − k − 1)] / [SST/(n − 1)]

F test for significance of the model:
  F = MSR/MSE
  degrees of freedom for the numerator = df1 = k
  degrees of freedom for the denominator = df2 = n − k − 1
  Hypothesis test: H0: β1 = 0 vs. H1: β1 ≠ 0
  Reject H0 if Fcalculated > Fα,df1,df2
  p-value = P(F > calculated test statistic); reject H0 if p-value < α

Multiple regression model:
  Y = β0 + β1X1 + β2X2 + ⋯ + βkXk + ε
  Y = dependent variable (response variable)
  Xi = ith independent variable (predictor or explanatory variable)
  β0 = intercept (value of Y when all Xi = 0)
  βi = coefficient of the ith independent variable
  k = number of independent variables
  ε = random error
Fitted model: Ŷ = b0 + b1X1 + b2X2 + ⋯ + bkXk
  Ŷ = predicted value of Y
  b0 = sample intercept (an estimate of β0)
  bi = sample coefficient of the ith variable (an estimate of βi)
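A minimal sketch of the least-squares formulas above, fitting b0, b1, and r² to a small hypothetical data set:

```python
# Least-squares fit of Y-hat = b0 + b1*X, plus r^2 (hypothetical data).
def fit_line(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    b0 = ybar - b1 * xbar
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
    sst = sum((y - ybar) ** 2 for y in ys)
    return b0, b1, 1 - sse / sst  # intercept, slope, r^2

b0, b1, r2 = fit_line([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 8.0, 9.8])
print(b0, b1, r2)
```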

Least-squares estimates (Sxx, Sxy notation):
  x̄ = (1/n) Σxi, ȳ = (1/n) Σyi
  Sxx = Σ(xi − x̄)² = Σxi² − (Σxi)²/n
  Sxy = Σ(xi − x̄)(yi − ȳ) = Σxiyi − (Σxi)(Σyi)/n
  β̂1 = Sxy/Sxx
  β̂0 = ȳ − β̂1x̄

Error sum of squares: SSE = Σei² = Σ(yi − ŷi)²
Unbiased estimator of σ²: σ̂² = SSE/(n − 2)
  SSE = SST − β̂1Sxy, where SST = Σ(yi − ȳ)²

Estimated standard errors:
  se(β̂1) = √(σ̂²/Sxx)
  se(β̂0) = √(σ̂²[1/n + x̄²/Sxx])

Hypothesis testing (t-tests):
  Slope: H0: β1 = β1,0 vs. H1: β1 ≠ β1,0
    Test statistic: T0 = (β̂1 − β1,0)/√(σ̂²/Sxx); reject H0 if |t0| > tα/2,n−2
  Intercept: H0: β0 = β0,0 vs. H1: β0 ≠ β0,0
    Test statistic: T0 = (β̂0 − β0,0)/√(σ̂²[1/n + x̄²/Sxx]); reject H0 if |t0| > tα/2,n−2

Analysis of variance identity:
  Σ(yi − ȳ)² = Σ(ŷi − ȳ)² + Σ(yi − ŷi)², i.e., SST = SSR + SSE
Test for significance of regression:
  F0 = (SSR/1)/(SSE/(n − 2)) = MSR/MSE, which follows the F(1, n−2) distribution
  Reject the null hypothesis if f0 > fα,1,n−2

100(1 − α)% confidence intervals:
  Slope β1:
    β̂1 − tα/2,n−2 √(σ̂²/Sxx) ≤ β1 ≤ β̂1 + tα/2,n−2 √(σ̂²/Sxx)
  Intercept β0:
    β̂0 − tα/2,n−2 √(σ̂²[1/n + x̄²/Sxx]) ≤ β0 ≤ β̂0 + tα/2,n−2 √(σ̂²[1/n + x̄²/Sxx])
  Mean response at x0:
    μ̂Y|x0 − tα/2,n−2 √(σ̂²[1/n + (x0 − x̄)²/Sxx]) ≤ μY|x0 ≤ μ̂Y|x0 + tα/2,n−2 √(σ̂²[1/n + (x0 − x̄)²/Sxx])
    where μ̂Y|x0 = β̂0 + β̂1x0 is computed from the fitted regression model

100(1 − α)% prediction interval on a future observation Y0 at the value x0:
  ŷ0 − tα/2,n−2 √(σ̂²[1 + 1/n + (x0 − x̄)²/Sxx]) ≤ Y0 ≤ ŷ0 + tα/2,n−2 √(σ̂²[1 + 1/n + (x0 − x̄)²/Sxx])
  where ŷ0 = β̂0 + β̂1x0 is computed from the fitted regression model

Residuals: ei = yi − ŷi
Standardized residuals: di = ei/√σ̂²
Coefficient of determination: R² = SSR/SST = 1 − SSE/SST

Correlation coefficient: ρ = σXY/(σXσY)
Test for zero correlation: H0: ρ = 0
  T0 = R√(n − 2)/√(1 − R²) has a t distribution with n − 2 degrees of freedom
  Reject the null hypothesis if |t0| > tα/2,n−2
100(1 − α)% CI for the correlation coefficient:
  tanh(arctanh r − zα/2/√(n − 3)) ≤ ρ ≤ tanh(arctanh r + zα/2/√(n − 3)),
  where tanh u = (e^u − e^−u)/(e^u + e^−u)

Fitted logistic regression: ŷ = 1/(1 + exp[−(β0 + β1x)])
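A sketch of the t test for H0: β1 = 0, built from Sxx, Sxy, and σ̂² exactly as defined above (hypothetical data; the tabulated critical value is quoted from a standard t table rather than computed):

```python
import math

x = [1, 2, 3, 4, 5, 6]
y = [3.0, 4.8, 7.1, 9.2, 10.9, 13.1]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
b1 = sxy / sxx
b0 = ybar - b1 * xbar
sse = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
sigma2 = sse / (n - 2)              # unbiased estimator of sigma^2
t0 = b1 / math.sqrt(sigma2 / sxx)   # test statistic for H0: beta1 = 0
print(round(b1, 4), round(t0, 2))
# Compare |t0| with t_{alpha/2, n-2}; e.g., t_{0.025, 4} = 2.776.
```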

Multiple Regression

Model: y = Xβ + ε; yi = β0 + β1xi1 + β2xi2 + ⋯ + βkxik + εi, for i = 1, 2, …, n

  y = [y1; y2; …; yn]
  X = [1 x11 x12 ⋯ x1k; 1 x21 x22 ⋯ x2k; ⋯; 1 xn1 xn2 ⋯ xnk]   (n × p model matrix, p = k + 1)
  β = [β0; β1; …; βk]
  ε = [ε1; ε2; …; εn]

Normal equations: X′Xβ̂ = X′y; written out,
  nβ̂0 + β̂1 Σxi1 + β̂2 Σxi2 + ⋯ + β̂k Σxik = Σyi
  β̂0 Σxi1 + β̂1 Σxi1² + β̂2 Σxi1xi2 + ⋯ + β̂k Σxi1xik = Σxi1yi
  ⋮
  β̂0 Σxik + β̂1 Σxikxi1 + β̂2 Σxikxi2 + ⋯ + β̂k Σxik² = Σxikyi

Least-squares estimate of β: β̂ = (X′X)⁻¹X′y
Error sum of squares: SSE = Σei² = Σ(yi − ŷi)² = e′e = y′y − β̂′X′y
Estimator of variance: σ̂² = Σei²/(n − p) = SSE/(n − p)
Covariance matrix: C = (X′X)⁻¹
Estimated standard error of β̂j: se(β̂j) = √(σ̂²Cjj)

ANOVA test for significance of regression:
  Null hypothesis: H0: β1 = β2 = ⋯ = βk = 0
  H1: βj ≠ 0 for at least one j
  Test statistic: F0 = (SSR/k)/(SSE/(n − p)) = MSR/MSE
  Reject the null hypothesis if f0 > fα,k,n−p
  SSR = β̂′X′y − (Σyi)²/n
  SST = y′y − (Σyi)²/n
Coefficient of determination: R² = SSR/SST = 1 − SSE/SST
Adjusted R² = 1 − [SSE/(n − p)]/[SST/(n − 1)]

Tests on individual coefficients:
  H0: βj = βj,0 vs. H1: βj ≠ βj,0
  Test statistic: T0 = (β̂j − βj,0)/√(σ̂²Cjj) = (β̂j − βj,0)/se(β̂j)
  Reject the null hypothesis if |t0| > tα/2,n−p

Test for significance of a subset of regressors (partial F test):
  H0: β1 = 0 vs. H1: β1 ≠ 0, where β1 is a subset of r parameters
  F0 = [SSR(β1 | β2)/r]/MSE
  Reject the null hypothesis if f0 > fα,r,n−p; this concludes that at least one of the parameters in β1 is not zero.

100(1 − α)% CI on the regression coefficient βj, j = 0, 1, 2, …, k:
  β̂j − tα/2,n−p √(σ̂²Cjj) ≤ βj ≤ β̂j + tα/2,n−p √(σ̂²Cjj)

100(1 − α)% CI about the mean response at the point x01, x02, …, x0k:
  μ̂Y|x0 − tα/2,n−p √(σ̂² x0′(X′X)⁻¹x0) ≤ μY|x0 ≤ μ̂Y|x0 + tα/2,n−p √(σ̂² x0′(X′X)⁻¹x0)

100(1 − α)% PI on a future observation Y0 at the value x01, x02, …, x0k:
  ŷ0 − tα/2,n−p √(σ̂²(1 + x0′(X′X)⁻¹x0)) ≤ Y0 ≤ ŷ0 + tα/2,n−p √(σ̂²(1 + x0′(X′X)⁻¹x0))

Residuals: ei = yi − ŷi, for i = 1, 2, …, n
Standardized residuals: di = ei/√σ̂² = ei/√MSE
Studentized residuals: ri = ei/√(σ̂²(1 − hii))
Hat matrix: H = X(X′X)⁻¹X′; hii = ith diagonal element of H = xi′(X′X)⁻¹xi
Cook's distance:
  Di = (β̂(i) − β̂)′X′X(β̂(i) − β̂)/(pσ̂²) = ri²hii/[p(1 − hii)], for i = 1, 2, …, n

Stepwise regression partial F statistic: Fj = SSR(βj | β1, β0)/MSE(xj, x1)
Cp statistic: Cp = SSE(p)/σ̂² − n + 2p
Prediction error sum of squares: PRESS = Σ(yi − ŷ(i))² = Σ[ei/(1 − hii)]²
Variance inflation factor: VIF(βj) = 1/(1 − Rj²), where j = 1, 2, …, k
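The normal equations X′Xβ̂ = X′y can be solved directly; the sketch below uses plain Gaussian elimination (no external libraries) on a tiny hypothetical data set with k = 2 predictors:

```python
# Solve the square linear system a x = b by Gaussian elimination
# with partial pivoting (pure Python).
def solve(a, b):
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

X = [[1, 2, 1], [1, 4, 2], [1, 6, 2], [1, 8, 3]]   # columns: 1, x1, x2
y = [5.0, 9.0, 12.0, 16.0]
# Build X'X and X'y, then solve the normal equations.
xtx = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(3)] for r in range(3)]
xty = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(3)]
beta = solve(xtx, xty)
print([round(b, 4) for b in beta])
```

Note that forming X′X explicitly is fine at this scale; numerically robust software uses QR or SVD factorizations instead.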

Single-Factor Experiments
Sum of squares identity (a treatments, n observations per treatment):
  Σi Σj (yij − ȳ..)² = n Σi (ȳi. − ȳ..)² + Σi Σj (yij − ȳi.)²
  SST = SSTreatments + SSE
Expected sums of squares (fixed effects):
  E(SSTreatments) = (a − 1)σ² + n Στi²
  E(SSE) = a(n − 1)σ²
  MSE = SSE/[a(n − 1)]
F test:
  F0 = [SSTreatments/(a − 1)]/[SSE/(a(n − 1))] = MSTreatments/MSE
Computing formulas (N = an total observations):
  SST = Σi Σj yij² − y..²/N
  SSTreatments = Σi yi.²/n − y..²/N
  SSE = SST − SSTreatments
  T = (Ȳi. − μi)/√(MSE/n) has a t distribution with a(n − 1) degrees of freedom
100(1 − α)% CI about the mean of the ith treatment, μi:
  ȳi. − tα/2,a(n−1) √(MSE/n) ≤ μi ≤ ȳi. + tα/2,a(n−1) √(MSE/n)
100(1 − α)% CI on the difference of two treatment means, μi − μj:
  ȳi. − ȳj. − tα/2,a(n−1) √(2MSE/n) ≤ μi − μj ≤ ȳi. − ȳj. + tα/2,a(n−1) √(2MSE/n)
Fisher's Least Significant Difference (LSD) method:
  Null hypothesis: H0: μi = μj, where i ≠ j
  t0 = (ȳi. − ȳj.)/√(2MSE/n)
  LSD = tα/2,a(n−1) √(2MSE/n); reject the null hypothesis if |ȳi. − ȳj.| > LSD
  If sample sizes are different: LSD = tα/2,N−a √(MSE(1/ni + 1/nj))
Power of the ANOVA test:
  1 − β = P{Reject H0 | H0 is false} = P{F0 > fα,a−1,a(n−1) | H0 is false}
  Φ² = n Στi²/(aσ²)
Random effects model:
  E(MSTreatments) = E[SSTreatments/(a − 1)] = σ² + nστ²
  E(MSE) = E[SSE/(a(n − 1))] = σ²
  σ̂² = MSE
  σ̂τ² = (MSTreatments − MSE)/n
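The one-way ANOVA computing formulas above can be sketched numerically; the three treatment groups here are hypothetical, and the critical value is quoted from an F table:

```python
# One-way ANOVA F statistic: a = 3 treatments, n = 4 observations each.
data = [[12.0, 15.0, 14.0, 13.0],   # treatment 1
        [18.0, 20.0, 19.0, 17.0],   # treatment 2
        [11.0, 13.0, 12.0, 12.0]]   # treatment 3
a, n = len(data), len(data[0])
N = a * n
y_dd = sum(sum(row) for row in data)                       # grand total y..
sst = sum(y ** 2 for row in data for y in row) - y_dd ** 2 / N
sstr = sum(sum(row) ** 2 for row in data) / n - y_dd ** 2 / N
sse = sst - sstr
f0 = (sstr / (a - 1)) / (sse / (a * (n - 1)))
print(round(f0, 2))
# Compare f0 with f_{alpha, a-1, a(n-1)}; e.g., f_{0.05, 2, 9} = 4.26.
```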

Randomized Complete Block Design (a treatments, b blocks)
Sum of squares identity:
  SST = SSTreatments + SSBlocks + SSE
Expected mean squares:
  E(MSTreatments) = σ² + b Στi²/(a − 1)
  E(MSBlocks) = σ² + a Σβj²/(b − 1)
  E(MSE) = σ²
Mean squares:
  MSTreatments = SSTreatments/(a − 1)
  MSBlocks = SSBlocks/(b − 1)
  MSE = SSE/[(a − 1)(b − 1)]
Computing formulas:
  SST = Σi Σj yij² − y..²/(ab)
  SSTreatments = (1/b) Σi yi.² − y..²/(ab)
  SSBlocks = (1/a) Σj y.j² − y..²/(ab)
  SSE = SST − SSTreatments − SSBlocks
Residuals:
  eij = yij − ŷij, where ŷij = ȳi. + ȳ.j − ȳ..

Two-Factor Experiments
Notation (a levels of factor A, b levels of factor B, n replicates):
  yi.. = Σj Σk yijk;  ȳi.. = yi../(bn), where i = 1, 2, …, a
  y.j. = Σi Σk yijk;  ȳ.j. = y.j./(an), where j = 1, 2, …, b
  yij. = Σk yijk;  ȳij. = yij./n, where i = 1, 2, …, a and j = 1, 2, …, b
  y... = Σi Σj Σk yijk;  ȳ... = y.../(abn)
Sum of squares identity:
  SST = Σi Σj Σk (yijk − ȳ...)²
  SST = SSA + SSB + SSAB + SSE
Expected mean squares:
  E(MSA) = E[SSA/(a − 1)] = σ² + bn Σαi²/(a − 1)
  E(MSB) = E[SSB/(b − 1)] = σ² + an Σβj²/(b − 1)
  E(MSAB) = E[SSAB/((a − 1)(b − 1))] = σ² + n ΣΣ(αβ)ij²/[(a − 1)(b − 1)]
  E(MSE) = E[SSE/(ab(n − 1))] = σ²
F tests:
  F test for factor A: F0 = MSA/MSE
  F test for factor B: F0 = MSB/MSE
  F test for AB interaction: F0 = MSAB/MSE
Computing formulas:
  SST = Σi Σj Σk yijk² − y...²/(abn)
  SSA = Σi yi..²/(bn) − y...²/(abn)
  SSB = Σj y.j.²/(an) − y...²/(abn)
  SSAB = Σi Σj yij.²/n − y...²/(abn) − SSA − SSB
  SSE = SST − SSAB − SSA − SSB

Three-Factor Fixed Effects Model (ANOVA table)

Source | Sum of Squares | Degrees of Freedom | Mean Square | Expected Mean Square | F0
A      | SSA   | a − 1             | MSA   | σ² + bcn Στi²/(a − 1)                | MSA/MSE
B      | SSB   | b − 1             | MSB   | σ² + acn Σβj²/(b − 1)                | MSB/MSE
C      | SSC   | c − 1             | MSC   | σ² + abn Σγk²/(c − 1)                | MSC/MSE
AB     | SSAB  | (a − 1)(b − 1)    | MSAB  | σ² + cn Σ(τβ)ij²/[(a − 1)(b − 1)]     | MSAB/MSE
AC     | SSAC  | (a − 1)(c − 1)    | MSAC  | σ² + bn Σ(τγ)ik²/[(a − 1)(c − 1)]     | MSAC/MSE
BC     | SSBC  | (b − 1)(c − 1)    | MSBC  | σ² + an Σ(βγ)jk²/[(b − 1)(c − 1)]     | MSBC/MSE
ABC    | SSABC | (a − 1)(b − 1)(c − 1) | MSABC | σ² + n Σ(τβγ)ijk²/[(a − 1)(b − 1)(c − 1)] | MSABC/MSE
Error  | SSE   | abc(n − 1)        | MSE   | σ²                                    |
Total  | SST   | abcn − 1          |       |                                       |
2^k Factorial Designs
(1) represents the treatment combination with both factors at the low level.
Main effect of factor A:
  A = ȳA+ − ȳA− = (a + ab)/(2n) − (b + (1))/(2n) = (1/2n)[a + ab − b − (1)]
Main effect of factor B:
  B = ȳB+ − ȳB− = (b + ab)/(2n) − (a + (1))/(2n) = (1/2n)[b + ab − a − (1)]
Interaction effect AB:
  AB = (ab + (1))/(2n) − (a + b)/(2n) = (1/2n)[ab + (1) − a − b]
Contrast coefficients are always +1 or −1; e.g., ContrastA = a + ab − b − (1)
  Effect = Contrast/(n·2^(k−1))
  Sum of squares for an effect: SS = (Contrast)²/(n·2^k)
Regression model: Y = β0 + β1x1 + β2x2 + β12x1x2 + ε
  Coefficient of an effect: β̂ = effect/2 = (ȳ+ − ȳ−)/2
  Standard error of a coefficient: se(β̂) = (1/2)√(σ̂²[1/(n·2^(k−1)) + 1/(n·2^(k−1))]) = σ̂/√(n·2^k)
  t statistic for a coefficient: t = β̂/se(β̂)

2^k Factorial Designs for k ≥ 3 factors (2³ shown; each effect is a contrast divided by 4n = n·2^(k−1)):
  A = ȳA+ − ȳA− = (1/4n)[a + ab + ac + abc − (1) − b − c − bc]
  B = ȳB+ − ȳB− = (1/4n)[b + ab + bc + abc − (1) − a − c − ac]
  C = ȳC+ − ȳC− = (1/4n)[c + ac + bc + abc − (1) − a − b − ab]
  AB = (1/4n)[abc − bc + ab − b − ac + c − a + (1)]
  AC = (1/4n)[(1) − a + b − ab − c + ac − bc + abc]
  BC = (1/4n)[(1) + a − b − ab − c − ac + bc + abc]
  ABC = (1/4n)[abc − bc − ac + c − ab + b + a − (1)]

Test for curvature (nF factorial points, nC center points):
  SSCurvature = nF nC (ȳF − ȳC)²/(nF + nC) = (ȳF − ȳC)²/(1/nF + 1/nC)

Response surface models:
  First-order model: Y = β0 + β1x1 + β2x2 + ⋯ + βkxk + ε
  Second-order model: Y = β0 + Σ βixi + Σ βiixi² + ΣΣ(i<j) βijxixj + ε
  Path of steepest ascent uses the fitted first-order model: ŷ = β̂0 + Σ β̂ixi
  Fitted second-order model: ŷ = β̂0 + Σ β̂ixi + Σ β̂iixi² + ΣΣ(i<j) β̂ijxixj
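The 2² effect and sum-of-squares formulas above can be sketched with hypothetical treatment-combination totals:

```python
# Effects for a 2^2 design with n replicates, from totals (1), a, b, ab.
n = 2
one, a, b, ab = 80.0, 100.0, 60.0, 90.0   # hypothetical totals

A = (a + ab - b - one) / (2 * n)          # main effect of A
B = (b + ab - a - one) / (2 * n)          # main effect of B
AB = (ab + one - a - b) / (2 * n)         # interaction effect
ss_a = (a + ab - b - one) ** 2 / (n * 2 ** 2)   # SS = contrast^2 / (n 2^k)
print(A, B, AB, ss_a)
```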

FORECASTING
Mean Absolute Deviation (MAD) = Σ|forecast error|/n
Mean Squared Error (MSE) = Σ(error)²/n
Mean Absolute Percent Error (MAPE) = [Σ|error/actual|/n] × 100%
Moving average forecast:
  Ft+1 = (sum of demands in previous n periods)/n = (Yt + Yt−1 + ⋯ + Yt−n+1)/n
Weighted moving average:
  Ft+1 = Σ(Weight in period i)(Actual value in period i)/Σ(Weights)
       = (w1Yt + w2Yt−1 + ⋯ + wnYt−n+1)/(w1 + w2 + ⋯ + wn)
Exponential smoothing:
  New forecast = Last period's forecast + α(Last period's actual demand − Last period's forecast)
  Ft+1 = Ft + α(Yt − Ft)
Exponential smoothing with trend:
  Ft+1 = FITt + α(Yt − FITt)
  Tt+1 = Tt + β(Ft+1 − FITt)
  FITt+1 = Ft+1 + Tt+1
Trend projection:
  Ŷ = b0 + b1X
  where Ŷ = predicted value, b0 = intercept, b1 = slope of the line,
  X = time period (i.e., X = 1, 2, 3, …, n)
Trend with seasonal dummy variables (multiple regression):
  Ŷ = a + b1X1 + b2X2 + b3X3 + b4X4
Tracking signal = RSFE/MAD = Σ(forecast error)/MAD
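The exponential smoothing update Ft+1 = Ft + α(Yt − Ft) can be sketched on a short hypothetical demand series (the first forecast is seeded with the first actual, a common convention):

```python
# Simple exponential smoothing with alpha = 0.3 (hypothetical demand).
alpha = 0.3
demand = [100, 110, 105, 120]
forecast = [100.0]                 # seed: first forecast = first actual
for y in demand:
    forecast.append(forecast[-1] + alpha * (y - forecast[-1]))
print([round(f, 2) for f in forecast])
```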

INVENTORY CONTROL MODELS

Annual ordering cost = (Number of orders placed per year) × (Ordering cost per order)
  = (D/Q)Co
  D = annual demand; Q = number of units in each order; Co = ordering cost per order
Average inventory level = Q/2
Annual holding cost = (Average inventory) × (Carrying cost per unit per year) = (Q/2)Ch

Economic Order Quantity: set annual ordering cost equal to annual holding cost:
  (D/Q)Co = (Q/2)Ch  ⇒  EOQ = Q* = √(2DCo/Ch)
Total cost: TC = Order cost + Holding cost = (D/Q)Co + (Q/2)Ch
Cost of storing one unit of inventory for one year: Ch = IC, where C is the unit price or
cost of an inventory item and I is the annual inventory holding charge as a percentage of
unit price or cost; then Q* = √(2DCo/IC)

ROP without safety stock:
  Reorder point (ROP) = (Demand per day) × (Lead time for a new order in days) = dL
Inventory position = Inventory on hand + Inventory on order

Production Run Model (EOQ without the instantaneous receipt assumption):
  Maximum inventory level = (Total produced during the production run) − (Total used during the production run)
  = (Daily production rate)(Number of days production) − (Daily demand)(Number of days production)
  = pt − dt = Q − (d/p)Q = Q(1 − d/p), since total produced Q = pt implies t = Q/p
  Average inventory = (Q/2)(1 − d/p)
  Annual holding cost = (Q/2)(1 − d/p)Ch
  Annual setup cost = (D/Q)Cs
  Setting annual holding cost equal to annual setup cost:
  (Q/2)(1 − d/p)Ch = (D/Q)Cs  ⇒  Q* = √(2DCs/[Ch(1 − d/p)])
  D = the annual demand in units; Q = number of pieces per order, or production run

Quantity Discount Model:
  EOQ = √(2DCo/IC)
  If EOQ < minimum quantity for a discount, adjust the quantity to Q = minimum for that discount
  Total cost = Material cost + Ordering cost + Holding cost = DC + (D/Q)Co + (Q/2)Ch
  Holding cost per unit is based on cost, so Ch = IC, where I = holding cost as a percentage of the unit cost (C)

Safety Stock with the Normal Distribution:
  ROP = Average demand during lead time + Safety stock
  Service level = 1 − Probability of a stockout; Probability of a stockout = 1 − Service level
  ROP = (Average demand during lead time) + ZσdLT
  Z = number of standard deviations for a given service level
  σdLT = standard deviation of demand during the lead time
  Safety stock = ZσdLT
  Demand is variable, lead time is constant: ROP = d̄L + Z(σd√L)
  Demand is constant, lead time is variable: ROP = dL̄ + Z d σL
  Both demand and lead time are variable: ROP = d̄L̄ + Z√(L̄σd² + d̄²σL²)
  d̄ = average daily demand; σd = standard deviation of daily demand
  L̄ = average lead time; σL = standard deviation of lead time

Total annual holding cost with safety stock:
  Total Annual Holding Cost = Holding cost of regular inventory + Holding cost of safety stock
  THC = (Q/2)Ch + (SS)Ch

Marginal analysis (single-period inventory):
  The expected marginal profit = P(MP); the expected marginal loss = (1 − P)(ML)
  The optimal decision rule: stock the additional unit if P(MP) ≥ (1 − P)ML
  P(MP) ≥ ML − P(ML)
  P(MP) + P(ML) ≥ ML
  P(MP + ML) ≥ ML
  P ≥ ML/(ML + MP)
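The EOQ and total-cost formulas above can be evaluated directly; the demand and cost figures are hypothetical:

```python
import math

D = 1000      # annual demand (units)
Co = 10.0     # ordering cost per order
Ch = 0.50     # holding cost per unit per year

q_star = math.sqrt(2 * D * Co / Ch)            # EOQ = sqrt(2 D Co / Ch)
tc = (D / q_star) * Co + (q_star / 2) * Ch     # ordering cost + holding cost
print(q_star, tc)
```

At the EOQ the two cost components are equal, which is exactly the condition used to derive the formula.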

PROJECT MANAGEMENT
Expected activity time (PERT): t = (a + 4m + b)/6
Activity standard deviation = (b − a)/6; activity variance = [(b − a)/6]²
Earliest finish time = Earliest start time + Expected activity time: EF = ES + t
Earliest start = largest of the earliest finish times of immediate predecessors:
  ES = largest EF of immediate predecessors
Latest start time = Latest finish time − Expected activity time: LS = LF − t
Latest finish time = smallest of the latest start times for following activities:
  LF = smallest LS of following activities
Slack = LS − ES, or Slack = LF − EF
Project variance σT² = sum of the variances of activities on the critical path
Project standard deviation σT = √(Project variance)
Z = (Due date − Expected date of completion)/σT
Crash cost per time period = (Crash cost − Normal cost)/(Normal time − Crash time)
Value of work completed = (Percentage of work complete) × (Total activity budget)
Activity difference = Actual cost − Value of work completed
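The PERT time, variance, and Z formulas above can be sketched for a single critical path; the (a, m, b) estimates and the due date of 20 are hypothetical:

```python
import math

# (optimistic, most likely, pessimistic) estimates for critical-path activities
activities = [(2, 4, 6), (3, 6, 9), (4, 7, 16)]
t = [(a + 4 * m + b) / 6 for a, m, b in activities]      # expected times
var = [((b - a) / 6) ** 2 for a, m, b in activities]     # activity variances
expected = sum(t)                     # expected project completion time
sigma_t = math.sqrt(sum(var))         # project standard deviation
z = (20 - expected) / sigma_t         # due date = 20 (assumed)
print(expected, round(sigma_t, 4), round(z, 4))
```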

WAITING LINES AND QUEUING THEORY MODELS

λ = mean number of arrivals per time period (arrival rate)
μ = mean number of customers or units served per time period (service rate)

Single-Channel Model, Poisson Arrivals, Exponential Service Times (M/M/1):
  The average number of customers or units in the system: L = λ/(μ − λ)
  The average time a customer spends in the system: W = 1/(μ − λ)
  The average number of customers in the queue: Lq = λ²/[μ(μ − λ)]
  The average time a customer spends waiting in the queue: Wq = λ/[μ(μ − λ)]
  The utilization factor for the system (rho), the probability the service facility is being used: ρ = λ/μ
  Probability of zero customers in the system: P0 = 1 − λ/μ = 1 − ρ
  Probability of exactly n customers in the system: Pn = (1 − ρ)ρⁿ
  Probability that the number of customers in the system is greater than k: Pn>k = (λ/μ)^(k+1)

Multichannel Model, Poisson Arrivals, Exponential Service Times (M/M/m or M/M/s):
  m = number of channels open
  λ = average arrival rate; μ = average service rate at each channel
  Probability of zero customers in the system (for mμ > λ):
    P0 = 1 / [ Σ(n=0 to m−1) (1/n!)(λ/μ)ⁿ + (1/m!)(λ/μ)^m · mμ/(mμ − λ) ]
  Average number of customers or units in the system:
    L = [λμ(λ/μ)^m / ((m − 1)!(mμ − λ)²)] P0 + λ/μ
  Average time a unit spends in the waiting line or being served (in the system):
    W = [μ(λ/μ)^m / ((m − 1)!(mμ − λ)²)] P0 + 1/μ = L/λ
  Average number of customers or units in line waiting for service: Lq = L − λ/μ
  Average time a customer spends waiting in the queue: Wq = W − 1/μ = Lq/λ
  Utilization rate: ρ = λ/(mμ)

Waiting cost analysis:
  Total service cost = (Number of channels) × (Cost per channel) = mCs
  m = number of channels; Cs = service cost (labor cost) of each channel
  Total waiting cost = (Total time spent waiting by all arrivals) × (Cost of waiting)
    = (Number of arrivals) × (Average wait per arrival) × Cw = (λW)Cw
  Total waiting cost based on time in queue = (λWq)Cw
  Total cost = Total service cost + Total waiting cost = mCs + λWCw
  Total cost based on time in queue = mCs + λWqCw

Finite Population Model (M/M/1 with Finite Source):
  λ = mean arrival rate; μ = mean service rate; N = size of the population
  Probability that the system is empty:
    P0 = 1 / Σ(n=0 to N) [N!/(N − n)!](λ/μ)ⁿ
  Average length of the queue: Lq = N − [(λ + μ)/λ](1 − P0)
  Average number of customers (units) in the system: L = Lq + (1 − P0)
  Average waiting time in the queue: Wq = Lq/[(N − L)λ]
  Average time in the system: W = Wq + 1/μ
  Probability of n units in the system: Pn = [N!/(N − n)!](λ/μ)ⁿ P0, for n = 0, 1, …, N

Constant Service Time Model (M/D/1):
  Average length of the queue: Lq = λ²/[2μ(μ − λ)]
  Average waiting time in the queue: Wq = λ/[2μ(μ − λ)]
  Average number of customers in the system: L = Lq + λ/μ
  Average time in the system: W = Wq + 1/μ

Little's Flow Equations:
  L = λW (or W = L/λ)
  Lq = λWq (or Wq = Lq/λ)
  Average time in system = average time in queue + average time receiving service:
  W = Wq + 1/μ
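The M/M/1 formulas above can be checked numerically with hypothetical rates (λ < μ is required for a steady state):

```python
# M/M/1 operating characteristics (hypothetical rates, lambda < mu).
lam, mu = 2.0, 3.0          # arrivals/hour, services/hour

L = lam / (mu - lam)                 # average number in system
W = 1 / (mu - lam)                   # average time in system
Lq = lam ** 2 / (mu * (mu - lam))    # average number in queue
Wq = lam / (mu * (mu - lam))         # average time in queue
rho = lam / mu                       # utilization factor
p0 = 1 - rho                         # probability of an empty system
print(L, W, Lq, Wq, rho, p0)
```

Note that L = λW and Lq = λWq hold exactly, as Little's flow equations require.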

General queuing notation:
  N(t) = number of customers in the queuing system at time t (t ≥ 0)
  Pn(t) = probability of exactly n customers in the queuing system at time t, given the number at time 0
  s = number of servers (parallel service channels) in the queuing system
  λn = mean arrival rate (expected number of arrivals per unit time) of new customers when n customers are in the system; 1/λ = expected interarrival time
  μn = mean service rate for the overall system (expected number of customers completing service per unit time) when n customers are in the system; it represents the combined rate at which all busy servers (those serving customers) achieve service completions; 1/μ = expected service time
  Utilization factor: ρ = λ/(sμ)
  Pn = probability of exactly n customers in the queuing system
  L = expected number of customers in the system = Σ(n=0 to ∞) nPn
  Lq = expected queue length (excludes customers being served) = Σ(n=s to ∞) (n − s)Pn
  W = E(waiting time in system (includes service time) for each individual customer)
  Wq = E(waiting time in queue (excludes service time) for each individual customer)
  Little's formula: L = λW; Lq = λWq; W = Wq + 1/μ

Impact of the Exponential Distribution on Queuing Models:
  Probability density function: fT(t) = λe^(−λt) for t ≥ 0; fT(t) = 0 for t < 0
  Cumulative probabilities: P{T ≤ t} = 1 − e^(−λt); P{T > t} = e^(−λt)
  Property 1: fT(t) is a strictly decreasing function of t (t ≥ 0), so
    P{0 ≤ T ≤ Δt} > P{t ≤ T ≤ t + Δt}
  Property 2: lack of memory, so P{T > t + Δt | T > Δt} = P{T > t}
  Property 3: the minimum of several independent exponential random variables has an exponential distribution:
    U = min{T1, T2, …, Tn}; P{U > t} = exp(−(Σ λi)t), so U is exponential with parameter λ = Σ λi
  Property 4: relationship to the Poisson distribution:
    P{X(t) = n} = (λt)ⁿ e^(−λt)/n!, for n = 0, 1, 2, …; mean E{X(t)} = λt, where λ is the mean rate
    X(t) = number of occurrences by time t (t ≥ 0)
  Property 5: for all positive values of t, P{T ≤ t + Δt | T > t} ≈ λΔt, for small Δt

Birth–Death Process:
  Cn = (λn−1 λn−2 ⋯ λ0)/(μn μn−1 ⋯ μ1), for n = 1, 2, …; C0 = 1
  Pn = Cn P0, for n = 0, 1, 2, …
  Σ(n=0 to ∞) Pn = 1  ⇒  P0 = 1/[Σ(n=0 to ∞) Cn]
  Average arrival rate in the long run: λ̄ = Σ(n=0 to ∞) λn Pn

Conditional waiting time: let T1, T2, … be independent service-time random variables having an exponential distribution with parameter μ, and let Sn+1 = T1 + T2 + ⋯ + Tn+1, for n = 0, 1, 2, …. Sn+1 represents the conditional waiting time given n customers already in the system; Sn+1 is known to have an Erlang distribution.

M/M/s, with s > 1:
  Cn = (λ/μ)ⁿ/n!, for n = 1, 2, …, s
  Cn = [(λ/μ)^s/s!] · [λ/(sμ)]^(n−s), for n = s, s + 1, …
  P0 = 1 / [ Σ(n=0 to s−1) (λ/μ)ⁿ/n! + (λ/μ)^s/s! · 1/(1 − λ/(sμ)) ]
  Pn = [(λ/μ)ⁿ/n!] P0, if 0 ≤ n ≤ s
  Pn = [(λ/μ)ⁿ/(s! s^(n−s))] P0, if n ≥ s
  Lq = P0 (λ/μ)^s ρ / [s!(1 − ρ)²]
  Wq = Lq/λ
  W = Wq + 1/μ
  L = Lq + λ/μ
  Waiting-time distribution:
    P{W > t} = e^(−μt) [ 1 + P0 (λ/μ)^s/(s!(1 − ρ)) · (1 − e^(−μt(s−1−λ/μ)))/(s − 1 − λ/μ) ]
    P{Wq > t} = (1 − P{Wq = 0}) e^(−sμ(1−ρ)t)
    P{Wq = 0} = Σ(n=0 to s−1) Pn
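A sketch of the M/M/s formulas above for P0, Lq, Wq, and L, with hypothetical λ, μ, s (steady state requires λ < sμ):

```python
import math

lam, mu, s = 4.0, 3.0, 2
r = lam / mu
rho = lam / (s * mu)

# P0 from the series-plus-tail formula
p0 = 1 / (sum(r ** n / math.factorial(n) for n in range(s))
          + r ** s / math.factorial(s) / (1 - rho))
lq = p0 * r ** s * rho / (math.factorial(s) * (1 - rho) ** 2)
wq = lq / lam              # Little's formula
l = lq + r                 # L = Lq + lambda/mu
print(round(p0, 4), round(lq, 4), round(wq, 4), round(l, 4))
```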

Finite Queue Variation of M/M/s (M/M/s/K model):
  Finite queue: the number of customers in the system cannot exceed a specified number, K.
  Queue capacity is K − s.
  λn = λ for n = 0, 1, 2, …, K − 1; λn = 0 for n ≥ K

M/M/1/K:
  Cn = (λ/μ)ⁿ = ρⁿ for n = 1, 2, …, K; Cn = 0 for n > K
  P0 = (1 − ρ)/(1 − ρ^(K+1))
  Pn = [(1 − ρ)/(1 − ρ^(K+1))] ρⁿ, for n = 0, 1, 2, …, K
  L = ρ/(1 − ρ) − (K + 1)ρ^(K+1)/(1 − ρ^(K+1))
  Lq = L − (1 − P0)
  λ̄ = Σ λn Pn = λ(1 − PK)
  Wq = Lq/λ̄; W = L/λ̄

M/M/s/K (s > 1):
  Cn = (λ/μ)ⁿ/n!, for n = 1, 2, …, s
  Cn = [(λ/μ)^s/s!] [λ/(sμ)]^(n−s), for n = s, s + 1, …, K
  Cn = 0 for n > K
  P0 = 1 / [ Σ(n=0 to s) (λ/μ)ⁿ/n! + (λ/μ)^s/s! Σ(n=s+1 to K) [λ/(sμ)]^(n−s) ]
  Pn = [(λ/μ)ⁿ/n!] P0, if 0 ≤ n ≤ s
  Pn = [(λ/μ)ⁿ/(s! s^(n−s))] P0, if s ≤ n ≤ K
  Pn = 0 for n > K
  Lq = [P0 (λ/μ)^s ρ/(s!(1 − ρ)²)] [1 − ρ^(K−s) − (K − s)ρ^(K−s)(1 − ρ)]
  L = Σ(n=0 to s−1) nPn + Lq + s[1 − Σ(n=0 to s−1) Pn]
  λ̄ = λ(1 − PK); Wq = Lq/λ̄; W = L/λ̄

Finite Calling Population Variation of M/M/s

M/M/1 with finite population N:
  Cn = [N!/(N − n)!](λ/μ)ⁿ, for n ≤ N; Cn = 0 for n > N
  P0 = 1 / Σ(n=0 to N) [N!/(N − n)!](λ/μ)ⁿ
  Pn = [N!/(N − n)!](λ/μ)ⁿ P0, if n = 1, 2, …, N
  Lq = Σ(n=1 to N) (n − 1)Pn = N − [(λ + μ)/λ](1 − P0)
  L = N − (μ/λ)(1 − P0)
  λ̄ = Σ λn Pn = λ(N − L)
  Wq = Lq/λ̄; W = L/λ̄

M/M/s with finite population and s > 1:
  Cn = [N!/((N − n)! n!)](λ/μ)ⁿ, for n = 1, 2, …, s
  Cn = [N!/((N − n)! s! s^(n−s))](λ/μ)ⁿ, for n = s, s + 1, …, N
  Cn = 0 for n > N
  P0 = 1 / [ Σ(n=0 to s−1) N!/((N − n)! n!) (λ/μ)ⁿ + Σ(n=s to N) N!/((N − n)! s! s^(n−s)) (λ/μ)ⁿ ]
  Pn = [N!/((N − n)! n!)](λ/μ)ⁿ P0, if 0 ≤ n ≤ s
  Pn = [N!/((N − n)! s! s^(n−s))](λ/μ)ⁿ P0, if s ≤ n ≤ N
  Pn = 0 if n > N
  Lq = Σ(n=s to N) (n − s)Pn
  L = Σ(n=0 to s−1) nPn + Lq + s[1 − Σ(n=0 to s−1) Pn]
  λ̄ = Σ λn Pn = λ(N − L)
  Wq = Lq/λ̄; W = L/λ̄

M/G/1 Model:
  ρ = λ/μ; P0 = 1 − ρ
  Pollaczek–Khintchine formula: Lq = (λ²σ² + ρ²)/[2(1 − ρ)]
  L = ρ + Lq
  Wq = Lq/λ; W = Wq + 1/μ
  For any expected service time 1/μ, notice that Lq, L, Wq, and W all increase as σ² is increased.

M/D/s Model (constant service times):
  For the M/D/1 model, Lq = ρ²/[2(1 − ρ)]

M/Ek/s Model:
  Erlang distribution's probability density function:
    f(t) = (μk)^k t^(k−1) e^(−kμt)/(k − 1)!, for t ≥ 0
    μ and k are strictly positive; k must be an integer. Without the integer restriction on k, the Erlang is the same as the gamma distribution.
  Mean = 1/μ; Standard deviation = (1/√k)(1/μ)
  If T1, T2, …, Tk are k independent random variables with an identical exponential distribution whose mean is 1/(kμ), then T = T1 + T2 + ⋯ + Tk has an Erlang distribution with parameters μ and k. The exponential and degenerate (constant) distributions are special cases of the Erlang distribution with k = 1 and k = ∞, respectively.

M/Ek/1 Model:
  Lq = [λ²/(k μ²) + ρ²]/[2(1 − ρ)] = [(1 + k)/(2k)] · λ²/[μ(μ − λ)]
  Wq = [(1 + k)/(2k)] · λ/[μ(μ − λ)]
  W = Wq + 1/μ
  L = λW

Nonpreemptive Priorities Model:
  Expected waiting time in the system (including service time) for a member of priority class k:
    Wk = 1/(A Bk−1 Bk) + 1/μ, for k = 1, 2, …, N
    where A = s! [(sμ − λ)/r^s] Σ(j=0 to s−1) r^j/j! + sμ
    B0 = 1
    Bk = 1 − Σ(i=1 to k) λi/(sμ)
    s = number of servers
    μ = mean service rate per busy server
    λi = mean arrival rate for priority class i
    λ = Σ(i=1 to N) λi
    r = λ/μ
    (requires Σ λi < sμ)
  Lk = λk Wk, for k = 1, 2, …, N
  For a single server (s = 1), A = μ²/λ
  With different exponential service rates (μk = mean service rate for priority class k, k = 1, 2, …, N):
    Wk = ak/(bk−1 bk) + 1/μk, for k = 1, 2, …, N
    where ak = Σ(i=1 to k) λi/μi²
    b0 = 1
    bk = 1 − Σ(i=1 to k) λi/μi
    (requires Σ(i=1 to k) λi/μi < 1)

Preemptive Priorities Model:
  For s = 1: Wk = (1/μ)/(Bk−1 Bk), for k = 1, 2, …, N
  Lk = λk Wk, for k = 1, 2, …, N

Jackson Networks:
  m service facilities, where facility i (i = 1, 2, …, m) has:
  1. an infinite queue
  2. customers arriving from outside the system according to a Poisson input process with parameter ai
  3. si servers with an exponential service-time distribution with parameter μi
  A customer leaving facility i is routed next to facility j (j = 1, 2, …, m) with probability pij, or departs the system with probability qi = 1 − Σ(j=1 to m) pij
  Each facility in a Jackson network behaves as if it were an independent M/M/s queuing system with arrival rate
    λj = aj + Σ(i=1 to m) λi pij, where sjμj > λj
  Utilization at facility i: ρi = λi/(siμi)
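The Pollaczek–Khintchine formula can be sketched with hypothetical rates; setting σ² = 0 recovers the M/D/1 result and σ² = 1/μ² recovers the M/M/1 result:

```python
# Pollaczek-Khintchine: Lq = (lambda^2 sigma^2 + rho^2) / (2 (1 - rho))
lam, mu = 2.0, 3.0
rho = lam / mu

def lq_mg1(sigma2):
    return (lam ** 2 * sigma2 + rho ** 2) / (2 * (1 - rho))

lq_md1 = lq_mg1(0.0)             # constant service times (M/D/1)
lq_mm1 = lq_mg1(1 / mu ** 2)     # exponential service times (M/M/1)
print(round(lq_md1, 4), round(lq_mm1, 4))
```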

MARKOV ANALYSIS
π(i) = vector of state probabilities for period i
  π = (π1, π2, π3, …, πn)
  n = number of states
  π1, π2, …, πn = probability of being in state 1, state 2, …, state n
Pij = conditional probability of being in state j in the future given the current state i
Matrix of transition probabilities:
  P = [P11 P12 ⋯ P1n; P21 P22 ⋯ P2n; ⋯; Pm1 Pm2 ⋯ Pmn]
For any period n we can compute the state probabilities for period n + 1:
  π(n + 1) = π(n)P
Equilibrium condition: π = πP
Fundamental matrix: F = (I − B)⁻¹
Inverse of a 2 × 2 matrix:
  P = [a b; c d]  ⇒  P⁻¹ = [d/r −b/r; −c/r a/r], where r = ad − bc (the determinant)
Partition of the matrix for absorbing states:
  P = [I O; A B]
  I = identity matrix; O = a matrix with all 0s
M represents the amount of money that is in each of the nonabsorbing states:
  M = (M1, M2, M3, …, Mn)
  n = number of nonabsorbing states
  M1 = amount in the first state or category; M2 = amount in the second; …; Mn = amount in the nth
Computing lambda and the consistency index (AHP):
  CI = (λ − n)/(n − 1)
Consistency ratio: CR = CI/RI
A stochastic process {Xt} (t = 0, 1, …) is a Markov chain if it has the Markovian property.
The process {Xt} is said to have the Markovian property if
  P{Xt+1 = j | X0 = k0, X1 = k1, …, Xt−1 = kt−1, Xt = i} = P{Xt+1 = j | Xt = i},
  for t = 0, 1, … and every sequence i, j, k0, k1, …, kt−1.
One-step transition probabilities: Pij = P{Xt+1 = j | Xt = i}
n-step transition probabilities:
  Pij(n) = P{Xt+n = j | Xt = i}
  Pij(n) ≥ 0, for all i, j; n = 0, 1, 2, …
  Σ(j=0 to M) Pij(n) = 1, for all i; n = 0, 1, 2, …
n-step transition matrix: P(n) = the (M + 1) × (M + 1) matrix with entries Pij(n), i, j = 0, 1, …, M
Chapman–Kolmogorov equations:
  Pij(n) = Σ(k=0 to M) Pik(m) Pkj(n−m),
  for all i = 0, 1, …, M; j = 0, 1, …, M; m = 1, 2, …, n − 1; n = m + 1, m + 2, …
  P(n) = P·P(n−1) = P(n−1)·P = P·P^(n−1) = P^(n−1)·P = P^n
Unconditional state probabilities:
  P{Xn = j} = P{X0 = 0}P0j(n) + P{X0 = 1}P1j(n) + ⋯ + P{X0 = M}PMj(n)
Steady-state probabilities:
  lim(n→∞) pij(n) = πj > 0
  lim(n→∞) (1/n) Σ(k=1 to n) pij(k) = πj
  πj = Σ(i=0 to M) πi pij, for j = 0, 1, …, M, with Σ(j=0 to M) πj = 1
Long-run expected average cost per unit time:
  lim(n→∞) E[(1/n) Σ(t=1 to n) C(Xt)] = Σ(j=0 to M) πj C(j)
First passage times:
  fij(1) = pij(1) = pij
  fij(2) = Σ(k≠j) pik fkj(1)
  fij(n) = Σ(k≠j) pik fkj(n−1)
  Σ(n=1 to ∞) fij(n) ≤ 1
Expected first passage time from state i to state j:
  μij = ∞, if Σ(n=1 to ∞) fij(n) < 1
  μij = Σ(n=1 to ∞) n fij(n), if Σ(n=1 to ∞) fij(n) = 1
  If Σ(n=1 to ∞) fij(n) = 1, then μij = 1 + Σ(k≠j) pik μkj
Absorbing states (probability of absorption into state k starting from state i):
  fik = Σ(j=0 to M) pij fjk, for i = 0, 1, …, M,
  subject to the conditions fkk = 1, and fik = 0 if state i is recurrent and i ≠ k
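The steady-state condition π = πP can be approached by iterating π(n+1) = π(n)P, as defined above; the 2-state transition matrix here is hypothetical:

```python
# Iterate pi(n+1) = pi(n) P until the state vector stops changing.
P = [[0.8, 0.2],
     [0.3, 0.7]]
pi = [1.0, 0.0]                      # start in state 1
for _ in range(100):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
print([round(x, 4) for x in pi])
```

For this matrix the exact equilibrium is π = (0.6, 0.4), which also solves π = πP directly.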

STATISTICAL QUALITY CONTROL

x̄-charts (known σ):
  Upper control limit (UCL) = x̄̄ + zσx̄
  Lower control limit (LCL) = x̄̄ − zσx̄
  x̄̄ = mean of the sample means
  z = number of normal standard deviations (2 for 95.5% confidence, 3 for 99.7%)
  σx̄ = standard deviation of the sampling distribution of the sample means = σ/√n

x̄-charts (using the sample range):
  UCLx̄ = x̄̄ + A2R̄
  LCLx̄ = x̄̄ − A2R̄
  R̄ = average range of the samples; A2 = mean factor (tabulated)

R-charts:
  UCLR = D4R̄
  LCLR = D3R̄
  UCLR = upper control chart limit for the range; LCLR = lower control chart limit for the range
  D4 and D3 = upper range and lower range factors (tabulated)
  Range of the sample = Xmax − Xmin

p-charts:
  UCLp = p̄ + zσp
  LCLp = p̄ − zσp
  p̄ = mean proportion or fraction defective in the sample
    = (Total number of errors)/(Total number of records examined)
  z = number of standard deviations
  σp = standard deviation of the sampling distribution, estimated by
    σ̂p = √(p̄(1 − p̄)/n), the estimated standard deviation of a binomial distribution,
    where n is the size of each sample

c-charts:
  The mean is c̄ and the standard deviation is equal to √c̄
  To compute the control limits we use c̄ ± 3√c̄ (3 is used for 99.7% and 2 is used for 95.5%)
  UCLc = c̄ + 3√c̄
  LCLc = c̄ − 3√c̄
Control Chart Model
k is the distance of control limits from the
center line, expressed in Standard Deviation
units. Common choice is k = 3.

UCL=W + k W
CL=W
LCL= W k W
W isthe mean of the sample W
W is the standard deviationof sample W

W =
n

X Control Chart
UCL=+3 / n
LCL=3 / n
CL=
Estimate of meanof the population , grand mean,
m
1

^= X = X i
m i=1
X isthe center line on X control chart
X control chart R
UCL=x + A2 r
CL=x
LCL=x A 2 r

26

R Control Chart

S Control Chart
m

1 Ri
Estimate of mean R is R=
m i=1
R
Estimate of , is ^ =
d2
Constant d 2 istabulated for various sample sizes
+ 3 R
UCL= X
d2 n
3

LCL= X
R
d2 n
3
A 2=
d 2 n
UCL=D 4 r
CL=r
LCL=D3 r
Where r isthe sample average range .
D3D4 are tabulated for various sample sizes
Moving Range Control Chart
m
1

MR=
|X X i1|
m1 i=2 i

MR
MR
Estimate of , is ^ =
=
d 2 1.128
CL, UCLLCL for control chart for individuals
mr

mr

UCL=x +3
=x +3
d2
1.128
CL=x
mr

mr

LCL=x 3
=x 3
d2
1.128
Control chart for moving ranges
UCL=D 4 mr=3.267

mr

CL=mr

LCL=D3 mr=0

as D3 is 0 for n=2.

PChart (Control Chart for Proportions)

m
m
= 1 Pi = 1 D i
P
m i=1
mn i=1
p (1 p )
UCL= p +3
n
CL= p
p (1 p )
LCL=p 3
n
Where p isthe observed value of average .

Average Run Length , ARL=

1
p

S
c4
Constant c 4 is tabulated for various sample sizes
s
UCL=s +3 1c 24
c4
CL=s
s
LCL=s 3 1c24
c4
Estimate of , is ^ =

X control chart S
s
UCL=x +3
c 4 n
CL=x
s
LCL=x 3
c4 n

ProcessCapability Ratio ( PCR )=

USLLSL
6 ^

r
^ =
d2
Onesided PCR , is PCR k =min

USL LSL
,
3
3

LSL
)

USL
P ( X >USL ) =P (Z >
)

U Chart (Control Chart for Defects per Unit)

m
1 Ui
U=
m i=1
u
UCL=u + 3
n
CL=u
u
LCL=u 3
n
Where u isthe average number of defects per unit

CUSUM Chart
(Cumulative Control Chart )
27

Where p isthe probability that a normally

distributed point falls outside the limits
when the process iscontrol
Exponentially Weighted Moving Average Control
Chart ( EWMA )
z t= x t + (1 ) z t1 for each time t

UCL=0 +3
[ 1(1)2 t ]
n 2
CL=0

LCL= 03
[ 1(1)2t ]
2
n

R
^ = ^ = S
^0= X
d2
c4

MR
For n=1, ^ 0= X ^ =
1.128

Upper one-sided CUSUM for period i:
s_H(i) = max[ 0, x_i − (μ₀ + K) + s_H(i−1) ]
Lower one-sided CUSUM for period i:
s_L(i) = max[ 0, (μ₀ − K) − x_i + s_L(i−1) ]
with starting values s_H(0) = s_L(0) = 0.

H is called the decision interval.
Reference value: K = δ/2, where the out-of-control mean is μ₁ = μ₀ + δ.
In standard-deviation units: K = kσ and H = hσ.

Estimate of the new process mean after a signal:
μ̂ = μ₀ + K + s_H(i)/n_H, if s_H(i) > H
μ̂ = μ₀ − K − s_L(i)/n_L, if s_L(i) > H
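The tabular CUSUM recursion above is easy to run by hand or in code. A minimal sketch: the target μ₀, reference value K, decision interval H, and data are all made up for illustration.

```python
# Sketch: one-sided tabular CUSUMs s_H and s_L with a decision interval H.
mu0, K, H = 10.0, 0.5, 1.0
xs = [9.8, 10.6, 11.2, 11.5, 10.9]

s_h = s_l = 0.0
signals = []
for i, xi in enumerate(xs, start=1):
    s_h = max(0.0, xi - (mu0 + K) + s_h)  # upper CUSUM accumulates upward shifts
    s_l = max(0.0, (mu0 - K) - xi + s_l)  # lower CUSUM accumulates downward shifts
    if s_h > H or s_l > H:
        signals.append(i)                 # out-of-control signal at period i
```

With these numbers the upward shift accumulates until the chart signals at period 4 and stays signaled at period 5.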

OTHERS
Computing lambda (λ) and the consistency index:
CI = (λ − n)/(n − 1)

Consistency Ratio: CR = CI/RI

Dynamic programming notation:
s_{n−1} = output from stage n (the input to one stage is the output from another stage)
t_n = transformation function at stage n
General formula to move from one stage to another using the transformation
function: s_{n−1} = t_n(s_n, d_n)
f_n = total return at stage n
Transformation functions: s_{n−1} = a_n·s_n + b_n·d_n + c_n
Return equations: r_n = a_n·s_n + b_n·d_n + c_n
Break-even analysis
Break-even point (units) = Fixed cost / (Price per unit − Variable cost per unit)

Probability of breaking even:
Z = (Break-even point − Mean demand)/σ
P(loss)   = P(demand < break-even)
P(profit) = P(demand > break-even)

EMV = (Price/unit − Variable cost/unit)·(Mean demand) − Fixed costs

Opportunity loss = K·(Break-even point − X)  for X ≤ break-even point
                 = $0                        for X > break-even point

Using the unit normal loss integral, EOL can be computed using
EOL = KσN(D)
where
EOL  = expected opportunity loss
K    = loss per unit when sales are below the break-even point
X    = sales in units
σ    = standard deviation of the demand distribution
N(D) = value of the unit normal loss integral for a given value of D
D    = |Break-even point − μ|/σ
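A quick numeric sketch of the break-even and EMV formulas (all costs and demand figures are invented for illustration):

```python
# Sketch: break-even point and expected monetary value.
fixed_cost = 12000.0
price = 8.0        # price per unit
var_cost = 3.0     # variable cost per unit

# BEP = fixed cost / contribution margin per unit
bep = fixed_cost / (price - var_cost)

mean_demand = 3000.0
# EMV = (price - variable cost) * mean demand - fixed costs
emv = (price - var_cost) * mean_demand - fixed_cost
```

With a $5 contribution margin, 2,400 units break even; expected demand of 3,000 units yields an EMV of $3,000.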

MATRIX ALGEBRA
Matrix multiplication:
[b]·[d e] = [bd be]
[c]         [cd ce]

[a b]·[e f] = [ae+bg  af+bh]
[c d] [g h]   [ce+dg  cf+dh]

Determinant of a 2×2 matrix [a b; c d]:
Determinant value = (a)(d) − (c)(b)

Determinant of a 3×3 matrix [a b c; d e f; g h i]:
Determinant value = aei + bfg + cdh − gec − hfa − idb

Cramer's rule:
X = (Numerical value of numerator determinant) / (Numerical value of denominator determinant)

Matrix inverse (2×2):
Original matrix: [a b; c d]
Determinant value of original matrix = ad − cb
Matrix of cofactors: [d −c; −b a]
Adjoint (transpose of the cofactor matrix): [d −b; −c a]
Inverse = (1/(ad − cb))·[d −b; −c a]
Equation for a line
Y = a + bX, where b is the slope of the line.
Given any two points (X₁, Y₁) and (X₂, Y₂):
b = (Change in Y)/(Change in X) = ΔY/ΔX = (Y₂ − Y₁)/(X₂ − X₁)

For a nonlinear function such as Y = X² − 4X + 6, the slope between two points can
be found with the same equation, b = ΔY/ΔX, but it changes from point to point.

Slope of the quadratic Y = aX² + bX + c:
Y₁ = aX² + bX + c
Y₂ = a(X + ΔX)² + b(X + ΔX) + c
ΔY = Y₂ − Y₁ = 2aX(ΔX) + a(ΔX)² + b(ΔX)
ΔY/ΔX = 2aX + a(ΔX) + b → 2aX + b as ΔX → 0

Derivative rules:
Y = C (a constant)   → Y′ = 0
Y = Xⁿ               → Y′ = nX^{n−1}
Y = cXⁿ              → Y′ = cnX^{n−1}
Y = 1/Xⁿ = X^{−n}    → Y′ = −nX^{−n−1}
Y = g(x) + h(x)      → Y′ = g′(x) + h′(x)
Y = g(x) − h(x)      → Y′ = g′(x) − h′(x)

INVENTORY (EOQ)
Total cost = (Total ordering cost) + (Total holding cost) + (Total purchase cost)

TC = (D/Q)·C_o + (Q/2)·C_h + D·C

Q   = order quantity
D   = annual demand
C_o = ordering cost per order
C_h = holding cost per unit per year
C   = purchase (material) cost per unit

Economic Order Quantity: set dTC/dQ = −(D·C_o)/Q² + C_h/2 = 0, giving
Q* = √(2DC_o/C_h)
Second derivative: d²TC/dQ² = 2DC_o/Q³ > 0, so Q* minimizes total cost.
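The EOQ result can be checked numerically. A minimal sketch with invented demand and cost figures:

```python
# Sketch: economic order quantity Q* = sqrt(2*D*Co/Ch) and the total-cost curve.
import math

D = 1200.0   # annual demand (units/year)
Co = 50.0    # ordering cost per order
Ch = 6.0     # holding cost per unit per year
C = 4.0      # purchase cost per unit

q_star = math.sqrt(2 * D * Co / Ch)

def total_cost(Q):
    # TC = ordering + holding + purchase cost
    return (D / Q) * Co + (Q / 2) * Ch + D * C

tc_at_optimum = total_cost(q_star)
```

At Q* the ordering and holding costs are equal, and TC is lower than at any other order quantity, consistent with the second-derivative check.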

PROBABILITY DISTRIBUTIONS
For each distribution: name, when to use, probability function, and
approximations/conditions.

E(X) is the expected value (mean); X_i are the random variable's possible values;
P(X_i) is the probability of each value.
E(X) = μ = Σ_{i=1}^{n} X_i·P(X_i)
Variance = σ² = Σ_{i=1}^{n} [X_i − E(X)]²·P(X_i)

Uniform (Discrete)
When to use: equal probability; finite number of possible values.
For a series of n values: f(x) = 1/n.
For a range of integers from a to b (a ≤ b): f(x) = 1/(b − a + 1)
Mean: μ = (b + a)/2
Variance: σ² = [(b − a + 1)² − 1]/12
Cumulative: F(x) = Σ_{u ≤ x} f(u)

Binomial / Bernoulli (Discrete)
Bernoulli trials: each trial is independent; the probability of success in a trial
is constant; only two possible outcomes. Unknown: number of successes. Known:
number of trials. X = number of trials that result in a success.

f(x) = C(n, x)·p^x·q^{n−x},  x = 0, 1, …, n, with q = 1 − p
Expected value: E(X) = μ = np
Variance: σ² = np(1 − p)
Binomial expansion: (a + b)^n = Σ_{k=0}^{n} C(n, k)·a^k·b^{n−k}

Approximations: if n is large (np > 5 and n(1 − p) > 5), approximate the binomial
by the normal, with continuity correction:
P(X ≤ x) ≈ P(X ≤ x + 0.5) and P(x ≤ X) ≈ P(x − 0.5 ≤ X).
If n is large and p is small, approximate by the Poisson with λ = np.
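The binomial pmf, mean, and variance above can be verified with the standard library. A minimal sketch with illustrative parameters n = 20, p = 0.3:

```python
# Sketch: binomial pmf via math.comb, plus mean/variance and the
# normal-approximation condition from the sheet (np > 5 and n(1-p) > 5).
import math

n, p = 20, 0.3

def binom_pmf(x):
    # f(x) = C(n, x) * p^x * (1-p)^(n-x)
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

mean = n * p                   # E(X) = np
var = n * p * (1 - p)          # V(X) = np(1-p)
normal_ok = n * p > 5 and n * (1 - p) > 5

total = sum(binom_pmf(x) for x in range(n + 1))  # pmf sums to 1
```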
Geometric (Discrete)
When to use: Bernoulli trials; memoryless. X = number of trials until the first success.
f(x) = (1 − p)^{x−1}·p,  x = 1, 2, …
Expected value: E(X) = μ = 1/p
Variance: σ² = (1 − p)/p²

Negative Binomial (Discrete)
When to use: unknown number of trials; known number of successes r.
X = number of trials required to obtain r successes.
f(x) = C(x − 1, r − 1)·(1 − p)^{x−r}·p^r,  x = r, r+1, r+2, …
Expected value: E(X) = μ = r/p
Variance: σ² = r(1 − p)/p²

Hypergeometric (Discrete)
When to use: trials are not independent; sampling without replacement.
X = number of successes in a sample of n objects drawn from N objects, of which K
objects are classified as successes.
f(x) = C(K, x)·C(N − K, n − x)/C(N, n),  max(0, n − N + K) ≤ x ≤ min(K, n)
E(X) = μ = np, where p = K/N
V(X) = V(X) of the binomial × ((N − n)/(N − 1)), where ((N − n)/(N − 1)) is called
the finite population correction factor.
If n << N or (n/N) < 0.1, the hypergeometric is approximately binomial.
Approximated by the normal if np > 5, n(1 − p) > 5, and (n/N) < 0.1.

Poisson (Discrete)
Poisson process: the probability of more than one event in a subinterval is zero;
the probability of one event in a subinterval is constant and proportional to the
length of the subinterval; events in disjoint subintervals are independent.
X = number of events in the interval.
Conditions: the arrival rate does not change over time; arrivals in disjoint time
intervals are independent.
f(x) = P(X = x) = e^{−λ}·λ^x/x!,  x = 0, 1, 2, …
Expected value = Variance = λ
Approximated by the normal if λ > 5.

Uniform (Continuous)
When to use: equal probability over an interval.
f(x) = 1/(b − a) for a ≤ x ≤ b
F(x) = 0 for x ≤ a;  (x − a)/(b − a) for a ≤ x ≤ b;  1 for b ≤ x
Mean: μ = (a + b)/2
Variance: σ² = V(X) = (b − a)²/12
P(x₁ ≤ X ≤ x₂) = P(x₁ < X ≤ x₂) = P(x₁ ≤ X < x₂) = P(x₁ < X < x₂)

For any continuous random variable:
E(X) = μ = ∫ x·f(x) dx
σ² = ∫ (x − μ)²·f(x) dx

Normal (Continuous)
Notation: N(μ, σ²). X is any random variable; Z = (X − μ)/σ is standard normal.
Cumulative: Φ(z) = P(Z ≤ z).

f(x) = (1/(σ√(2π)))·e^{−(x−μ)²/(2σ²)},  −∞ < x < ∞ and −∞ < μ < ∞
E(X) = μ; V(X) = σ²
Standard normal: mean = 0, variance = 1
P(X ≤ x) = P(Z ≤ (x − μ)/σ) = Φ((x − μ)/σ)

If n is large (np > 5, n(1 − p) > 5), the binomial is approximated by the normal:
P(X ≤ x) ≈ P(X ≤ x + 0.5) ≈ P( Z ≤ (x + 0.5 − np)/√(np(1 − p)) )
P(x ≤ X) ≈ P(x − 0.5 ≤ X) ≈ P( (x − 0.5 − np)/√(np(1 − p)) ≤ Z )
Adding or subtracting 0.5 is called the continuity correction.
The Poisson is approximated by the normal if λ > 5.
Exponential (Continuous)
Memoryless: P(X > t₁ + t₂ | X > t₁) = P(X > t₂)
X = distance between successive events of a Poisson process with mean λ > 0, or
the length until the first count in a Poisson process.

f(x) = λe^{−λx} for 0 ≤ x < ∞;  f(x) = 0 for x < 0
P(X > x) = 1 − F(x) = e^{−λx}
P(X ≤ x) = F(x) = 1 − e^{−λx}
Expected value = μ = 1/λ (e.g., the average service time)
Variance = σ² = 1/λ²
P(a < X < b) = F(b) − F(a)

The probability that an exponentially distributed time X required to serve a
customer is less than or equal to time t is given by P(X ≤ t) = 1 − e^{−λt}.
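Because the exponential CDF has a closed form, these probabilities need only `math.exp`. A minimal sketch, assuming an illustrative rate of λ = 2 events per unit time:

```python
# Sketch: exponential probabilities from F(x) = 1 - exp(-lam*x).
import math

lam = 2.0
mean = 1 / lam                         # expected time between events

def cdf(x):
    # F(x) = 1 - e^(-lam*x) for x >= 0, else 0
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

p_le_1 = cdf(1.0)                      # P(X <= 1)
p_gt_1 = 1 - p_le_1                    # P(X > 1) = e^(-lam)
p_between = cdf(1.5) - cdf(0.5)        # P(0.5 < X < 1.5) = F(b) - F(a)
```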

Erlang (Continuous)
r = shape; λ = scale. Times between events are independent.
X = length until r counts occur in a Poisson process (the sum of r independent
exponential random variables).
f(x) = λ^r·x^{r−1}·e^{−λx}/(r − 1)!  for x > 0 and r = 1, 2, …
For the mean and variance, the exponential values multiplied by r give the Erlang
values: μ = r/λ and σ² = r/λ².
Example: P(X > 0.1) = 1 − F(0.1)
If r = 1, the Erlang random variable is an exponential random variable.

Gamma (Continuous)
If r is an integer (r = 1, 2, …), the gamma distribution is the Erlang
distribution. The Erlang random variable is the time until the rth event in a
Poisson process, with independent times between events.
For λ = 1/2 and r = 1/2, 1, 3/2, 2, …, the gamma is the chi-square distribution.
Gamma function: Γ(r) = ∫₀^∞ x^{r−1}·e^{−x} dx, for r > 0
f(x) = λ^r·x^{r−1}·e^{−λx}/Γ(r), for x > 0
Mean: μ = r/λ
Variance: σ² = r/λ²

Weibull (Continuous)
δ = scale; β = shape. Includes a memory property. Models the time until failure of
many different physical systems.
f(x) = (β/δ)·(x/δ)^{β−1}·e^{−(x/δ)^β} for x > 0
Cumulative: F(x) = 1 − e^{−(x/δ)^β}
E(X) = μ = δ·Γ(1 + 1/β)
V(X) = σ² = δ²·Γ(1 + 2/β) − δ²·[Γ(1 + 1/β)]²
β = 1: Weibull is identical to the exponential.
β = 2: Weibull is identical to the Rayleigh distribution.

Lognormal (Continuous)
Includes a memory property. X = exp(W), where W is normally distributed with mean
θ and variance ω²; equivalently ln(X) = W, and X is lognormal. Often easier to
work with than the Weibull; the Weibull can be approximated by a lognormal with
suitable θ and ω².
F(x) = P(X ≤ x) = P(exp(W) ≤ x) = P(W ≤ ln x) = P( Z ≤ (ln x − θ)/ω ) = Φ((ln x − θ)/ω)
F(x) = 0 for x ≤ 0
f(x) = (1/(xω√(2π)))·exp[ −(ln x − θ)²/(2ω²) ] for 0 < x < ∞
E(X) = e^{θ + ω²/2}
V(X) = e^{2θ + ω²}·(e^{ω²} − 1)

Beta (Continuous)
Flexible but bounded over a finite range.
f(x) = [Γ(α + β)/(Γ(α)Γ(β))]·x^{α−1}·(1 − x)^{β−1} for 0 ≤ x ≤ 1
E(X) = μ = α/(α + β)
V(X) = σ² = αβ/[(α + β)²(α + β + 1)]

Power Law (Continuous)
Called a heavy-tailed distribution: f(x) decreases rapidly with x, but not as
rapidly as the exponential distribution.
A random variable described by its minimum value x_min and exponent α is said to
obey the power law distribution if its density is
f(x) = ((α − 1)/x_min)·(x/x_min)^{−α}
Normalize the function for a given set of parameters to obtain the constant.

Central Limit Theorem
If X₁, X₂, …, X_n is a random sample of size n taken from a population (either
finite or infinite) with mean μ and variance σ², and if X̄ is the sample mean, the
limiting form of the distribution of
Z = (X̄ − μ)/(σ/√n)
as n → ∞ is the standard normal distribution.

JOINT PROBABILITY DISTRIBUTIONS

Two or more Discrete Random Variables
Joint probability mass function: f_XY(x, y) = P(X = x, Y = y)
Marginals: f_X(x) = P(X = x) = Σ_y f_XY(x, y);  f_Y(y) = P(Y = y) = Σ_x f_XY(x, y)
Conditional: f_{Y|x}(y) = f_XY(x, y)/f_X(x), for f_X(x) > 0
Conditional mean: E(Y | x) = Σ_y y·f_{Y|x}(y)
Conditional variance: V(Y | x) = Σ_y (y − μ_{Y|x})²·f_{Y|x}(y)
Independence: f_{Y|x}(y) = f_Y(y); f_{X|y}(x) = f_X(x); f_XY(x, y) = f_X(x)·f_Y(y)
for all x and y.

For p discrete random variables X₁, X₂, …, X_p:
Joint probability mass function:
f_{X₁X₂…X_p}(x₁, x₂, …, x_p) = P(X₁ = x₁, X₂ = x₂, …, X_p = x_p)
for all points in the range of X₁, X₂, …, X_p.
Joint probability mass function of a subset:
f_{X₁X₂…X_k}(x₁, x₂, …, x_k) = P(X₁ = x₁, X₂ = x₂, …, X_k = x_k)
Marginal: f_{X_i}(x_i) = P(X_i = x_i) = Σ f_{X₁X₂…X_p}(x₁, x₂, …, x_p), summed over
all points of X₁, X₂, …, X_p for which X_i = x_i
Mean: E(X_i) = μ_{X_i} = Σ x_i·f_{X₁X₂…X_p}(x₁, x₂, …, x_p)
Variance: V(X_i) = σ²_{X_i} = Σ (x_i − μ_{X_i})²·f_{X₁X₂…X_p}(x₁, x₂, …, x_p)

Multinomial Probability Distribution
The random experiment that generates the probability distribution consists of a
series of n independent trials; the result of each trial can be categorized into
one of k classes.
P(X₁ = x₁, X₂ = x₂, …, X_k = x_k) = [n!/(x₁!x₂!…x_k!)]·p₁^{x₁}·p₂^{x₂}…p_k^{x_k}
for x₁ + x₂ + … + x_k = n and p₁ + p₂ + … + p_k = 1
E(X_i) = np_i;  V(X_i) = np_i(1 − p_i)

Two or more Continuous Random Variables
Marginal probability density functions:
f_X(x) = ∫ f_XY(x, y) dy;  f_Y(y) = ∫ f_XY(x, y) dx
Conditional: f_{Y|x}(y) = f_XY(x, y)/f_X(x), for f_X(x) > 0
Conditional mean: E(Y | x) = ∫ y·f_{Y|x}(y) dy
Conditional variance: V(Y | x) = ∫ (y − μ_{Y|x})²·f_{Y|x}(y) dy
Independence: f_{Y|x}(y) = f_Y(y); f_{X|y}(x) = f_X(x); f_XY(x, y) = f_X(x)·f_Y(y)
for all x and y.

For p continuous random variables:
Joint probability density function:
P[(X₁, X₂, …, X_p) ∈ B] = ∫…∫_B f_{X₁X₂…X_p}(x₁, x₂, …, x_p) dx₁…dx_p
Marginal: f_{X_i}(x_i) = ∫…∫ f_{X₁X₂…X_p}(x₁, …, x_p) dx₁…dx_{i−1} dx_{i+1}…dx_p,
integrated over all points for which X_i = x_i
Mean: E(X_i) = ∫…∫ x_i·f_{X₁X₂…X_p}(x₁, …, x_p) dx₁ dx₂…dx_p
Variance: V(X_i) = ∫…∫ (x_i − μ_{X_i})²·f_{X₁X₂…X_p}(x₁, …, x_p) dx₁ dx₂…dx_p

Covariance and Correlation
Covariance is a measure of the linear relationship between random variables. If
the relationship between the random variables is nonlinear, the covariance might
not be sensitive to the relationship.
Two random variables with nonzero correlation are said to be correlated. Similar
to the covariance, the correlation measures only the linear relationship between
random variables.
Covariance: σ_XY = E[(X − μ_X)(Y − μ_Y)] = E(XY) − μ_X·μ_Y
Correlation: ρ_XY = cov(X, Y)/√(V(X)V(Y)) = σ_XY/(σ_X·σ_Y)
If X and Y are independent random variables, σ_XY = ρ_XY = 0.

Bivariate Normal
f_XY(x, y; μ_X, μ_Y, σ_X, σ_Y, ρ) = [1/(2πσ_Xσ_Y√(1 − ρ²))] ·
  exp{ −[1/(2(1 − ρ²))]·[ (x − μ_X)²/σ_X² − 2ρ(x − μ_X)(y − μ_Y)/(σ_Xσ_Y) + (y − μ_Y)²/σ_Y² ] }
for −∞ < x < ∞ and −∞ < y < ∞, with parameters σ_X > 0, σ_Y > 0, −∞ < μ_X < ∞,
−∞ < μ_Y < ∞, and −1 < ρ < +1.

Marginal distribution: if X and Y have a bivariate normal distribution with joint
probability density f_XY(x, y; μ_X, μ_Y, σ_X, σ_Y, ρ), the marginal probability
distributions of X and Y are normal with means μ_X and μ_Y and standard deviations
σ_X and σ_Y, respectively.
Conditional distribution: the conditional probability distribution of Y given
X = x is normal with mean
μ_{Y|x} = μ_Y + ρ·(σ_Y/σ_X)·(x − μ_X)
and variance
σ²_{Y|x} = σ_Y²·(1 − ρ²)
Correlation: the correlation between X and Y is ρ.
If X and Y have a bivariate normal distribution with ρ = 0, X and Y are independent.

Linear Functions of Random Variables
Given random variables X₁, X₂, …, X_p and constants c₁, c₂, …, c_p,
Y = c₁X₁ + c₂X₂ + … + c_pX_p is a linear function of X₁, X₂, …, X_p.
Mean: E(Y) = c₁E(X₁) + c₂E(X₂) + … + c_pE(X_p) = c₁μ₁ + c₂μ₂ + … + c_pμ_p
Variance: V(Y) = c₁²V(X₁) + c₂²V(X₂) + … + c_p²V(X_p) + 2·ΣΣ_{i<j} c_i·c_j·cov(X_i, X_j)
If X₁, X₂, …, X_p are independent: V(Y) = c₁²σ₁² + c₂²σ₂² + … + c_p²σ_p²
Mean and variance of an average, with E(X_i) = μ and V(X_i) = σ²:
E(X̄) = μ;  V(X̄) = σ²/p

General Functions of Random Variables
For a transformation Y = h(X) with inverse x = u(y), the density of Y is
f_Y(y) = f_X(u(y))·|J|, and the absolute value of the Jacobian J is used.

CONFIDENCE INTERVALS AND HYPOTHESIS TESTS

x̄ is an estimator of μ; S² is the sample variance.

Type I Error: rejecting the null hypothesis H₀ when it is true. Type II Error:
failing to reject the null hypothesis H₀ when it is false.
Probability of Type I Error = α = P(type I error) = significance level = α-error
= α-level = size of the test.
Probability of Type II Error = β = P(type II error).
Power = probability of rejecting the null hypothesis H₀ when the alternative
hypothesis is true = 1 − β = probability of correctly rejecting a false null
hypothesis.
P-value = smallest level of significance that would lead to the rejection of the
null hypothesis H₀.

Inference on the mean, variance known (z):
100(1 − α)% CI on μ:
x̄ − z_{α/2}·σ/√n ≤ μ ≤ x̄ + z_{α/2}·σ/√n
z_{α/2} is the upper 100α/2 percentage point of the standard normal distribution.
Sample size: we are 100(1 − α)% confident that the error |x̄ − μ| will not exceed
E when the sample size is n = (z_{α/2}·σ/E)².
100(1 − α)% upper confidence bound for μ: μ ≤ x̄ + z_α·σ/√n
100(1 − α)% lower confidence bound for μ: x̄ − z_α·σ/√n ≤ μ
Large-sample CI: by the central limit theorem, X̄ has approximately a normal
distribution with mean μ and variance σ²/n, so x̄ ± z_{α/2}·S/√n is a
large-sample CI on μ.
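The z interval and sample-size formulas above take one line each. A minimal sketch with invented summary statistics; 1.96 is the tabulated z value for α = 0.05:

```python
# Sketch: 100(1-alpha)% z confidence interval on mu (sigma known) and the
# sample size needed for a target error bound E.
import math

xbar, sigma, n = 50.0, 4.0, 25
z = 1.96                        # z_{alpha/2} for alpha = 0.05

half_width = z * sigma / math.sqrt(n)
ci = (xbar - half_width, xbar + half_width)

E = 1.0                         # target error bound |xbar - mu|
n_needed = math.ceil((z * sigma / E) ** 2)   # round up to the next whole unit
```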

Null hypothesis: H₀: μ = μ₀
Test statistic: Z₀ = (X̄ − μ₀)/(σ/√n)

Alternative hypothesis | P-value                                        | Rejection criteria
H₁: μ ≠ μ₀             | probability above |z₀| and below −|z₀|:        | z₀ > z_{α/2} or z₀ < −z_{α/2}
                       | P = 2[1 − Φ(|z₀|)]                             |
H₁: μ > μ₀             | probability above z₀: P = 1 − Φ(z₀)            | z₀ > z_α
H₁: μ < μ₀             | probability below z₀: P = Φ(z₀)                | z₀ < −z_α

Probability of Type II Error for a two-sided test:
β = Φ( z_{α/2} − δ√n/σ ) − Φ( −z_{α/2} − δ√n/σ ), where δ = μ − μ₀

Sample size for a two-sided test: n = (z_{α/2} + z_β)²σ²/δ², where δ = μ − μ₀
Sample size for a one-sided test: n = (z_α + z_β)²σ²/δ², where δ = μ − μ₀
Large sample: for n > 40, replace the population standard deviation σ with the
sample standard deviation s.
Parameter for operating characteristic curves: d = |μ − μ₀|/σ = |δ|/σ

t distribution (similar to the normal in symmetry and unimodality, but the t
distribution has heavier tails than the normal), with k = n − 1 degrees of freedom:

T = (X̄ − μ)/(S/√n)

f(x) = Γ[(k + 1)/2] / [√(πk)·Γ(k/2)] · 1/[(x²/k) + 1]^{(k+1)/2}
Mean = 0

100(1 − α)% CI on μ:
x̄ − t_{α/2,n−1}·s/√n ≤ μ ≤ x̄ + t_{α/2,n−1}·s/√n
t_{α/2,n−1} is the upper 100α/2 percentage point of the t distribution with n − 1
degrees of freedom.
100(1 − α)% upper confidence bound for μ: μ ≤ x̄ + t_{α,n−1}·s/√n
100(1 − α)% lower confidence bound for μ: x̄ − t_{α,n−1}·s/√n ≤ μ
Finding the sample size: can be obtained only by trial and error, as s is not
known until the data are collected.

Null hypothesis: H₀: μ = μ₀
Test statistic: T₀ = (X̄ − μ₀)/(S/√n)

Alternative hypothesis | P-value                               | Rejection criteria
H₁: μ ≠ μ₀             | probability above |t₀| and below −|t₀| | t₀ > t_{α/2,n−1} or t₀ < −t_{α/2,n−1}
H₁: μ > μ₀             | probability above t₀                  | t₀ > t_{α,n−1}
H₁: μ < μ₀             | probability below t₀                  | t₀ < −t_{α,n−1}

Probability of Type II Error for a two-sided t-test:
β = P( −t_{α/2,n−1} ≤ T₀ ≤ t_{α/2,n−1} | δ ≠ 0 ) = P( −t_{α/2,n−1} ≤ T′₀ ≤ t_{α/2,n−1} )
When the true value of the mean is μ = μ₀ + δ, the distribution of T₀ is called
the noncentral t distribution with n − 1 degrees of freedom and noncentrality
parameter δ√n/σ. If δ = 0, the noncentral t distribution reduces to the usual
central t distribution. T′₀ denotes the noncentral t random variable.
Parameter for operating characteristic curves: d = |μ − μ₀|/σ = |δ|/σ
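The one-sample t statistic can be computed with the stdlib `statistics` module. A minimal sketch: the data are invented, and 2.365 is the tabulated t critical value t_{0.025,7} for a two-sided test at α = 0.05 with n = 8:

```python
# Sketch: one-sample t statistic T0 = (xbar - mu0) / (s / sqrt(n)).
import math
import statistics

data = [9.8, 10.2, 10.4, 9.9, 10.1, 10.3, 9.7, 10.2]
mu0 = 10.0
n = len(data)

xbar = statistics.mean(data)
s = statistics.stdev(data)              # sample standard deviation (n-1 divisor)
t0 = (xbar - mu0) / (s / math.sqrt(n))

t_crit = 2.365                          # t_{alpha/2, n-1}, tabulated
reject = abs(t0) > t_crit               # two-sided rejection rule
```

Here |t₀| ≈ 0.85 < 2.365, so H₀: μ = 10 is not rejected.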

Chi-square (χ²) distribution with n − 1 degrees of freedom, k = n − 1:

X² = (n − 1)S²/σ²

f(x) = [1/(2^{k/2}·Γ(k/2))]·x^{(k/2)−1}·e^{−x/2},  x > 0
Mean = k; Variance = 2k

100(1 − α)% CI on σ²:
(n − 1)s²/χ²_{α/2,n−1} ≤ σ² ≤ (n − 1)s²/χ²_{1−α/2,n−1}
χ²_{α/2,n−1} and χ²_{1−α/2,n−1} are the upper and lower 100α/2 percentage points
of the chi-square distribution.
100(1 − α)% upper confidence bound: σ² ≤ (n − 1)s²/χ²_{1−α,n−1}
100(1 − α)% lower confidence bound: (n − 1)s²/χ²_{α,n−1} ≤ σ²

Null hypothesis: H₀: σ² = σ₀²
Test statistic: χ²₀ = (n − 1)S²/σ₀²

Alternative hypothesis | Rejection criteria
H₁: σ² ≠ σ₀²           | χ²₀ > χ²_{α/2,n−1} or χ²₀ < χ²_{1−α/2,n−1}
H₁: σ² > σ₀²           | χ²₀ > χ²_{α,n−1}
H₁: σ² < σ₀²           | χ²₀ < χ²_{1−α,n−1}

Inference on a population proportion:
Z = (X − np)/√(np(1 − p)) = (p̂ − p)/√(p(1 − p)/n)
is approximately standard normal. p is the population proportion.
Mean of p̂ = p. Variance of p̂ = p(1 − p)/n.

100(1 − α)% CI on the proportion p:
p̂ − z_{α/2}·√(p̂(1 − p̂)/n) ≤ p ≤ p̂ + z_{α/2}·√(p̂(1 − p̂)/n)
z_{α/2} is the upper 100α/2 percentage point of the standard normal distribution.
Sample size: n = (z_{α/2}/E)²·p(1 − p); p can be computed from a preliminary
sample estimate p̂, or use the maximum value of p(1 − p), which occurs at p = 0.5.
100(1 − α)% upper confidence bound: p ≤ p̂ + z_α·√(p̂(1 − p̂)/n)
100(1 − α)% lower confidence bound: p̂ − z_α·√(p̂(1 − p̂)/n) ≤ p

Null hypothesis: H₀: p = p₀
Test statistic: Z₀ = (X − np₀)/√(np₀(1 − p₀))

Alternative hypothesis | P-value                                 | Rejection criteria
H₁: p ≠ p₀             | P = 2[1 − Φ(|z₀|)]                      | z₀ > z_{α/2} or z₀ < −z_{α/2}
H₁: p > p₀             | P = 1 − Φ(z₀)                           | z₀ > z_α
H₁: p < p₀             | P = Φ(z₀)                               | z₀ < −z_α

Probability of Type II Error for a two-sided test:
β = Φ( (p₀ − p + z_{α/2}√(p₀(1 − p₀)/n)) / √(p(1 − p)/n) )
  − Φ( (p₀ − p − z_{α/2}√(p₀(1 − p₀)/n)) / √(p(1 − p)/n) )
Sample size for a two-sided test:
n = [ (z_{α/2}√(p₀(1 − p₀)) + z_β√(p(1 − p))) / (p − p₀) ]²
Sample size for a one-sided test:
n = [ (z_α√(p₀(1 − p₀)) + z_β√(p(1 − p))) / (p − p₀) ]²

Inference on the difference of two means, variances known:
100(1 − α)% CI on μ₁ − μ₂:
x̄₁ − x̄₂ − z_{α/2}·√(σ₁²/n₁ + σ₂²/n₂) ≤ μ₁ − μ₂ ≤ x̄₁ − x̄₂ + z_{α/2}·√(σ₁²/n₁ + σ₂²/n₂)
z_{α/2} is the upper 100α/2 percentage point of the standard normal distribution.
Sample size: we are 100(1 − α)% confident that the error in estimating μ₁ − μ₂ by
x̄₁ − x̄₂ will not exceed a specified amount E when each sample size is
n = (z_{α/2}/E)²·(σ₁² + σ₂²).
100(1 − α)% upper confidence bound: μ₁ − μ₂ ≤ x̄₁ − x̄₂ + z_α·√(σ₁²/n₁ + σ₂²/n₂)
100(1 − α)% lower confidence bound: x̄₁ − x̄₂ − z_α·√(σ₁²/n₁ + σ₂²/n₂) ≤ μ₁ − μ₂

Z = (X̄₁ − X̄₂ − (μ₁ − μ₂)) / √(σ₁²/n₁ + σ₂²/n₂)

Probability of Type II Error for a two-sided test:
β = Φ( z_{α/2} − (Δ − Δ₀)/√(σ₁²/n₁ + σ₂²/n₂) ) − Φ( −z_{α/2} − (Δ − Δ₀)/√(σ₁²/n₁ + σ₂²/n₂) )
Sample size for a two-sided test, with n₁ = n₂ = n:
n = (z_{α/2} + z_β)²·(σ₁² + σ₂²)/(Δ − Δ₀)²
Sample size for a one-sided test, with n₁ = n₂ = n:
n = (z_α + z_β)²·(σ₁² + σ₂²)/(Δ − Δ₀)²
Parameter for operating characteristic curves:
d = |μ₁ − μ₂ − Δ₀|/√(σ₁² + σ₂²) = |Δ − Δ₀|/√(σ₁² + σ₂²)

Null hypothesis: H₀: μ₁ − μ₂ = Δ₀
Test statistic: Z₀ = (X̄₁ − X̄₂ − Δ₀)/√(σ₁²/n₁ + σ₂²/n₂)

Alternative hypothesis | P-value                | Rejection criteria
H₁: μ₁ − μ₂ ≠ Δ₀       | P = 2[1 − Φ(|z₀|)]     | z₀ > z_{α/2} or z₀ < −z_{α/2}
H₁: μ₁ − μ₂ > Δ₀       | P = 1 − Φ(z₀)          | z₀ > z_α
H₁: μ₁ − μ₂ < Δ₀       | P = Φ(z₀)              | z₀ < −z_α

Inference on the difference of two means, variances unknown.

Case 1: equal variances assumed.
Pooled estimator of σ², denoted S_p²:
S_p² = [ (n₁ − 1)S₁² + (n₂ − 1)S₂² ] / (n₁ + n₂ − 2)

100(1 − α)% CI on μ₁ − μ₂ with equal variances assumed:
x̄₁ − x̄₂ − t_{α/2,n₁+n₂−2}·S_p·√(1/n₁ + 1/n₂) ≤ μ₁ − μ₂
  ≤ x̄₁ − x̄₂ + t_{α/2,n₁+n₂−2}·S_p·√(1/n₁ + 1/n₂)
t_{α/2,n₁+n₂−2} is the upper α/2 percentage point of the t distribution with
n₁ + n₂ − 2 degrees of freedom.

Test statistic: T₀ = (X̄₁ − X̄₂ − Δ₀)/(S_p·√(1/n₁ + 1/n₂))
has a t distribution with n₁ + n₂ − 2 degrees of freedom; called the pooled t-test.

Alternative hypothesis | P-value                               | Rejection criteria
H₁: μ₁ − μ₂ ≠ Δ₀       | probability above |t₀| and below −|t₀| | t₀ > t_{α/2,n₁+n₂−2} or t₀ < −t_{α/2,n₁+n₂−2}
H₁: μ₁ − μ₂ > Δ₀       | probability above t₀                  | t₀ > t_{α,n₁+n₂−2}
H₁: μ₁ − μ₂ < Δ₀       | probability below t₀                  | t₀ < −t_{α,n₁+n₂−2}

Case 2: variances not assumed equal.
If H₀: μ₁ − μ₂ = Δ₀ is true, the statistic
T₀* = (X̄₁ − X̄₂ − Δ₀)/√(S₁²/n₁ + S₂²/n₂)
is distributed approximately as t with degrees of freedom given by
ν = (S₁²/n₁ + S₂²/n₂)² / [ (S₁²/n₁)²/(n₁ − 1) + (S₂²/n₂)²/(n₂ − 1) ]
If ν is not an integer, round down to the nearest integer.

100(1 − α)% CI on μ₁ − μ₂ with unequal variances:
x̄₁ − x̄₂ − t_{α/2,ν}·√(s₁²/n₁ + s₂²/n₂) ≤ μ₁ − μ₂ ≤ x̄₁ − x̄₂ + t_{α/2,ν}·√(s₁²/n₁ + s₂²/n₂)
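The Satterthwaite degrees-of-freedom formula for the unequal-variance case, with the round-down step the sheet prescribes. A minimal sketch with invented sample summaries:

```python
# Sketch: approximate degrees of freedom nu for the unequal-variance t statistic,
# rounded down to the nearest integer as required.
import math

s1_sq, n1 = 4.0, 10    # sample variance and size, group 1
s2_sq, n2 = 9.0, 15    # sample variance and size, group 2

num = (s1_sq / n1 + s2_sq / n2) ** 2
den = (s1_sq / n1) ** 2 / (n1 - 1) + (s2_sq / n2) ** 2 / (n2 - 1)
nu = math.floor(num / den)    # round down to the nearest integer
```

The unrounded value here is about 22.99, so ν = 22 rather than n₁ + n₂ − 2 = 23.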

Goodness-of-fit test:
χ²₀ = Σ_{i=1}^{k} (O_i − E_i)²/E_i
where O_i is the observed frequency and E_i is the expected frequency in the ith
class. For large n, the statistic is approximately chi-square with k − p − 1
degrees of freedom, where p represents the number of estimated parameters. If the
test statistic χ²₀ > χ²_{α,k−p−1}, reject the null hypothesis.
P-value: P( χ²_{k−p−1} > χ²₀ ).

Contingency tables (r rows, c columns):
û_i = (1/n)·Σ_{j=1}^{c} O_ij   and   v̂_j = (1/n)·Σ_{i=1}^{r} O_ij
Expected frequency of each cell: E_ij = n·û_i·v̂_j = (Σ_{j=1}^{c} O_ij)(Σ_{i=1}^{r} O_ij)/n
χ²₀ = Σ_{i=1}^{r} Σ_{j=1}^{c} (O_ij − E_ij)²/E_ij
For large n, the statistic is approximately chi-square with (r − 1)(c − 1) degrees
of freedom. P-value: P( χ²_{(r−1)(c−1)} > χ²₀ ).
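The goodness-of-fit statistic is a short sum. A minimal sketch: the counts are invented, no parameters are estimated (so df = k − 1 = 3), and 7.815 is the tabulated chi-square upper 5% point with 3 degrees of freedom:

```python
# Sketch: chi-square goodness-of-fit statistic for observed vs expected counts.
observed = [18, 25, 32, 25]
expected = [20, 25, 30, 25]

chi2_0 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1          # k - p - 1 with p = 0 estimated parameters
chi2_crit = 7.815               # chi-square upper 5% point, 3 df (tabulated)
reject = chi2_0 > chi2_crit
```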
100(1 − α)% prediction interval on a single future observation from a normal
distribution:
x̄ − t_{α/2,n−1}·s·√(1 + 1/n) ≤ X_{n+1} ≤ x̄ + t_{α/2,n−1}·s·√(1 + 1/n)
The prediction interval for X_{n+1} will always be longer than the CI for μ
because there is more variability associated with the prediction error than with
the error of estimation.

Tolerance interval for capturing at least γ% of the values of a normal
distribution with confidence level 100(1 − α)%:
(x̄ − ks, x̄ + ks)
where k is a tolerance interval factor for the given confidence.

Sign test:
Null hypothesis: H₀: μ̃ = μ̃₀ (μ̃ is the population median). Form the differences
X_i − μ̃₀; the test statistic is the number of positive differences, r⁺. If the
P-value is less than some preselected level α, reject H₀.
One-sided hypotheses:
H₁: μ̃ > μ̃₀  →  P-value: P(R⁺ ≥ r⁺ when p = 1/2)
H₁: μ̃ < μ̃₀  →  P-value: P(R⁺ ≤ r⁺ when p = 1/2)
Two-sided hypothesis H₁: μ̃ ≠ μ̃₀:
If r⁺ < n/2, P-value = 2·P(R⁺ ≤ r⁺ when p = 1/2)
If r⁺ > n/2, P-value = 2·P(R⁺ ≥ r⁺ when p = 1/2)
Normal approximation for the sign test statistic:
Z₀ = (R⁺ − 0.5n)/(0.5√n)

Wilcoxon signed-rank test:
Null hypothesis: H₀: μ = μ₀. Sort the absolute differences |X_i − μ₀| in
ascending order and rank them; give the ranks the signs of the corresponding
differences.
Sum of positive ranks: W⁺; absolute value of the sum of negative ranks: W⁻;
W = min(W⁺, W⁻).
H₁: μ ≠ μ₀ — reject H₀ if the observed value of the statistic w ≤ w*_α
H₁: μ > μ₀ — reject H₀ if w⁻ ≤ w*_α
H₁: μ < μ₀ — reject H₀ if w⁺ ≤ w*_α
Normal approximation for the Wilcoxon signed-rank test statistic:
Z₀ = ( W⁺ − n(n + 1)/4 ) / √( n(n + 1)(2n + 1)/24 )

Wilcoxon rank-sum test:
Arrange all n₁ + n₂ observations in ascending order of magnitude and assign ranks
to them. If two or more observations are tied (identical), use the mean of the
ranks that would have been assigned if the observations differed. W₁ is the sum
of ranks in the smaller sample; W₂ is the sum of ranks in the other sample:
W₂ = (n₁ + n₂)(n₁ + n₂ + 1)/2 − W₁
For one-sided hypotheses: for H₁: μ₁ < μ₂ reject H₀ if w₁ ≤ w*_α; for
H₁: μ₁ > μ₂ reject H₀ if w₂ ≤ w*_α.
Normal approximation when n₁ and n₂ > 8:
Z₀ = (W₁ − μ_{W₁})/σ_{W₁}

Paired t-test and CI:
100(1 − α)% CI on μ_D:
d̄ − t_{α/2,n−1}·S_D/√n ≤ μ_D ≤ d̄ + t_{α/2,n−1}·S_D/√n
t_{α/2,n−1} is the upper α/2 percentage point of the t distribution with n − 1
degrees of freedom.

Null hypothesis: H₀: μ_D = Δ₀
Test statistic: T₀ = (D̄ − Δ₀)/(S_D/√n)
where D̄ is the sample average of the n differences D₁, D₂, …, D_n, and S_D is
the sample standard deviation of these differences.

Alternative hypothesis | P-value                               | Rejection criteria
H₁: μ_D ≠ Δ₀           | probability above |t₀| and below −|t₀| | t₀ > t_{α/2,n−1} or t₀ < −t_{α/2,n−1}
H₁: μ_D > Δ₀           | probability above t₀                  | t₀ > t_{α,n−1}
H₁: μ_D < Δ₀           | probability below t₀                  | t₀ < −t_{α,n−1}
F Distribution
Let W and Y be independent chi-square random variables with u and v degrees of
freedom respectively. The ratio
F = (W/u)/(Y/v)
has the F distribution with u numerator and v denominator degrees of freedom:
f(x) = [ Γ((u + v)/2)·(u/v)^{u/2}·x^{u/2−1} ] / [ Γ(u/2)·Γ(v/2)·((u/v)x + 1)^{(u+v)/2} ],  0 < x < ∞
Mean = v/(v − 2), for v > 2
Variance = 2v²(u + v − 2) / [ u(v − 2)²(v − 4) ], for v > 4
Lower-tail points: f_{1−α,u,v} = 1/f_{α,v,u}

F = (S₁²/σ₁²)/(S₂²/σ₂²)
has the F distribution with n₁ − 1 numerator degrees of freedom and n₂ − 1
denominator degrees of freedom.

Null hypothesis: H₀: σ₁² = σ₂²
Test statistic: F₀ = S₁²/S₂²

Alternative hypothesis | Rejection criteria
H₁: σ₁² ≠ σ₂²          | f₀ > f_{α/2,n₁−1,n₂−1} or f₀ < f_{1−α/2,n₁−1,n₂−1}
H₁: σ₁² > σ₂²          | f₀ > f_{α,n₁−1,n₂−1}
H₁: σ₁² < σ₂²          | f₀ < f_{1−α,n₁−1,n₂−1}
P-value is the area (probability) under the F distribution with n₁ − 1 and n₂ − 1
degrees of freedom that lies beyond the computed value of the test statistic f₀.

100(1 − α)% CI on the ratio σ₁²/σ₂²:
(s₁²/s₂²)·f_{1−α/2,n₂−1,n₁−1} ≤ σ₁²/σ₂² ≤ (s₁²/s₂²)·f_{α/2,n₂−1,n₁−1}
f_{α/2,n₂−1,n₁−1} and f_{1−α/2,n₂−1,n₁−1} are the upper and lower α/2 percentage
points of the F distribution with n₂ − 1 numerator and n₁ − 1 denominator degrees
of freedom.

Inference on two population proportions:
Null hypothesis: H₀: p₁ = p₂

Z = ( P̂₁ − P̂₂ − (p₁ − p₂) ) / √( p₁(1 − p₁)/n₁ + p₂(1 − p₂)/n₂ )

Test statistic:
Z₀ = ( P̂₁ − P̂₂ ) / √( P̂(1 − P̂)·(1/n₁ + 1/n₂) )
where P̂ is the pooled proportion of the two samples combined.

Alternative hypothesis | P-value                | Rejection criteria (fixed-level tests)
H₁: p₁ ≠ p₂            | P = 2[1 − Φ(|z₀|)]     | z₀ > z_{α/2} or z₀ < −z_{α/2}
H₁: p₁ > p₂            | P = 1 − Φ(z₀)          | z₀ > z_α
H₁: p₁ < p₂            | P = Φ(z₀)              | z₀ < −z_α

Probability of Type II Error for a two-sided test:
β = Φ( [ z_{α/2}·√(p̄q̄(1/n₁ + 1/n₂)) − (p₁ − p₂) ] / σ_{P̂₁−P̂₂} )
  − Φ( [ −z_{α/2}·√(p̄q̄(1/n₁ + 1/n₂)) − (p₁ − p₂) ] / σ_{P̂₁−P̂₂} )
where p̄ = (n₁p₁ + n₂p₂)/(n₁ + n₂) and q̄ = [n₁(1 − p₁) + n₂(1 − p₂)]/(n₁ + n₂)

Sample size for a two-sided test:
n = [ z_{α/2}·√((p₁ + p₂)(q₁ + q₂)/2) + z_β·√(p₁q₁ + p₂q₂) ]² / (p₁ − p₂)²
where q₁ = 1 − p₁ and q₂ = 1 − p₂

100(1 − α)% CI on the difference in the true proportions p₁ − p₂:
p̂₁ − p̂₂ − z_{α/2}·√( p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂ ) ≤ p₁ − p₂
  ≤ p̂₁ − p̂₂ + z_{α/2}·√( p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂ )
z_{α/2} is the upper α/2 percentage point of the standard normal distribution.
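The pooled two-proportion z test above is a few lines of arithmetic. A minimal sketch with invented counts; 1.96 is the tabulated z_{0.025}:

```python
# Sketch: two-proportion z test with the pooled proportion estimate.
import math

x1, n1 = 45, 100    # successes and sample size, group 1
x2, n2 = 30, 100    # successes and sample size, group 2

p1_hat, p2_hat = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)            # pooled proportion under H0: p1 = p2

se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z0 = (p1_hat - p2_hat) / se
reject = abs(z0) > 1.96                    # two-sided test at alpha = 0.05
```

Here z₀ ≈ 2.19 > 1.96, so H₀: p₁ = p₂ is rejected at the 5% level.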

Inferences:
1. Population is normal: sign test or t-test.
   a. The t-test has the smallest value of β for a given significance level α, so
      the t-test is superior to the other tests.
2. Population is symmetric but not normal (but with finite mean):
   a. The t-test will have a smaller β (i.e., higher power) than the sign test.
   b. The Wilcoxon signed-rank test is comparable to the t-test.
3. Distribution with heavier tails:
   a. The Wilcoxon signed-rank test is better than the t-test, as the t-test
      depends on the sample mean, which is unstable in heavy-tailed distributions.
4. Distribution is not close to normal:
   a. The Wilcoxon signed-rank test is preferred.
5. Paired observations:
   a. Both the sign test and the Wilcoxon signed-rank test can be applied. In the
      sign test, the null hypothesis is that the median of the differences equals
      zero; in the Wilcoxon signed-rank test, the null hypothesis is that the mean
      of the differences equals zero.