
Monte Carlo Integral: I = ∫_a^b f(x) dx, where x follows Uniform(a, b) with sample size n
Estimated integral: F_n = (b − a) · (1/n) ∑_{i=1}^{n} f(x_i)
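The estimator above can be sketched in Python; the function name and the test integrand (x^2 on [0, 1]) are illustrative, not from the sheet:

```python
import random

def mc_integral(f, a, b, n=100_000):
    """Monte Carlo estimate of I = integral of f over [a, b]:
    F_n = (b - a) * (1/n) * sum of f at n Uniform(a, b) samples."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

random.seed(0)
# Integral of x^2 over [0, 1] is exactly 1/3; the estimate should be close
est = mc_integral(lambda x: x * x, 0.0, 1.0)
```

The error of this estimator shrinks as O(1/√n), independent of dimension, which is why the method is favored for high-dimensional integrals.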
Forward Euler: Y(t_{n+1}) = Y(t_n) + h · (dY/dt)(t_n)
Backward Euler (Gear 1): Y(t_{n+1}) = Y(t_n) + h · (dY/dt)(t_{n+1})
Trapezoidal Rule: Y(t_{n+1}) = Y(t_n) + (h/2) [(dY/dt)(t_{n+1}) + (dY/dt)(t_n)]
Gear 2: Y(t_{n+1}) = (4/3) Y(t_n) − (1/3) Y(t_{n−1}) + (2/3) h · (dY/dt)(t_{n+1}), where step size h = t_{n+1} − t_n
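A minimal sketch of the explicit (forward) Euler update; the test problem dy/dt = −2y with exact solution e^(−2t), and the step size, are illustrative choices:

```python
def forward_euler(f, y0, t0, t1, h):
    """Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n), stepped from t0 to t1."""
    t, y = t0, y0
    n_steps = round((t1 - t0) / h)
    for i in range(n_steps):
        y = y + h * f(t, y)
        t = t0 + (i + 1) * h  # recompute t to avoid float drift
    return y

# dy/dt = -2y, y(0) = 1; exact solution y(t) = exp(-2t)
y_fe = forward_euler(lambda t, y: -2.0 * y, 1.0, 0.0, 1.0, 0.001)

# For this linear problem, the implicit (backward) Euler step solves to
# y_{n+1} = y_n / (1 + 2h): the derivative is taken at the *new* time point.
```

The forward method is explicit (one function evaluation per step); the backward and trapezoidal rules are implicit and generally require solving for Y(t_{n+1}), e.g. via Newton-Raphson.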
Newton-Raphson method:
Taylor series: f(x) = f(a) + (f′(a)/1!)(x − a) + (f″(a)/2!)(x − a)^2 + ... + (f^(n)(a)/n!)(x − a)^n
First-order truncation: f(x) ≈ f(a) + f′(a)(x − a)
Iteration: x^(1) = x^(0) − f(x^(0)) / f′(x^(0))
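The iteration can be sketched as follows; the function name, tolerance, and the √2 test case are illustrative:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: repeat x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)
    return x

# Root of f(x) = x^2 - 2 (i.e. sqrt(2)), starting from x0 = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Convergence is quadratic near a simple root, but the method can diverge if f′(x) is near zero at an iterate.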
Review On Probability:

Expectation: Discrete: E[X] = ∑_i x_i P{X = x_i}   Continuous: E[X] = ∫_{−∞}^{∞} x f(x) dx
Discrete: E[g(X)] = ∑_x g(x) P(x)   Continuous: E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx
E[aX + b] = a E[X] + b
E[∑_{i=1}^{n} X_i] = ∑_{i=1}^{n} E[X_i]

Variance: Var(X) = E[(X − μ)^2] = E[X^2] − μ^2
Var(aX + b) = a^2 Var(X)
Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y)

Covariance: Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)]; if X and Y are independent, Cov(X, Y) = 0


Markov’s and Chebyshev’s Inequalities:
If X takes only non-negative values, then for any value a > 0:
P{X ≥ a} ≤ E[X] / a
If X is a random variable having mean μ and variance σ^2, then for any value k > 0:
P{|X − μ| ≥ kσ} ≤ 1 / k^2
Weak Law of Large Numbers: for a sequence of i.i.d. RVs having mean μ, for any ε > 0:
P{ |(X_1 + ... + X_n)/n − μ| > ε } → 0 as n → ∞
Strong Law of Large Numbers: lim_{n→∞} (X_1 + ... + X_n)/n = μ (with probability 1)
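The convergence of the sample mean can be seen numerically; this sketch uses Uniform(0, 1) draws (mean 0.5) as an illustrative choice:

```python
import random

def sample_mean(n, rv=lambda: random.uniform(0.0, 1.0)):
    """Average n i.i.d. draws; by the law of large numbers this approaches E[X]."""
    return sum(rv() for _ in range(n)) / n

random.seed(1)
# Uniform(0, 1) has mean 0.5; the deviation shrinks as n grows
m = sample_mean(100_000)
```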
Discrete Random Variables
Constant RV: PMF: P(X = x) = 1 if x = c, 0 otherwise
CDF: F(x) = 0 if x < c, 1 otherwise
Mean = c   Var = 0
Discrete Uniform RV:
PMF: P(X = x) = 1/n, where n is the number of values
Mean = (b + a)/2   Var = [(b − a + 1)^2 − 1] / 12
Bernoulli RVs: with parameter p
PMF: 𝑃(𝑋 = 𝑘) = 𝑝𝑘 (1 − 𝑝)1−𝑘 , 𝑘 = 0,1
𝑝 = 𝑃(𝑋 = 1)
𝑞 = 1 − 𝑝 = 𝑃(𝑋 = 0)
Mean: μ = p   Variance: σ^2 = pq
Binomial RVs: n Bernoulli trials with parameters 0 < p < 1 and n = 1, 2, ...; denoted b(x successes; n trials; p)
PMF: P(x) = C(n, x) p^x (1 − p)^{n−x}, where C(n, x) = nCx
Mean: μ = np   Variance: σ^2 = npq

Geometric RVs:
Z is the number of trials up to and including the 1st success
PMF: p_Z(i) = q^{i−1} p = p(1 − p)^{i−1}, i = 1, 2, 3, ...
CDF: F_Z(t) = ∑_{i=1}^{⌊t⌋} p(1 − p)^{i−1} = 1 − (1 − p)^{⌊t⌋}, t ≥ 0
Mean: μ = E(Z) = 1/p   Variance: σ^2 = Var(Z) = (1 − p)/p^2
Conditional PMF (memorylessness): if n trials are completed with all failures and Y counts the additional trials performed before success, then
q_i = P(Y = i | Z > n) = P(Z = n + i and Z > n) / P(Z > n) = p q^{i−1} = p_Z(i)
Modified Geometric RVs: X is the number of trials before (excluding) the 1st success (Z = X + 1)
PMF: p_X(i) = q^i p = p(1 − p)^i, i = 0, 1, 2, 3, ...
CDF: F_X(t) = ∑_{i=0}^{⌊t⌋} p(1 − p)^i = 1 − (1 − p)^{⌊t⌋+1}, t ≥ 0
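Both geometric variants can be sampled directly as a run of Bernoulli trials; a minimal sketch, with p = 0.25 as an illustrative parameter:

```python
import random

def geometric_trials(p):
    """Return Z, the number of trials up to and including the first success.
    The modified geometric X excludes the success: X = Z - 1."""
    z = 1
    while random.random() >= p:  # failure with probability 1 - p
        z += 1
    return z

random.seed(2)
draws = [geometric_trials(0.25) for _ in range(200_000)]
mean_z = sum(draws) / len(draws)  # should approach 1/p = 4
```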
Negative Binomial RVs: X is the number of trials until r successes occur, with parameters 0 < p < 1 and r = 1, 2, 3, ...
PMF: f(x) = C(x − 1, r − 1) (1 − p)^{x−r} p^r, x = r, r + 1, r + 2, ...
Mean: μ = E(X) = r/p   Variance: σ^2 = V(X) = r(1 − p)/p^2
Hyper-Geometric RVs:
X is the number of successes in a sample of n objects selected randomly from N objects, of which K are classified as successes
PMF: f(x) = C(K, x) C(N − K, n − x) / C(N, n), x = max{0, n + K − N} to min{K, n}, where C(n, x) = nCx
Mean: μ = E(X) = np   Variance: σ^2 = V(X) = np(1 − p) (N − n)/(N − 1), where p = K/N
Poisson RVs: k is the number of arrivals in interval t; λ is the number of occurrences per unit (time)
PMF: f(k; λt) = e^{−μ} μ^k / k!, k = 0, 1, 2, ...
μ = σ^2 = λt (average event rate, i.e. the mean)
Probability Generating Function (PGF): helps in dealing with operations (e.g. sums) on RVs
G_X(z) = ∑_{k=0}^{∞} p_k z^k = p_0 + p_1 z + ... + p_k z^k + ...
Moment Generating Function:
Discrete: M_X(t) = E(e^{tX}) = ∑_x e^{tx} f(x)
Continuous: M_X(t) = E(e^{tX}) = ∫_{−∞}^{∞} e^{tx} f(x) dx
Properties: 1) M_{X+a}(t) = e^{at} M_X(t)   2) M_{aX}(t) = M_X(at)
3) if X_1, ..., X_n are independent and Y = X_1 + X_2 + ... + X_n, then M_Y(t) = M_{X_1}(t) · M_{X_2}(t) · ... · M_{X_n}(t)
Continuous Random Variables
Continuous Uniform RV: PDF: f(x) = 1 / (b − a), a ≤ x ≤ b
Mean = (b + a)/2   Var = (b − a)^2 / 12
CDF: F(x) = (x − a) / (b − a) on [a, b]

Normal Distribution RV: PDF: f(x) = (1 / (√(2π) σ)) e^{−(x−μ)^2 / (2σ^2)}, −∞ < x < ∞   Mean = μ   Var = σ^2
Z = (X − μ)/σ; Z is a standard normal random variable with E(Z) = 0 and V(Z) = 1
P(X ≤ x) = P((X − μ)/σ ≤ (x − μ)/σ) = P(Z ≤ z)
Central limit theorem: with X̄ the sample mean, the limiting form of the distribution of Z = (X̄ − μ) / (σ/√n) as n → ∞ is the
standard normal distribution
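The CLT can be checked numerically: standardized means of even a modest number of non-normal draws behave like standard normals. The sample size 30 and Uniform(0, 1) summands (μ = 0.5, σ^2 = 1/12) are illustrative:

```python
import random

def standardized_mean(n):
    """Z = (Xbar - mu) / (sigma / sqrt(n)) for X_i ~ Uniform(0, 1)."""
    xbar = sum(random.random() for _ in range(n)) / n
    sigma = (1.0 / 12.0) ** 0.5
    return (xbar - 0.5) / (sigma / n ** 0.5)

random.seed(6)
zs = [standardized_mean(30) for _ in range(20_000)]
zmean = sum(zs) / len(zs)                              # near 0
zvar = sum(z * z for z in zs) / len(zs) - zmean ** 2   # near 1
```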
Exponential Distribution RV:
PDF: f(x) = λ e^{−λx}, 0 ≤ x < ∞   Mean = 1/λ   Var = 1/λ^2
CDF: F(x) = 1 − e^{−λx}
Memoryless property: P(X < t_1 + t_2 | X > t_1) = P(X < t_2)
Erlang Distribution RV: X is the time interval until r counts occur in a Poisson process (a generalization of the exponential)
PDF: f(x) = λ^r x^{r−1} e^{−λx} / (r − 1)!, x > 0, r = 1, 2, ...
Mean = r/λ   Var = r/λ^2
Gamma Distribution RV: a generalization of the Erlang; r can be a fraction
PDF: f(x) = λ^r x^{r−1} e^{−λx} / Γ(r), x > 0, where Γ(r) = ∫_0^∞ x^{r−1} e^{−x} dx for r > 0
Γ(1/2) = √π and Γ(r + 1) = r Γ(r)   Mean = r/λ   Var = r/λ^2
Random numbers generation

Pseudorandom number generation from 0 to m − 1 (linear congruential generator): X_n = (a X_{n−1} + c) mod m, n ≥ 1
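The recurrence can be sketched as a generator; the multiplier and increment below are the commonly cited Numerical Recipes constants, chosen here only for illustration:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: X_n = (a * X_{n-1} + c) mod m.
    Yields values scaled to [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=42)
u = [next(gen) for _ in range(5)]  # five pseudorandom uniforms
```

The quality of the stream depends entirely on the choice of a, c, and m; poorly chosen constants produce short periods or visible lattice structure.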


The polar method for Normal RVs:
X = R cos(Θ) = √(−2 log U_1) · cos(2π U_2)
Y = R sin(Θ) = √(−2 log U_1) · sin(2π U_2)
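The X, Y formulas above (the basic Box-Muller form of the method) turn two independent uniforms into two independent standard normals; a minimal sketch:

```python
import math
import random

def normal_pair():
    """Two independent N(0, 1) draws from two Uniform(0, 1) draws,
    using X = sqrt(-2 log U1) cos(2*pi*U2), Y = sqrt(-2 log U1) sin(2*pi*U2)."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

random.seed(3)
xs = [normal_pair()[0] for _ in range(100_000)]
mean = sum(xs) / len(xs)                             # near 0
var = sum(x * x for x in xs) / len(xs) - mean ** 2   # near 1
```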
Inverse transform method: X = F^{−1}(U)
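For the exponential distribution the inverse CDF has a closed form, F^{−1}(u) = −ln(1 − u)/λ, which gives a one-line sampler; the rate λ = 2 used below is illustrative:

```python
import math
import random

def exponential_inverse(lam):
    """Inverse transform for the exponential CDF F(x) = 1 - e^{-lam*x}:
    X = F^{-1}(U) = -ln(1 - U) / lam."""
    return -math.log(1.0 - random.random()) / lam

random.seed(5)
xs = [exponential_inverse(2.0) for _ in range(200_000)]
mean = sum(xs) / len(xs)  # should approach 1/lam = 0.5
```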
Rejection method: c = max(p_j / q_j) for a PMF; c = max(f(x) / g(x)) for a PDF
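A sketch of the continuous rejection method; the target density f(x) = 2x on [0, 1], the Uniform(0, 1) proposal g, and c = max f/g = 2 are illustrative choices:

```python
import random

def rejection_sample(f, g_sample, g_pdf, c):
    """Draw Y ~ g, accept with probability f(Y) / (c * g(Y)); repeat on reject."""
    while True:
        y = g_sample()
        if random.random() <= f(y) / (c * g_pdf(y)):
            return y

# Target f(x) = 2x on [0, 1]; proposal g = Uniform(0, 1); c = max f/g = 2
random.seed(4)
draws = [rejection_sample(lambda x: 2.0 * x, random.random, lambda x: 1.0, 2.0)
         for _ in range(100_000)]
mean = sum(draws) / len(draws)  # E[X] = 2/3 for this target
```

On average c proposals are needed per accepted sample, so a proposal g close to f keeps c, and the rejection rate, small.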

Composition method: P_j = α P_j^(1) + (1 − α) P_j^(2)


Goodness of fit:
Significance level or α-error: α = P(type I error) = P(reject H_0 when H_0 is true)
β-error: β = P(type II error) = P(fail to reject H_0 when H_0 is false)
Test statistic: T = ∑_{i=1}^{k} (N_i − n p_i)^2 / (n p_i)
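The statistic T is straightforward to compute; the die-roll counts below are hypothetical data used only to exercise the formula against a fair-die hypothesis:

```python
def chi_square_stat(observed, probs):
    """T = sum over k categories of (N_i - n*p_i)^2 / (n*p_i),
    where n is the total count and p_i the hypothesized probabilities."""
    n = sum(observed)
    return sum((ni - n * pi) ** 2 / (n * pi)
               for ni, pi in zip(observed, probs))

# Hypothetical counts from 60 die rolls vs. fair-die probabilities 1/6 each
t = chi_square_stat([9, 11, 10, 8, 12, 10], [1/6] * 6)
```

T is compared against a chi-square quantile with k − 1 degrees of freedom (minus one per parameter estimated from the data); large T leads to rejecting H_0.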
Wait cost = (Entity.HoldCostRate + ResCostRateEnt) × Wait Time
where ResCostRateEnt is the sum of the busy cost rates for all resources held, and Entity.HoldCostRate is specified in the Entity module and is user-assignable.

Steady-state Formulas for M/M/c

Steady-state Formulas for M/G/∞
