
Exam 1/P Formula Sheets

WWW.PROBABILITYEXAM.COM

SETS

De Morgan's Laws
(A ∪ B)^c = A^c ∩ B^c        (A ∩ B)^c = A^c ∪ B^c

Inclusion-Exclusion Principle
|A ∪ B| = |A| + |B| − |A ∩ B|
|A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |B ∩ C| − |A ∩ C| + |A ∩ B ∩ C|

DERIVATIVES

Function           Derivative
c                  0
x^r                r·x^(r−1)
c·f(x)             c·f'(x)
f(x) + g(x)        f'(x) + g'(x)
f(x)·g(x)          f'(x)·g(x) + f(x)·g'(x)
f(g(x))            f'(g(x))·g'(x)
e^x                e^x
ln x               1/x
a^x                a^x·ln a

INTEGRALS

Properties of Integrals
∫_a^b c dx = c·(b − a)
∫_a^b [f(x) + g(x)] dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx
∫_a^b c·f(x) dx = c·∫_a^b f(x) dx
∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx

Applications of the FTC
∫_a^b x^r dx = [x^(r+1)/(r + 1)]_a^b   (r ≠ −1)
∫_a^b (1/x) dx = [ln|x|]_a^b
∫_a^b e^x dx = [e^x]_a^b
∫_a^b c^x dx = [c^x/ln c]_a^b

Substitution
∫_a^b f(g(x))·g'(x) dx = ∫_g(a)^g(b) f(u) du

Integration by Parts
∫_a^b u dv = [u·v]_a^b − ∫_a^b v du

Other Useful Identities
∫_a^b x·e^(cx) dx = [x·e^(cx)/c − e^(cx)/c²]_a^b   (c ≠ 0)
∫_0^∞ x^n·e^(−cx) dx = n!/c^(n+1)   (n ∈ ℕ, c > 0)

COUNTING TECHNIQUES

Multiplication Rule
A compound experiment consisting of 2 sub-experiments with a and b possible outcomes resp. has a·b possible outcomes.

Permutations
An ordered arrangement of k elements from an n-element set.
Count: nPk = n(n − 1)···(n − (k − 1)) = n!/(n − k)!

Combinations
A k-element subset of an n-element set.
Count: nCk = C(n, k) = n!/(k!(n − k)!)

Properties of Binomial Coefficients
Σ_(k=0)^n C(n, k) = 2^n
C(n, k) = C(n, n − k)
n·C(n − 1, k − 1) = k·C(n, k)
C(n, k) = C(n − 1, k) + C(n − 1, k − 1)
C(m + n, r) = Σ_(k=0)^r C(m, k)·C(n, r − k)

Counting Rules
# of ways to select k elements from n total elements:

                       order matters    order doesn't matter
with replacement       n^k              C(n + k − 1, k)
without replacement    n!/(n − k)!      C(n, k)

PROBABILITY AXIOMS

Probability Function Definition
1. P(S) = 1
2. P(A) ≥ 0 for all A
3. For mutually disjoint events A_1, A_2, …: P(∪_(i=1)^∞ A_i) = Σ_(i=1)^∞ P(A_i)

Axiom Consequences
P(∅) = 0
P(A^c) = 1 − P(A)
A ⊆ B ⟹ P(B ∩ A^c) = P(B) − P(A)
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(A ∩ C) + P(A ∩ B ∩ C)

If S is finite with equally likely outcomes then P(A) = |A|/|S|.
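As a quick sanity check (not part of the original sheet), the binomial-coefficient identities above can be verified with Python's standard math module; the values n = 10, m = 6, k = 4, r = 5 are arbitrary choices:

```python
from math import comb, perm, factorial

n, k = 10, 4  # arbitrary test values

# Permutations: nPk = n!/(n-k)!
assert perm(n, k) == factorial(n) // factorial(n - k)

# Combinations: nCk = n!/(k!(n-k)!)
assert comb(n, k) == factorial(n) // (factorial(k) * factorial(n - k))

# Row sum of Pascal's triangle: sum_k C(n,k) = 2^n
assert sum(comb(n, j) for j in range(n + 1)) == 2**n

# Symmetry, absorption, and Pascal's rule
assert comb(n, k) == comb(n, n - k)
assert n * comb(n - 1, k - 1) == k * comb(n, k)
assert comb(n, k) == comb(n - 1, k) + comb(n - 1, k - 1)

# Vandermonde: C(m+n, r) = sum_k C(m,k) C(n,r-k)
m, r = 6, 5
assert comb(m + n, r) == sum(comb(m, j) * comb(n, r - j) for j in range(r + 1))

print("all identities hold")
```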
CONDITIONAL PROBABILITY

Definition
P(A | B) = P(A ∩ B)/P(B)   provided P(B) > 0

Conditional Probabilities are Probabilities
P(A^c | B) = 1 − P(A | B)
P(A ∪ B | C) = P(A | C) + P(B | C) − P(A ∩ B | C)

Multiplication Rule (P(A) > 0, P(B) > 0)
P(A ∩ B) = P(A | B)·P(B) = P(B | A)·P(A)

Bayes' Theorem
P(A | B) = P(B | A)·P(A)/P(B)   provided P(A) > 0, P(B) > 0

Law of Total Probability
If A_1, A_2, …, A_n partition S with P(A_i) > 0, then
P(B) = P(B | A_1)·P(A_1) + ··· + P(B | A_n)·P(A_n)

Independent Events
Definition: P(A ∩ B) = P(A)·P(B)
(A, B) indep. pair ⟹ (A, B^c), (A^c, B), (A^c, B^c) also indep. pairs

RANDOM VARIABLES

A random variable, X, is a function from the sample space S to ℝ.

Cumulative Distribution Function
F(x) = P(X ≤ x)
A valid CDF is nondecreasing, right-continuous, and
lim_(x→−∞) F(x) = 0,   lim_(x→∞) F(x) = 1

Discrete Random Variable
X has finite or countably infinite (listable) support.
Probability Mass Function: p(x) = P(X = x)
A valid PMF has p(x) ≥ 0 and Σ_x p(x) = 1
Jumps in the CDF are the probabilities (values of the PMF).

Continuous Random Variable
X has continuous CDF differentiable except at finitely many points.
Probability Density Function: f(x) = F'(x)
A valid PDF has f(x) ≥ 0 and ∫_(−∞)^∞ f(x) dx = 1
P(a ≤ X ≤ b) = ∫_a^b f(x) dx = F(b) − F(a)

Mixed Type Random Variable
X's CDF is a weighted average of a continuous and a discrete CDF:
F(x) = α·F_C(x) + (1 − α)·F_D(x),   0 < α < 1

SUMMARY STATISTICS

Expected Value
E[X] = Σ_x x·p(x)   (discrete)      E[X] = ∫_(−∞)^∞ x·f(x) dx   (continuous)

Law of the Unconscious Statistician (LOTUS)
E[g(X)] = Σ_x g(x)·p(x)      E[g(X)] = ∫_(−∞)^∞ g(x)·f(x) dx

Expected Value Linearity
E[aX + b] = a·E[X] + b      E[X + Y] = E[X] + E[Y]

Survival Shortcut
If X is nonnegative integer-valued, then E[X] = Σ_(k=0)^∞ P(X > k)
If X is nonnegative continuous, then E[X] = ∫_0^∞ [1 − F(x)] dx

Variance
Var(X) = E[(X − µ)²] = E[X²] − E[X]²

Variance Properties
Var(X) ≥ 0      Var(X) = 0 ⟺ P(X = µ) = 1
Var(X + c) = Var(X)      Var(cX) = c²·Var(X)
X, Y independent ⟹ Var(X + Y) = Var(X) + Var(Y)

Standard Deviation           Coefficient of Variation
σ_X = √Var(X)                CV(X) = σ_X/µ_X

Skew
Skew(X) = E[((X − µ)/σ)³] = (E[X³] − 3µσ² − µ³)/σ³

Jensen's Inequality
g''(x) ≥ 0 ⟹ E[g(X)] ≥ g(E[X])
g''(x) ≤ 0 ⟹ E[g(X)] ≤ g(E[X])
P(g(X) = a + bX) = 1 ⟺ E[g(X)] = g(E[X])

Mode
A mode is an x value which maximizes the PMF/PDF.
It is possible to have 0, 1, 2, … or infinitely many modes.
Discrete: any values with the largest probability
Continuous: check end points of interval and where f'(x) = 0

Percentile
c is a (100p)th percentile of X if P(X ≤ c) ≥ p and P(X ≥ c) ≥ 1 − p
A 50th percentile is called a median
Discrete: look for smallest c with F(c) ≥ p
Continuous: solve for c in F(c) = p
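A small numeric illustration of Bayes' theorem together with the law of total probability (not from the sheet; the 1% prevalence and the test accuracies below are made-up values chosen only to exercise the formulas):

```python
# Hypothetical disease-testing example: all three probabilities are
# assumed values, not data from any real test.
p_d = 0.01        # P(D): prevalence
p_pos_d = 0.95    # P(+ | D): sensitivity
p_pos_nd = 0.05   # P(+ | D^c): false-positive rate

# Law of total probability: P(+) = P(+|D) P(D) + P(+|D^c) P(D^c)
p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)

# Bayes' theorem: P(D | +) = P(+|D) P(D) / P(+)
p_d_pos = p_pos_d * p_d / p_pos

print(round(p_pos, 4), round(p_d_pos, 4))
```

Even with a fairly accurate test, the posterior P(D | +) stays small here because the prior P(D) is small; this is exactly the base-rate effect that Bayes' theorem quantifies.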
COMMON DISCRETE DISTRIBUTIONS

DUniform({a, …, b}) — equally likely values a, …, b
  P(X = x) = 1/(b − a + 1)
  E[X] = (a + b)/2      Var(X) = ((b − a + 1)² − 1)/12
  MGF: (e^(at) − e^((b+1)t)) / ((b − a + 1)(1 − e^t))

Bernoulli(p) — 1 trial w/ success chance p
  P(X = 1) = p,  P(X = 0) = 1 − p
  E[X] = p      Var(X) = p(1 − p)      MGF: 1 − p + pe^t

Binomial(n, p) — # of successes in n indep. Bernoulli(p) trials
  P(X = x) = C(n, x)·p^x·(1 − p)^(n−x)
  E[X] = np      Var(X) = np(1 − p)      MGF: (1 − p + pe^t)^n
  np ∈ ℕ ⟹ np = mode = median

HyperGeom(N, K, n) — # w/ property chosen w/out replacement from N where K have property
  P(X = x) = C(K, x)·C(N − K, n − x) / C(N, n)
  E[X] = n·K/N      Var(X) = n·(K/N)·(1 − K/N)·(N − n)/(N − 1)      MGF: ugly
  Resembles Binomial(n, K/N) with large N relative to n

Poisson(λ) — common frequency dist.
  P(X = x) = e^(−λ)·λ^x / x!
  E[X] = λ      Var(X) = λ      MGF: e^(λ(e^t − 1))
  Approximates Binomial(n, p) when λ = np, n large, p small

Geometric(p) — # of failures before first success, success probability p
  P(X = x) = (1 − p)^x·p
  E[X] = (1 − p)/p      Var(X) = (1 − p)/p²      MGF: p/(1 − (1 − p)e^t)
  Only memoryless discrete dist.

NegBin(r, p) — # of failures before rth success, success probability p
  P(X = x) = C(x + r − 1, r − 1)·p^r·(1 − p)^x
  E[X] = r(1 − p)/p      Var(X) = r(1 − p)/p²      MGF: (p/(1 − (1 − p)e^t))^r
  Sum of r iid Geometric(p)

COMMON CONTINUOUS DISTRIBUTIONS

Uniform(a, b) — probabilities are proportional to length
  f(x) = 1/(b − a)      F(x) = (x − a)/(b − a)
  E[X] = (a + b)/2      Var(X) = (b − a)²/12      MGF: (e^(bt) − e^(at))/(t(b − a))

N(µ, σ²) — approximates a sum of n iid rv's w/ mean nµ and variance nσ²
  f(x) = (1/(σ√(2π)))·e^(−(x−µ)²/(2σ²))      F(x) = Φ((x − µ)/σ)
  E[X] = µ      Var(X) = σ²      MGF: e^(µt + σ²t²/2)

Exp(λ)
  f(x) = λe^(−λx)      F(x) = 1 − e^(−λx)
  E[X] = 1/λ      Var(X) = 1/λ²      MGF: λ/(λ − t)
  Only memoryless continuous distribution

Gamma(α, λ) — sum of α independent Exp(λ) for integer α > 0
  f(x) = λ^α·x^(α−1)·e^(−λx) / Γ(α)      F(x): ugly
  E[X] = α/λ      Var(X) = α/λ²      MGF: (λ/(λ − t))^α
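The Binomial facts in the table, and the Poisson approximation to the Binomial, can be checked numerically from the PMF formulas alone. This sketch (not from the sheet) uses the assumed values n = 100, p = 0.03, so λ = np = 3:

```python
from math import comb, exp, factorial

n, p = 100, 0.03   # assumed values: n large, p small
lam = n * p

# Binomial PMF from the table's formula
binom = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

# Mean and variance computed from the PMF match np and np(1-p)
mean = sum(x * q for x, q in enumerate(binom))
var = sum(x**2 * q for x, q in enumerate(binom)) - mean**2
assert abs(mean - n * p) < 1e-9
assert abs(var - n * p * (1 - p)) < 1e-9

# Poisson(np) approximates Binomial(n, p) pointwise
poisson = [exp(-lam) * lam**x / factorial(x) for x in range(n + 1)]
max_err = max(abs(b - q) for b, q in zip(binom, poisson))
print(f"largest pointwise PMF difference: {max_err:.4f}")
```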
DISCRETE DISTRIBUTIONS CONT.

Bernoulli
A Bernoulli r.v. is also an indicator of the occurrence of A ⊆ S.
X ~ Bernoulli(p) ⟹ Y = (a − b)X + b takes values a and b
with probabilities p and 1 − p, respectively.
E[Y] = p·a + (1 − p)·b
Var(Y) = (a − b)²·p·(1 − p)

Binomial
If X ~ Binomial(n, p), Y ~ Binomial(m, p) then
n − X ~ Binomial(n, 1 − p)
X, Y independent ⟹ X + Y ~ Binomial(n + m, p)
Z | X ~ Binomial(X, q) ⟹ Z ~ Binomial(n, pq)

Poisson
If X ~ Poisson(λ), Y ~ Poisson(µ) then
P(X = x + 1) = P(X = x)·λ/(x + 1)
λ ∈ ℕ ⟹ λ, λ − 1 are modes
X, Y independent ⟹ X + Y ~ Poisson(λ + µ)
Z | X ~ Binomial(X, p) ⟹ Z ~ Poisson(λ·p)

Geometric
If X ~ Geometric(p) then X + 1 counts trials until 1st success
E[X + 1] = 1/p      Var(X + 1) = (1 − p)/p²
P(X ≥ n + m | X ≥ m) = P(X ≥ n)   [memoryless]

CONTINUOUS DISTRIBUTIONS CONT.

Uniform
U_s ~ Uniform(0, 1) ⟹ U = (b − a)·U_s + a ~ Uniform(a, b)
(c, d) ⊆ (a, b) ⟹ P(c ≤ U ≤ d) = (d − c)/(b − a)
U | U ∈ (c, d) ~ Uniform(c, d)

Normal
Z ~ N(0, 1) ⟹ X = σZ + µ ~ N(µ, σ²)
Φ(−z) = 1 − Φ(z)
X ~ N(µ_X, σ_X²), Y ~ N(µ_Y, σ_Y²) independent, then
X + Y ~ N(µ_X + µ_Y, σ_X² + σ_Y²)

Central Limit Theorem
X_1, …, X_n iid each with mean µ and variance σ², then
X_1 + ··· + X_n ≈ N(nµ, nσ²)

Approximating consecutive integer-valued X with CLT (continuity correction):
P(X ≤ x) ≈ Φ((x + ½ − µ)/σ),   P(X < x) ≈ Φ((x − ½ − µ)/σ)

Exponential
X ~ Exp(λ) ⟹ P(a ≤ X ≤ b) = e^(−λa) − e^(−λb)
The continuous analog of the Geometric (rounding & limit)
P(X ≥ s + t | X ≥ s) = P(X ≥ t)   [memoryless]
Time between events in a Poisson process with rate λ
X_i ~ Exp(λ_i) indep. ⟹ min{X_1, …, X_n} ~ Exp(λ_1 + ··· + λ_n)

Gamma
With integer α, X ~ Gamma(α, λ) is
  a sum of α independent Exp(λ)
  the time until the αth event in a Poisson process with rate λ
The continuous analog of the Negative Binomial

MOMENT GENERATING FUNCTIONS

Definition
M_X(t) = E[e^(tX)]

Properties
Uniquely determines a distribution
E[X^n] = M_X^((n))(0)
M_(aX+b)(t) = e^(bt)·M_X(at)
X, Y independent ⟹ M_(X+Y)(t) = M_X(t)·M_Y(t)

Variance Shortcut (Cumulant)
Var(X) = d²/dt² ln(M(t)) |_(t=0)

PROBABILITY GENERATING FUNCTIONS
Defined for nonnegative integer-valued random variables

Definition
G_X(t) = E[t^X]

Properties
Uniquely determines a distribution
P(X = k) = G_X^((k))(0)/k!
G_(aX+b)(t) = t^b·G_X(t^a)
X, Y independent ⟹ G_(X+Y)(t) = G_X(t)·G_Y(t)

Moments
G_X^((n))(1) = E[X!/(X − n)!] = E[X(X − 1)···(X − n + 1)]
E[X] = G_X'(1)
Var(X) = G_X''(1) − (G_X'(1))² + G_X'(1)
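The PGF moment formulas above can be checked numerically. This sketch (an addition, not from the sheet) differentiates the Geometric(p) PGF G(t) = p/(1 − (1 − p)t) at t = 1 by finite differences and compares against the tabulated mean (1 − p)/p and variance (1 − p)/p²; p = 0.3 is an assumed value:

```python
# Numeric check of E[X] = G'(1) and Var(X) = G''(1) - G'(1)^2 + G'(1)
# for X ~ Geometric(p), counting failures before the first success.
p = 0.3  # assumed success probability

def G(t):
    # PGF of Geometric(p): G(t) = p / (1 - (1-p) t)
    return p / (1 - (1 - p) * t)

h = 1e-5
G1 = (G(1 + h) - G(1 - h)) / (2 * h)           # central difference: G'(1)
G2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h**2   # central difference: G''(1)

mean = G1
var = G2 - G1**2 + G1

assert abs(mean - (1 - p) / p) < 1e-4
assert abs(var - (1 - p) / p**2) < 1e-3
print(mean, var)
```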


JOINT DISTRIBUTIONS

Cumulative Distribution Function (CDF)
F(x, y) = P(X ≤ x, Y ≤ y)

Probability Mass Function (PMF)      Probability Density Function (PDF)
p(x, y) = P(X = x, Y = y)            f(x, y) = ∂²F(x, y)/∂x∂y

Marginal PMF                         Marginal PDF
p_X(x) = Σ_y p(x, y)                 f_X(x) = ∫_(−∞)^∞ f(x, y) dy

Conditional PMF                      Conditional PDF
p_(X|Y)(x | Y = y) = p(x, y)/p_Y(y)  f_(X|Y)(x | Y = y) = f(x, y)/f_Y(y)

Independence Criteria (X and Y are independent if any hold)
F(x, y) = F_X(x)·F_Y(y) for all x, y
p(x, y) = p_X(x)·p_Y(y) / f(x, y) = f_X(x)·f_Y(y) for all x, y
p_(X|Y)(x | Y = y) = p_X(x) where p_Y(y) > 0 / f_(X|Y)(x | Y = y) = f_X(x) where f_Y(y) > 0
p_(Y|X)(y | X = x) = p_Y(y) where p_X(x) > 0 / f_(Y|X)(y | X = x) = f_Y(y) where f_X(x) > 0
p(x, y) = g(x)·h(y) / f(x, y) = g(x)·h(y) for any g, h ≥ 0

2D LOTUS
E[h(X, Y)] = Σ_x Σ_y h(x, y)·p(x, y)      E[h(X, Y)] = ∫_(−∞)^∞ ∫_(−∞)^∞ h(x, y)·f(x, y) dx dy

Conditional Expectation
E[X | Y = y] = Σ_x x·p_(X|Y)(x | Y = y)      E[X | Y = y] = ∫_(−∞)^∞ x·f_(X|Y)(x | Y = y) dx

Law of Total Expectation/Variance
E[X] = E[E[X | Y]]      Var(X) = E[Var(X | Y)] + Var(E[X | Y])

Joint Moment Generating Function
M_(X,Y)(s, t) = E[e^(sX+tY)]
E[X^n Y^m] = ∂^(n+m)/(∂s^n ∂t^m) M_(X,Y)(s, t) |_(s=t=0)
M_X(s) = M_(X,Y)(s, 0)

Covariance
Definition: Cov(X, Y) = E[(X − µ_X)·(Y − µ_Y)] = E[XY] − E[X]·E[Y]
Cov(X, X) = Var(X)      Cov(X, Y) = Cov(Y, X)      Cov(X + c, Y) = Cov(X, Y)
Cov(X, c) = 0      Cov(cX, Y) = c·Cov(X, Y)      Cov(X + Y, Z) = Cov(X, Z) + Cov(Y, Z)
Var(aX + bY) = a²·Var(X) + b²·Var(Y) + 2ab·Cov(X, Y)

Coefficient of Correlation
ρ_(X,Y) = Cov((X − µ_X)/σ_X, (Y − µ_Y)/σ_Y) = Cov(X, Y)/(σ_X·σ_Y)      −1 ≤ ρ_(X,Y) ≤ 1

Consequences of Independence
E[g(X)·h(Y)] = E[g(X)]·E[h(Y)]      Cov(X, Y) = 0
M_(X,Y)(s, t) = M_X(s)·M_Y(t)       ρ_(X,Y) = 0

Bivariate Continuous Uniform
f(x, y) = 1/(Area of support)      Probabilities are proportional to areas

Multinomial
(X_1, …, X_k) ~ Multinomial(n, p_1, …, p_k) if n indep. trials performed, each with k possible outcomes
(with respective probabilities p_1, …, p_k) and X_i is the number of trials resulting in outcome i.
P(X_1 = n_1, …, X_k = n_k) = n!/(n_1!n_2!···n_k!) · p_1^(n_1)·p_2^(n_2)···p_k^(n_k)
X_i ~ Binomial(n, p_i)      i ≠ j ⟹ Cov(X_i, X_j) = −n·p_i·p_j

Bivariate Normal
(X, Y) ~ BivNormal(µ_X, µ_Y, σ_X², σ_Y², ρ_(X,Y)) if aX + bY is normal for all a, b ∈ ℝ.
Y | X = x ~ N(µ_Y + ρ·(σ_Y/σ_X)·(x − µ_X), σ_Y²·(1 − ρ²))
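The multinomial covariance identity Cov(X_i, X_j) = −n·p_i·p_j can be verified exactly by brute-force enumeration of a small case. This sketch (an addition, not from the sheet) uses the assumed values n = 4 and probabilities (0.2, 0.3, 0.5):

```python
from itertools import product
from math import factorial, prod

# Exact check of Cov(X1, X2) = -n p1 p2 for a small multinomial.
n = 4
p = (0.2, 0.3, 0.5)  # assumed outcome probabilities

def pmf(counts):
    # Multinomial PMF: n!/(n1!...nk!) * p1^n1 ... pk^nk
    coef = factorial(n) // prod(factorial(c) for c in counts)
    return coef * prod(pi**c for pi, c in zip(p, counts))

# Enumerate every count vector summing to n and accumulate moments
e1 = e2 = e12 = 0.0
for counts in product(range(n + 1), repeat=3):
    if sum(counts) != n:
        continue
    q = pmf(counts)
    e1 += counts[0] * q
    e2 += counts[1] * q
    e12 += counts[0] * counts[1] * q

cov = e12 - e1 * e2
assert abs(cov - (-n * p[0] * p[1])) < 1e-12
print(cov)
```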
TRANSFORMATIONS

Transformation of Discrete X
P(g(X) = y) = Σ_(x: g(x)=y) P(X = x)

Transformation of Continuous X
Find sets A_y so g(X) ≤ y ⟺ X ∈ A_y
Compute F_Y(y) = P(g(X) ≤ y) = P(X ∈ A_y)
f_Y(y) = F_Y'(y)

Strictly Monotone Transformation of Continuous X
f_Y(y) = f_X(g^(−1)(y))·|(g^(−1))'(y)|

1-1 Transformation of Continuous X, Y
If { g_1(x, y) = u, g_2(x, y) = v } has unique solution { x = h_1(u, v), y = h_2(u, v) } and
J = det [ ∂h_1/∂u  ∂h_1/∂v ; ∂h_2/∂u  ∂h_2/∂v ] = (∂h_1/∂u)(∂h_2/∂v) − (∂h_1/∂v)(∂h_2/∂u) ≠ 0
then the joint density of U = g_1(X, Y) and V = g_2(X, Y) is
g(u, v) = f(h_1(u, v), h_2(u, v))·|J|

Convolution Theorem (X and Y independent)
p_(X+Y)(t) = Σ_x p_X(x)·p_Y(t − x)      f_(X+Y)(t) = ∫_(−∞)^∞ f_X(x)·f_Y(t − x) dx

Order Statistics
X_(i) is the ith smallest of iid continuous X_1, …, X_n w/ dist. F, f
F_(X_(j))(x) = Σ_(k=j)^n C(n, k)·F(x)^k·(1 − F(x))^(n−k)
f_(X_(j))(x) = n·C(n − 1, j − 1)·f(x)·F(x)^(j−1)·(1 − F(x))^(n−j)
F_(X_(1))(x) = 1 − (1 − F(x))^n      F_(X_(n))(x) = F(x)^n

Mixture Distribution
X has PMF/PDF f(x) = α_1·f_1(x) + ··· + α_n·f_n(x)
(α_i positive and sum to 1, f_i are valid PMF/PDFs)
F_X(x) = α_1·F_1(x) + ··· + α_n·F_n(x)
S_X(x) = α_1·S_1(x) + ··· + α_n·S_n(x)
E[X^k] = α_1·E[X_1^k] + ··· + α_n·E[X_n^k]
M_X(t) = α_1·M_(X_1)(t) + ··· + α_n·M_(X_n)(t)

RISK AND INSURANCE

Ordinary Deductible d
Payment: (X − d)_+ = { 0 if X ≤ d;  X − d if X > d }
Mean: ∫_d^∞ (x − d)·f_X(x) dx = ∫_d^∞ S_X(x) dx
Payment w/ inflation r: (1 + r)·(X − d/(1 + r))_+

Policy Limit u
Payment: X ∧ u = { X if X ≤ u;  u if X > u }
Mean: ∫_0^u x·f_X(x) dx + u·S_X(u) = ∫_0^u S_X(x) dx
Payment w/ inflation r: (1 + r)·(X ∧ u/(1 + r))

Ordinary Deductible d and Policy Limit u Simultaneously
Payment: X_d^u = { 0 if X ≤ d;  X − d if d < X ≤ u + d;  u if X > u + d }
Mean: ∫_d^(u+d) (x − d)·f_X(x) dx + u·S_X(u + d) = ∫_d^(u+d) S_X(x) dx
Payment w/ inflation r: (1 + r)·X_(d/(1+r))^(u/(1+r))

Loss Given Positive Loss
If X_C = X | X > 0 and α = P(X > 0), then
E[X] = α·E[X_C]      E[X²] = α·E[X_C²]
Var(X) = α·Var(X_C) + α·(1 − α)·E[X_C]²
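The survival-function shortcut for the deductible mean, E[(X − d)_+] = ∫_d^∞ S_X(x) dx, can be checked numerically for an exponential loss, where both sides equal e^(−λd)/λ. This sketch is an addition, not from the sheet; λ = 0.5 and d = 2 are assumed values, and the improper integral is truncated at a point where the tail is negligible:

```python
from math import exp

lam, d = 0.5, 2.0  # assumed rate and deductible

def S(x):
    # Survival function of Exp(lam): S(x) = e^(-lam x)
    return exp(-lam * x)

# Trapezoid-rule integral of S from d to a large cutoff
a, b, steps = d, d + 60.0, 200_000
h = (b - a) / steps
integral = 0.5 * (S(a) + S(b)) * h + sum(S(a + i * h) for i in range(1, steps)) * h

# Closed form for Exp(lam): E[(X - d)_+] = e^(-lam d)/lam (memorylessness)
closed_form = exp(-lam * d) / lam
assert abs(integral - closed_form) < 1e-4
print(integral, closed_form)
```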
