
# Ardon Pillay - 2016

Probability Notes

• Recall that P(X ≥ y) = 1 − P(X ≤ y − 1) for integer-valued X.
• If you see E(Y^2) anywhere as a given value, remember
  Var(Y) = E(Y^2) − [E(Y)]^2
• If the word "differs" is used, take the modulus of the variable inside the probability brackets, e.g. P(|X − μ| > a).
• Remember that if X ~ Geo(p) and Y = X_1 + X_2 + ⋯ + X_r, where Y consists of r independent observations of X, then Y ~ NB(r, p).
• For questions asking for Var(X + Y) or the expectation equivalent, find the variance or expectation of each part separately. For Y = X_1 + X_2 + ⋯ + X_n, find the variance of X alone and then use Var(Y) = n Var(X), because Y is made of n independent observations of X.
• If it's a variable with only a few (1–5) possible values, draw a table. E.g. for
  G_X(t) = (1/2)t^(−2) + (1/2)t:

  x         −2    1
  P(X = x)  1/2   1/2

  From the table we can then find E(X), and Var(X) = E(X^2) − [E(X)]^2.
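The table method above can be checked with a short script. A minimal sketch, using the example PGF from the table (X = −2 or 1, each with probability 1/2):

```python
from fractions import Fraction

# A minimal sketch of the table method, using the example PGF above:
# G_X(t) = (1/2)t^(-2) + (1/2)t, i.e. X = -2 or 1, each with probability 1/2.
pmf = {-2: Fraction(1, 2), 1: Fraction(1, 2)}

E_X = sum(x * p for x, p in pmf.items())       # E(X)
E_X2 = sum(x**2 * p for x, p in pmf.items())   # E(X^2)
Var_X = E_X2 - E_X**2                          # Var(X) = E(X^2) - [E(X)]^2
print(E_X, Var_X)  # -1/2 9/4
```

Using `Fraction` keeps the arithmetic exact, which matches how the table is worked by hand.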

• E[X(X − 1)(X − 2)⋯(X − k)] = G^(k+1)(1), because
  G(t) = E(t^X) = Σ t^x P(X = x)
  G′(t) = Σ x t^(x−1) P(X = x)
  G″(t) = Σ x(x − 1) t^(x−2) P(X = x)
  ⋮
  G^(k+1)(t) = Σ x(x − 1)(x − 2)⋯(x − k) t^(x−k−1) P(X = x)

  so, setting t = 1:
  G^(k+1)(1) = Σ x(x − 1)(x − 2)⋯(x − k) P(X = x) = E[X(X − 1)⋯(X − k)]
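The factorial-moment rule can be verified symbolically. A sketch using the Poisson PGF G(t) = e^(λ(t−1)), for which the (k+1)-th factorial moment is known to be λ^(k+1):

```python
import sympy as sp

# Sketch verifying E[X(X-1)...(X-k)] = G^(k+1)(1), using the Poisson PGF
# G(t) = e^{lam(t-1)}; for Poisson this factorial moment equals lam^(k+1).
t, lam = sp.symbols('t lam', positive=True)
G = sp.exp(lam * (t - 1))

moments = [sp.simplify(sp.diff(G, t, k + 1).subs(t, 1)) for k in range(3)]
print(moments)  # [lam, lam**2, lam**3]
```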

Distributions

1. Poisson
   a. X ~ Po(λ)
   b. E(X) = λ, Var(X) = λ
   c. P(X = x) = e^(−λ) λ^x / x!  for X ~ Po(λ), 0 ≤ x < ∞
   d. A Poisson relates a mean rate to a count: e.g. if the mean number of calls to a centre is λ, then the number of calls follows X ~ Po(λ)
   e. The PGF for a Poisson is G(t) = e^(λ(t−1))
   f. Know the proof well (it is found in the written notes); be aware of the series
      1 + x + x^2/2! + x^3/3! + ⋯ = e^x,
      except x here is just λt, so that
      G(t) = Σ t^x e^(−λ) λ^x / x! = e^(−λ) Σ (λt)^x / x! = e^(−λ) e^(λt) = e^(λ(t−1))
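The Poisson facts above can be sanity-checked numerically. A sketch with an assumed rate λ = 2.5 (any positive rate works):

```python
import math

# Numeric sanity check (a sketch, with an assumed rate lam = 2.5): the pmf
# e^(-lam) lam^x / x! sums to 1 and has mean and variance both equal to lam.
lam = 2.5

def pmf(x):
    return math.exp(-lam) * lam**x / math.factorial(x)

xs = range(100)  # terms beyond ~100 are negligible for lam = 2.5
total = sum(pmf(x) for x in xs)
mean = sum(x * pmf(x) for x in xs)
var = sum(x**2 * pmf(x) for x in xs) - mean**2
print(round(total, 6), round(mean, 6), round(var, 6))  # 1.0 2.5 2.5
```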

   g. For a combined Poisson, X + Y: the mean of X plus the mean of Y gives the mean of X + Y, so X + Y ~ Po(λ_X + λ_Y). Their means need not be identical. This can be proven by PGFs.

2. Binomial
   a. X ~ B(n, p)
   b. E(X) = np, Var(X) = np(1 − p)
   c. P(X = x) = C(n, x) · p^x · (1 − p)^(n−x)  for X ~ B(n, p), 0 ≤ x ≤ n
   d. A binomial consists of n independent Bernoulli trials, so each trial has exactly two possible outcomes.
   e. The PGF for the binomial distribution is G(t) = (q + pt)^n, where q = 1 − p. To prove it, we must first show the PGF for one Bernoulli trial: G_Y(t) = q + pt. Since X is the sum of n independent Bernoulli trials,
      G_X(t) = G_{Y_1 + Y_2 + ⋯ + Y_n}(t) = G_Y(t) · G_Y(t) · ⋯ · G_Y(t) = (q + pt)^n
   f. When we combine binomial distributions, they must have the same p for the combined probability function to have a binomial distribution.
   g. Approximating a Poisson distribution: X ~ B(n, p) with large n and small p is approximately Po(np). Proof: let μ = E(X) = np, so p = μ/n, and hence
      G(t) = (q + pt)^n = (1 − μ/n + μt/n)^n = (1 + μ(t − 1)/n)^n
      Let n be a large number:
      lim_{n→∞} (1 + μ(t − 1)/n)^n = e^(μ(t−1)),  because lim_{n→∞} (1 + x/n)^n = e^x.
      This is the PGF for a Poisson. QED.

3. Normal Distribution
   a. X ~ N(μ, σ^2). When typing into the calculator, it MUST ALWAYS BE IN STDEV FORM — always square root the variance to ensure we get the right answer.
   b. Continuous random variable.
   c. Symmetrical around μ, which is the mean, mode and median.
   d. Total area under the graph of a normal is 1.
   e. To find any probability, we use the normcdf function.
   f. If P(X < a) = P(X > b), then μ = (a + b)/2; draw the graph as the proof.
   g. Normal questions are clearly marked: the question always says "normally distributed." If it does not, and a normal-style method is required, you must write "approx by CLT."
   h. If the word "differs" appears: P(|X − μ| > 1) = 1 − P(−1 < X − μ < 1), because you use what is in the brackets when you do the probability calculations.
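The binomial-to-Poisson limit can also be seen numerically. A sketch with an assumed mean μ = 3: as n grows with p = μ/n, the B(n, p) pmf approaches the Po(μ) pmf.

```python
import math

# Numeric sketch of the limit: B(n, mu/n) pmf approaches Po(mu) pmf as n
# grows (mu = 3 is an assumed illustrative mean).
mu = 3.0

def binom_pmf(n, p, x):
    return math.comb(n, x) * p**x * (1 - p) ** (n - x)

def pois_pmf(lam, x):
    return math.exp(-lam) * lam**x / math.factorial(x)

errs = {}
for n in (10, 100, 10000):
    p = mu / n
    errs[n] = max(abs(binom_pmf(n, p, x) - pois_pmf(mu, x)) for x in range(16))
print(errs)  # the max pmf error shrinks roughly like 1/n
```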

   i. In order to use inversenorm, we must convert a more-than into a less-than: to find y with P(X ≥ y) = 0.99, use P(X < y) = 0.01. Now we can plug it into inversenorm.
   j. Provided we know μ and σ, we use the standardised normal distribution Z = (X − μ)/σ, where Z follows a normal distribution with mean 0 and a standard deviation of 1.
   k. If X and Y are independent normal distributions, then
      E(aX ± bY) = aE(X) ± bE(Y)
      Var(aX ± bY) = a^2 Var(X) + b^2 Var(Y)
   l. CLT: if we take n sample observations of the value a certain discrete distribution takes on, we can approximate the sum or mean with a normal distribution (n must be more than 50, though). We must always write "Approx by CLT."
      i.  S = X_1 + X_2 + ⋯ + X_n, E(S) = nμ, Var(S) = nσ^2
      ii. X̄ = (X_1 + X_2 + ⋯ + X_n)/n, E(X̄) = μ, Var(X̄) = σ^2/n

4. Geometric
   a. X ~ Geo(p), X = number of trials until the first success
   b. P(X = x) = p q^(x−1)  for X ~ Geo(p), 1 ≤ x < ∞, where q = 1 − p
   c. E(X) = 1/p
   d. Var(X) = q/p^2
   e. G(t) = pt/(1 − qt),  t < 1/q
   f. To find y such that P(X ≥ y) = 0.99, just plug the probability function into the GDC and set X as the unknown.
   g. To find the mode, use the table function to locate which X gives the highest probability.

5. Negative Binomial
   a. X ~ NB(r, p), X = number of trials until the r-th success
   b. P(X = x) = C(x − 1, r − 1) · p^r · (1 − p)^(x−r)  for X ~ NB(r, p), r ≤ x < ∞
   c. E(X) = r/p
   d. Var(X) = rq/p^2
   e. G(t) = (pt/(1 − qt))^r,  t < 1/q
   f. Negative binomial CDF: be careful with this one — just add up the probabilities for each individual value, because the CDF formula is really complex and is out of the course.
   g. To find the mode, use the table function to locate which X gives the highest probability.
   h. Tail trick: to find P(X > n), use a binomial Y ~ B(n, p) counting the successes within the first n trials and compute P(Y ≤ r − 1). (Equivalently, to find P(X = x), take the binomial pdf for r − 1 successes in x − 1 trials and multiply it by p.)
   Example: if we want someone to take more than 5 throws to hit his 3rd home run…
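The negative-binomial tail trick can be checked directly. A sketch using the home-run numbers (r = 3, p = 0.2, n = 5): P(X > 5) computed from the NB pmf should equal P(Y ≤ 2) for Y ~ B(5, 0.2).

```python
import math

# A sketch of the tail trick with the home-run numbers (r = 3, p = 0.2, n = 5):
# for X ~ NB(r, p), P(X > n) equals P(Y <= r - 1) with Y ~ B(n, p).
r, p, n = 3, 0.2, 5

def nb_pmf(x):  # P(X = x) = (x-1 choose r-1) p^r q^(x-r)
    return math.comb(x - 1, r - 1) * p**r * (1 - p) ** (x - r)

tail = sum(nb_pmf(x) for x in range(n + 1, 500))  # P(X > 5), truncated sum

def binom_pmf(y):  # P(Y = y) for Y ~ B(n, p)
    return math.comb(n, y) * p**y * (1 - p) ** (n - y)

cdf = sum(binom_pmf(y) for y in range(r))  # P(Y <= 2)
print(round(tail, 5), round(cdf, 5))  # both 0.94208
```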

   Worked example: X = number of throws until the 3rd home run, X ~ NB(3, 0.2). We want P(X > 5). This means he gets a maximum of 2 hits within the 5 throws available, hence Y = number of hits within 5 throws, Y ~ B(5, 0.2), and
   P(X > 5) = P(Y ≤ 2) = 0.942

Estimators
• An estimator is not unbiased if E(estimator) ≠ the parameter it estimates.
• S_n^2 = Σ X_i^2 / n − X̄^2 is a biased estimator for the population variance:
  E(S_n^2) = E(Σ X_i^2 / n) − E(X̄^2) = (1/n) E(X_1^2 + X_2^2 + ⋯ + X_n^2) − E(X̄^2) = E(X^2) − E(X̄^2)   — (1)
  RECALL: Var(X̄) = σ^2/n = E(X̄^2) − [E(X̄)]^2
  Rearranging: E(X̄^2) = σ^2/n + [E(X)]^2   — (2)
  Subbing (2) into (1):
  E(S_n^2) = E(X^2) − σ^2/n − [E(X)]^2 = Var(X) − σ^2/n
  E(S_n^2) = σ^2 − σ^2/n = σ^2 (n − 1)/n
  Therefore S_n^2 is a biased estimator of σ^2, because E(S_n^2) ≠ σ^2. Multiply S_n^2 by n/(n − 1) to convert it to an unbiased estimator.
• Note the following:
  S_n^2 = Σ X_i^2 / n − X̄^2 = Σ (X_i − X̄)^2 / n
  Σ (X_i − X̄)^2 = Σ X_i^2 − n X̄^2
• The most efficient estimator is the one with minimum variance, so take the derivative of Var(V), where V is the estimator under investigation, and prove it is a minimum using the second derivative. If possible, set Var(X) to σ^2 and factorise it out — it is easier later when you do the derivative of Var(V).
• If asked to create an estimator for a parameter k (or something like that): find E(X) in terms of k, then make k the subject. Markschemes award marks for this.
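The bias result E(S_n^2) = σ^2 (n − 1)/n can be illustrated by simulation. A sketch with assumed N(0, 4) samples (σ^2 = 4) and n = 5, so the expectation should be about 4 · 4/5 = 3.2:

```python
import random
import statistics

# Simulation sketch (assumed N(0, 4) samples, so sigma^2 = 4): the biased
# estimator S_n^2 = sum((X_i - Xbar)^2)/n has expectation sigma^2 (n-1)/n.
random.seed(1)
n, trials, sigma2 = 5, 20000, 4.0

def s_n_squared(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

avg = statistics.fmean(
    s_n_squared([random.gauss(0, 2) for _ in range(n)]) for _ in range(trials)
)
print(round(avg, 2), sigma2 * (n - 1) / n)  # avg is close to 3.2
```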

Confidence Intervals
• When we have a k% CI, we are saying that there is a k% chance that the interval contains the parameter being investigated.
• Out of 100 samples, about k of them will produce an interval containing the parameter.
• It is taken from a sample of size n.
• Derivation (95% case):
  P(−1.96 < Z < 1.96) = 0.95
  P(−1.96 < (X̄ − μ)/(σ/√n) < 1.96) = 0.95
  P(−1.96 σ/√n < X̄ − μ < 1.96 σ/√n) = 0.95
  P(−X̄ − 1.96 σ/√n < −μ < −X̄ + 1.96 σ/√n) = 0.95
  P(X̄ − 1.96 σ/√n < μ < X̄ + 1.96 σ/√n) = 0.95
  ∴ CI for 0.95 = [X̄ − 1.96 σ/√n, X̄ + 1.96 σ/√n]
• Mean (μ)
  o General CI for μ = [X̄ − Z_k σ/√n, X̄ + Z_k σ/√n] for k%.
  o Z_k is found by working backwards from the CI: P(−a < Z < a) = k/100, hence P(Z < −a) = (1 − k/100)/2. E.g. if it is 95%, P(Z < −a) = 0.025. Then apply inversenorm to find the value of a = Z_k.
  o No matter what, if we are asked to find a sample size, use Z_k, not T_k. However, if we have a T distribution to work with, use T_k.
  o Margin of error = ±Z_k σ/√n
  o Width of CI = 2 · Z_k σ/√n, or upper point − lower point.
  o If given an interval like [32.5, 33.5]: the sum of the upper and lower points divided by 2 gives the sample mean; use the width to find either σ or n.
  o The σ symbol is only used if the sample comes from a normal distribution with known variance. Otherwise (i.e. we are not explicitly given the variance), we define the standard deviation as S_{n−1}, the square root of the unbiased estimator of the population variance, and use T_k. This is only if we can obtain an unbiased estimate for the variance.
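The "work backwards for Z_k" recipe can be sketched in code. All the numbers below (sample mean 33.0, σ = 2.0, n = 50, k = 95) are assumed, illustrative values:

```python
from statistics import NormalDist

# A sketch of the recipe above with assumed numbers: sample mean 33.0,
# known sigma = 2.0, n = 50, k = 95 (these values are illustrative only).
xbar, sigma, n, k = 33.0, 2.0, 50, 95

# Work backwards for Z_k: P(Z < -a) = (1 - k/100)/2, then invert.
a = -NormalDist().inv_cdf((1 - k / 100) / 2)
half_width = a * sigma / n**0.5  # margin of error, Z_k * sigma / sqrt(n)

print(round(a, 3))  # 1.96
print(round(xbar - half_width, 3), round(xbar + half_width, 3))
```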

• Proportion (p)
  o To find the CI of a proportion, use X̄ ~ N(p, p(1 − p)/n) and then apply the following CI formula, where p̂ is the sample proportion:
    CI for k% = [p̂ − Z_k √(p̂(1 − p̂)/n), p̂ + Z_k √(p̂(1 − p̂)/n)]
  o Error for CI of k% = ±Z_k √(p̂(1 − p̂)/n)
  o When writing the distribution, define the variable, e.g. X = proportion of girls who drive with a seatbelt on.

Continuous Random Variables
1. Basics
   a. ∫_{lower limit}^{upper limit} f(x) dx = 1
   b. E(X) = ∫_{lower limit}^{upper limit} x f(x) dx
   c. Var(X) = ∫ x^2 f(x) dx − (∫ x f(x) dx)^2
   d. Median m: ∫_{lower limit}^{m} f(x) dx = 0.50
   e. k-th percentile P_k: ∫_{lower limit}^{P_k} f(x) dx = k/100
   f. Interquartile range: 75th percentile − 25th percentile
   g. Mode is the x value that gives the highest vertical-axis value (differentiate f)
   h. CDF: F(x) = P(X ≤ x) = ∫_{lower limit}^{x} f(t) dt, where t is a dummy variable

   Example: if the c.r.v. has the pdf
   f(x) = 0               for x ≤ 0
          3/5             for 0 ≤ x ≤ 1
          (3/20)(x − 3)^2 for 1 < x < 3
   find the CDF of X.

   For 0 ≤ x ≤ 1:
   F(x) = ∫_0^x (3/5) dt = (3/5)x

   For 1 < x < 3:
   F(x) = 3/5 + ∫_1^x (3/20)(t − 3)^2 dt = 3/5 + [(1/20)(t − 3)^3]_1^x = 3/5 + (x − 3)^3/20 + 8/20 = (x − 3)^3/20 + 1
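The piecewise pdf in the example can be sanity-checked numerically: the total probability should be 1 and F(2) should be (2 − 3)^3/20 + 1 = 0.95.

```python
# Numeric sanity check of the piecewise pdf in the example above:
# f(x) = 3/5 on [0, 1] and (3/20)(x - 3)^2 on (1, 3).
def f(x):
    if 0 <= x <= 1:
        return 3 / 5
    if 1 < x < 3:
        return (3 / 20) * (x - 3) ** 2
    return 0.0

def integrate(a, b, steps=100000):
    # simple midpoint rule; accurate enough for a sanity check
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

total = integrate(0, 3)  # total probability
F2 = integrate(0, 2)     # F(2) = (2 - 3)^3 / 20 + 1 = 0.95
print(round(total, 4), round(F2, 4))  # 1.0 0.95
```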

   ∴ F(x) = P(X ≤ x) = 0                  for x ≤ 0
                       (3/5)x             for 0 ≤ x ≤ 1
                       (x − 3)^3/20 + 1   for 1 < x < 3
                       1                  for x ≥ 3

   If asked for F(0.75), look for the range 0.75 falls in, then use that expression.

2. Finding the Probability Density Function (PDF) of a function of X
   a. First find the CDF of X, i.e. change to P(X ≤ x). Then change P(Y ≤ y) into P(X ≤ a), where a is an expression in terms of y. This gives us a CDF for Y, which can be differentiated with respect to y to give a PDF for Y.
   b. Question: X has pdf f(x) = 3x^2/26 for 1 < x < 3. If Y = 1/X, find the pdf of Y and P(Y > 3/4).
      For 1 < x < 3:
      F(x) = ∫_1^x (3t^2/26) dt = [t^3/26]_1^x = x^3/26 − 1/26
      Bringing in Y:
      P(Y ≤ y) = P(1/X ≤ y) = P(Xy ≥ 1) = P(X ≥ 1/y) = 1 − P(X ≤ 1/y)
      P(Y ≤ y) = 1 − ((1/y)^3/26 − 1/26) = 27/26 − 1/(26y^3),  for 1 ≤ 1/y ≤ 3
      New bounds: 1/3 ≤ y ≤ 1
      P(Y > 3/4) = 1 − P(Y ≤ 3/4) = 1 − 27/26 + 64/(26 · 27) = 37/702

PGFs – the sub-topic to rule them all
1. The Basics
   a. G(t) = E(t^X) = Σ t^x P(X = x)
   b. G(1) = Σ 1^x P(X = x) = Σ P(X = x) = 1
   c. G′(1) = E(X) = Σ x · 1^(x−1) P(X = x) = Σ x P(X = x)
   d. Var(X) = G″(1) + G′(1) − [G′(1)]^2
   e. To find P(X = n), use the Maclaurin series: P(X = n) = G^(n)(0)/n!
   f. Alternatively, expand t^x P(X = x) and look for a pattern within the coefficients of t. We can then use that pattern to produce a general term for the coefficients, which, as we know, will equal P(X = x).
   g. Remember that PGFs are only for discrete distributions!
   h. See notebook for proofs.
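The Y = 1/X worked example can be verified symbolically, reproducing the 37/702 answer:

```python
import sympy as sp

# Sketch checking the worked Y = 1/X example: f(x) = 3x^2/26 on (1, 3).
x, y, t = sp.symbols('x y t', positive=True)

F = sp.integrate(3 * t**2 / 26, (t, 1, x))  # CDF of X: (x^3 - 1)/26
F_Y = sp.expand(1 - F.subs(x, 1 / y))       # P(Y <= y) = 1 - P(X <= 1/y)
pdf_Y = sp.diff(F_Y, y)                     # differentiate the CDF of Y

p_tail = sp.simplify(1 - F_Y.subs(y, sp.Rational(3, 4)))  # P(Y > 3/4)
print(p_tail)  # 37/702
```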

Expectation and Variance
1. Poisson: E(X) = λ, Var(X) = λ
2. Binomial: E(X) = np, Var(X) = np(1 − p)
3. Normal: E(X) = μ, Var(X) = σ^2
4. Geometric: E(X) = 1/p, Var(X) = q/p^2
5. Negative Binomial: E(X) = r/p, Var(X) = rq/p^2

Tricks
a. The coefficient of t^r in G(t) is simply P(X = r).
b. Two binomials with the same p produce a binomial when combined.
c. If we combine 2 Poissons, the result is a Poisson.
d. If asked to prove the relationship between 2 variables, and PGFs have been used in the question prior to it, use PGFs for the proof.
e. For a constant b, G_b(t) = t^b.
f. See notebook for the proof of why Var(aX + b) = a^2 Var(X).
g. No matter what: it is really easy to make careless mistakes, so keep your head and be confident.
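Trick (a) can be demonstrated symbolically: expanding a PGF as a power series and reading off coefficients recovers the pmf. A sketch using the geometric PGF with an assumed p = 1/3:

```python
import sympy as sp

# Sketch of trick (a): the coefficient of t^n in G(t) is P(X = n).
# Using the geometric PGF G(t) = pt/(1 - qt) with an assumed p = 1/3.
t = sp.symbols('t')
p = sp.Rational(1, 3)
q = 1 - p
G = p * t / (1 - q * t)

coeffs = [sp.series(G, t, 0, 5).removeO().coeff(t, n) for n in range(1, 5)]
expected = [p * q**(n - 1) for n in range(1, 5)]
print(coeffs)  # [1/3, 2/9, 4/27, 8/81] — matches p*q^(n-1)
```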