
Ardon Pillay - 2016

Probability Notes

• Recall that for a discrete X, P(X ≥ y) = 1 − P(X ≤ y − 1)
• If you see E(Y²) anywhere as a given value, remember
  Var(Y) = E(Y²) − [E(Y)]²
• If the word "differs" is used, take the modulus of the variable inside the
  probability brackets, e.g. P(|X − μ| > a).
• Remember that if X ~ Geo(p) and Y = X1 + X2 + ⋯ + Xr, then Y consists of r
  independent observations of X, therefore Y ~ NB(r, p).
• For questions asking for Var(X + Y) or the expectation equivalent, find the
  variance or expectation of each. For Y = X1 + X2 + ⋯ + Xn, find the
  variance just for X and then do Var(Y) = nVar(X), because Y is n
  independent observations of X.
• If it's something with only 1–5 possible values, do a table like this,
  e.g. for G_X(t) = 1/(2t²) + t/2:

  X          −2    1
  P(X = x)   ½     ½

  Therefore, we can find E(X) and Var(X) = E(X²) − [E(X)]².

• E[X(X − 1)(X − 2)…(X − k)] = G^(k+1)(1), because:

  G(t) = E(t^X) = Σ t^x · P(X = x)
  G′(t) = Σ x · t^(x−1) · P(X = x)
  G″(t) = Σ x(x − 1) · t^(x−2) · P(X = x)
  ⋮
  G^(k+1)(t) = Σ x(x − 1)(x − 2)…(x − k) · t^(x−k−1) · P(X = x)

  ∴ G^(k+1)(1) = Σ x(x − 1)(x − 2)…(x − k) · P(X = x) = E[X(X − 1)…(X − k)]

Distributions

1. Poisson
   a. X ~ Po(λ)
   b. E(X) = λ, Var(X) = λ
   c. P(X = x) = e^(−λ) · λ^x / x! for X ~ Po(λ), 0 ≤ x < ∞
   d. A Poisson relates a mean rate to a quantity: e.g. if the mean number of
      calls to a centre is λ, then the number of calls follows X ~ Po(λ)
   e. The PGF for a Poisson is G(t) = e^(λ(t−1))
   f. Know the proof well (it is found in the written notes); be aware that
      1 + x + x²/2! + x³/3! + ⋯ = e^x,
      except x here is just λt.

   g. For a combined Poisson, X + Y: the mean of X + the mean of Y = the mean
      of X + Y. The two means need not be identical for X + Y ~ Po(mean),
      because you use whatever is in the brackets when you do the
      probability calculations.

2. Binomial
   a. X ~ B(n, p)
   b. E(X) = np, Var(X) = np(1 − p)
   c. P(X = x) = (n choose x) · p^x · (1 − p)^(n−x) for X ~ B(n, p), 0 ≤ x ≤ n
   d. A binomial consists of n independent Bernoulli trials, so each trial
      has two possible outcomes and all trials share the same p.
   e. The PGF for the Binomial Distribution is G(t) = (q + pt)^n. To prove
      it, we must show the PGF for 1 Bernoulli trial, q + pt. Then, for Y the
      sum of n independent trials,
      G_Y(t) = G_{X1 + X2 + ⋯ + Xn}(t) = G_X(t) · G_X(t) · … · G_X(t) = (q + pt)^n
   f. When we combine binomial distributions, they must have the same p for
      the combined probability function to have a binomial distribution.
      This can be proven by PGFs.
   g. Approximating by a Poisson: for large n, X ~ B(n, p) ≈ Po(np).
      Proof: E(X) = μ = np, so p = μ/n, hence
      G(t) = (q + pt)^n = (1 − μ/n + μt/n)^n = (1 + μ(t − 1)/n)^n
      Let n be a large number:
      G(t) = lim as n→∞ of (1 + μ(t − 1)/n)^n = e^(μ(t−1)),
      because lim as n→∞ of (1 + x/n)^n = e^x.
      This is the PGF for a Poisson, QED.

3. Normal Distribution
   a. X ~ N(μ, σ²). When typing into the calculator, it MUST ALWAYS BE IN
      STDEV FORM!! Always square root the variance to ensure that we get the
      right answer; the square around the stdev in the notation is a really
      useful reminder.
   b. Continuous random variable
   c. Symmetrical around μ, which is the mean, mode and median
   d. Total area under the graph of a normal is 1
   e. If P(X < a) = P(X > b), then μ = (a + b)/2. Easy to prove: draw the
      graph as the proof.
   f. P(|X − μ| > 1) = 1 − P(−1 < X − μ < 1)
   g. To find any p, we use the normcdf function.
   h. Normal questions are clearly marked: the question always says "Normally
      distributed." If it does not and a normal-style method is required, you
      must write "approx by CLT."
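The PGF limit argument for approximating B(n, p) by Po(np) can be checked numerically by comparing the two pmfs. A minimal Python sketch; n and p are arbitrary example values, chosen large and small respectively:

```python
import math

def binom_pmf(x, n, p):
    # P(X = x) = C(n, x) p^x (1-p)^(n-x)
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    # P(X = x) = e^(-lam) lam^x / x!
    return math.exp(-lam) * lam**x / math.factorial(x)

n, p = 1000, 0.003    # arbitrary example: large n, small p
lam = n * p           # mu = np

# Compare the two pmfs over the values where the mass lives
max_gap = max(abs(binom_pmf(x, n, p) - poisson_pmf(x, lam))
              for x in range(15))
print(max_gap)  # should be very small
```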

   i. CLT: if we take n sample observations of the value a certain discrete
      distribution takes on, we can approximate their sum or mean to the
      normal distribution (n must be more than 50, though), provided we know
      μ and σ. We must always write "Approx by CLT."
         i. S = X1 + X2 + X3 + … + Xn, E(S) = nμ, Var(S) = nσ²
         ii. X̄ = (X1 + X2 + X3 + … + Xn)/n, E(X̄) = μ, Var(X̄) = σ²/n
   j. In order to use inversenorm, we use the standardised normal
      distribution Z = (X − μ)/σ, where Z follows a normal distribution with
      mean 0 and a standard deviation of 1. Be careful with this one: to find
      y such that P(X ≥ y) = 0.99, we must convert the more-than into a
      less-than, P(X < y) = 0.01. Now we can plug it into inversenorm.
   k. If X and Y are independent normal distributions, then
      E(aX ± bY) = aE(X) ± bE(Y) and Var(aX ± bY) = a²Var(X) + b²Var(Y).

4. Geometric
   a. If X ~ Geo(p), X = number of trials until the first success
   b. E(X) = 1/p
   c. Var(X) = q/p²
   d. P(X = x) = p · q^(x−1) for X ~ Geo(p), 1 ≤ x < ∞
   e. G(t) = pt/(1 − qt), t < 1/q
   f. To find the mode, just plug the prob function into the GDC, set X as
      the unknown, and use the table function to locate which X gives the
      highest probability.

5. Negative Binomial
   a. If X ~ NB(r, p), X = number of trials until the rth success
   b. E(X) = r/p
   c. Var(X) = rq/p²
   d. P(X = x) = (x−1 choose r−1) · p^r · (1 − p)^(x−r) for X ~ NB(r, p),
      r ≤ x < ∞
   e. G(t) = (pt/(1 − qt))^r, t < 1/q
   f. Negative Binomial CDF: just add up all the probs for each individual
      value, because the CDF is out of the course and its formula is really
      complex.
   g. Alternatively, just use a binomial with parameters Y ~ B(x − 1, p) and
      set Y = r − 1; multiply the binom pdf for this by p to find the value
      of the probability.
   h. To find the mode, plug the prob function into the GDC, set X as the
      unknown, and use the table function to locate which X gives the
      highest prob.
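The "binomial trick" for the negative binomial pmf (r − 1 successes in the first x − 1 trials, then a success on trial x) can be verified directly. A minimal Python sketch; r, p and x are arbitrary example values:

```python
import math

def nb_pmf(x, r, p):
    # P(X = x) = C(x-1, r-1) p^r (1-p)^(x-r); X = trial of the r-th success
    return math.comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

def binom_pmf(y, n, p):
    # P(Y = y) = C(n, y) p^y (1-p)^(n-y)
    return math.comb(n, y) * p**y * (1 - p)**(n - y)

r, p, x = 3, 0.2, 7   # arbitrary example values
# Trick: exactly r-1 successes in the first x-1 trials, then one more success
trick = binom_pmf(r - 1, x - 1, p) * p
direct = nb_pmf(x, r, p)
print(direct, trick)  # both 0.049152
```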

   Example: if we want someone to take more than 5 throws to hit his 3rd
   home run, let X = number of throws until the 3rd home run, so
   X ~ NB(3, 0.2), and we want P(X > 5). This means that he can get a
   maximum of 2 hits within the 5 throws he has available, hence
   Y = number of hits within 5 throws, Y ~ B(5, 0.2), and
   P(Y ≤ 2) = 0.942 = P(X > 5).

Estimators

• An estimator is unbiased if E(estimator) = the parameter it estimates.
• S_n² is a biased estimator for population variance:

   S_n² = (Σ X²)/n − X̄²
   E(S_n²) = E((Σ X²)/n) − E(X̄²)
   E((Σ X²)/n) = (1/n) · E(X1² + X2² + … + Xn²) = E(X²)
   E(S_n²) = E(X²) − E(X̄²)    (1)
   RECALL → Var(X̄) = σ²/n = E(X̄²) − [E(X̄)]²
   Rearranging: E(X̄²) = σ²/n + [E(X)]²    (2)
   Subbing (2) into (1):
   E(S_n²) = E(X²) − σ²/n − [E(X)]² = Var(X) − σ²/n = σ² − σ²/n = σ²(n − 1)/n

   Therefore, S_n² is a biased estimator of σ², because E(S_n²) ≠ σ².
   Multiply it by n/(n − 1) to convert it to an unbiased estimator.
• Note the following:
   S_n² = (Σ X²)/n − X̄² = (Σ (Xi − X̄)²)/n, so Σ (Xi − X̄)² = Σ X² − n X̄²
• The most efficient estimator is the one with minimum variance, so do the
  derivative of Var(V), where V is the estimator under investigation, and
  prove it is the minimum using the second derivative.
• If asked to create an estimator for k or something like that, do E(X),
  find it in terms of k, then make k the subject.
• If possible, set Var(X) to σ² and factorise it out; it is easier later
  when you do the derivative of Var(V). Markschemes award marks for this.
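The bias result E(S_n²) = σ²(n − 1)/n can be checked exactly by averaging over every equally likely sample from a tiny population. A minimal Python sketch; the two-point population and sample size are arbitrary examples:

```python
from itertools import product
from statistics import pvariance, variance

population = [0, 1]          # arbitrary example; true sigma^2 = 0.25
sigma2 = pvariance(population)
n = 2                        # sample size

samples = list(product(population, repeat=n))  # all equally likely samples

# pvariance divides by n (the biased S_n^2);
# variance divides by n-1 (the unbiased S_{n-1}^2)
mean_biased = sum(pvariance(s) for s in samples) / len(samples)
mean_unbiased = sum(variance(s) for s in samples) / len(samples)

print(mean_biased, sigma2 * (n - 1) / n)  # E(S_n^2) = sigma^2 (n-1)/n
print(mean_unbiased, sigma2)              # E(S_{n-1}^2) = sigma^2
```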

Confidence Intervals

• When we have a k% CI, we are saying that there is a k% chance that the
  variable being investigated is in that interval.
• Out of 100 samples, k of them will have our variable in them.
• It is taken from a sample of size n.
• Derivation (for 95%):
   P(−1.96 < Z < 1.96) = 0.95
   P(−1.96 < (X̄ − μ)/(σ/√n) < 1.96) = 0.95
   P(−1.96 · σ/√n < X̄ − μ < 1.96 · σ/√n) = 0.95
   P(−X̄ − 1.96 · σ/√n < −μ < −X̄ + 1.96 · σ/√n) = 0.95
   P(X̄ − 1.96 · σ/√n < μ < X̄ + 1.96 · σ/√n) = 0.95
   ∴ CI for 0.95 = (X̄ − 1.96 · σ/√n, X̄ + 1.96 · σ/√n)
• Mean (μ)
   o General CI for μ at k% = (X̄ − Z_α · σ/√n, X̄ + Z_α · σ/√n)
   o Z_α is found by working backwards from the CI. E.g. if it is 95%, this
     means that P(−a < Z < a) = 0.95, hence P(Z < −a) = (1 − 0.95)/2 = 0.025.
     Then apply inversenorm to find the value of a = Z_α.
   o Margin of error = ±Z_α · σ/√n
   o Width of CI = 2 · Z_α · σ/√n, or upper point − lower point.
   o If given an interval like (32.5, 33.5): half the sum of the upper and
     lower points gives the sample mean; use the width to find either σ or n.
   o No matter what, if we are asked to find a sample size, use Z_α, not T_α.
   o However, if we have a T distribution to work with, we define the
     standard deviation as S_{n−1}, the square root of the unbiased estimator
     of population variance, and then apply the same CI formula with T_α.
     This is only when we can obtain an unbiased estimate for the variance
     (a.k.a. we are not explicitly given the variance); the σ symbol is only
     used if the sample comes from a normal distribution.
• Proportion (p)
   o CI for k% = (p̂ − Z_α · √(p̂(1 − p̂)/n), p̂ + Z_α · √(p̂(1 − p̂)/n)),
     where p̂ is the sample proportion.
   o When writing the distribution, use the following:
     X̄ = proportion of girls who drive with a seatbelt on,
     X̄ ~ N(p, p(1 − p)/n)
   o Error for CI of k% = ±Z_α · √(p̂(1 − p̂)/n)
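The "work backwards from the CI" recipe translates directly to code using the stdlib's inverse normal CDF. A minimal Python sketch; the sample mean, σ and n are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

xbar, sigma, n = 33.0, 2.0, 100   # invented example: sample mean, known sigma, n
conf = 0.95

# Work backwards: P(Z < -a) = (1 - conf)/2, so a = inv_cdf(1 - (1 - conf)/2)
z = NormalDist().inv_cdf(1 - (1 - conf) / 2)   # ~1.96 for 95%

margin = z * sigma / sqrt(n)
lower, upper = xbar - margin, xbar + margin
print(round(z, 3), (round(lower, 3), round(upper, 3)))

# Going the other way: recover the sample mean and margin from the endpoints
print((lower + upper) / 2, (upper - lower) / 2)
```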

Continuous Random Variables

1. Basics
   a. ∫ f(x) dx from the lower limit to the upper limit = 1
   b. E(X) = ∫ x · f(x) dx
   c. Var(X) = ∫ x² · f(x) dx − (∫ x · f(x) dx)²
   d. Median m: ∫ f(x) dx from the lower limit to m = 0.50
   e. Percentile: ∫ f(x) dx from the lower limit to x = percentage/100
   f. Interquartile range: 75th percentile − 25th percentile
   g. Mode is the x value that begets the highest vertical axis value
      (differentiate)
   h. For the CDF: F(x) = P(X ≤ x) = ∫ f(t) dt from the lower limit to x,
      where t is a dummy variable.

   Example: if the c.r.v. has the pdf

      f(x) = 0 for x ≤ 0
      f(x) = 3/5 for 0 ≤ x ≤ 1
      f(x) = (3/20)(x − 3)² for 1 < x < 3

   find the CDF of X.

   For 0 ≤ x ≤ 1:
      ∫ from 0 to x of (3/5) dt = (3/5)x
   For 1 ≤ x ≤ 3:
      3/5 + ∫ from 1 to x of (3/20)(t − 3)² dt
      = 3/5 + (x − 3)³/20 − (−8/20)
      = (x − 3)³/20 + 1
   ∴
      F(x) = P(X ≤ x) = 0 for x ≤ 0
      F(x) = (3/5)x for 0 ≤ x ≤ 1
      F(x) = (x − 3)³/20 + 1 for 1 < x < 3
      F(x) = 1 for x ≥ 3

   If asked for F(0.75), look for the range 0.75 falls in, then use that
   expression.
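The piecewise CDF in the example can be sanity-checked by numerically integrating the pdf and comparing against the closed-form pieces. A minimal Python sketch (the midpoint rule and step count are arbitrary choices):

```python
def f(x):
    # pdf from the example
    if 0 <= x <= 1:
        return 3 / 5
    if 1 < x < 3:
        return (3 / 20) * (x - 3) ** 2
    return 0.0

def F(x):
    # CDF derived in the notes
    if x <= 0:
        return 0.0
    if x <= 1:
        return 3 * x / 5
    if x < 3:
        return (x - 3) ** 3 / 20 + 1
    return 1.0

def integrate(g, a, b, steps=100_000):
    # simple midpoint rule
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

for x in (0.75, 2.0, 3.0):
    print(x, F(x), round(integrate(f, 0, x), 6))
```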

π‘…π‘’π‘šπ‘’π‘šπ‘π‘’π‘Ÿ π‘‘β„Žπ‘Žπ‘‘ 𝑃𝐺𝐹𝑠 π‘Žπ‘Ÿπ‘’ π‘œπ‘›π‘™π‘¦ π‘“π‘œπ‘Ÿ π‘‘π‘–π‘ π‘π‘Ÿπ‘’π‘‘π‘’! g. We then can use that pattern to produce a general term for the coefficients. Find the CDF of X. Finding the Probability Density Function (PDF) a.Ardon Pillay - 2016 0 π‘“π‘œπ‘Ÿ π‘₯ ≀ 0 3 π‘“π‘œπ‘Ÿ 0 ≀ π‘₯ ≀ 1 π‘₯ 𝐹 π‘₯ =𝑃 𝑋≀π‘₯ = π‘₯βˆ’3 ! + 1 π‘“π‘œπ‘Ÿ 1 < π‘₯ < 3 20 1 π‘“π‘œπ‘Ÿ π‘₯ > 3 𝐼𝑓 π‘Žπ‘ π‘˜π‘’π‘‘ π‘“π‘œπ‘Ÿ 𝐹 0. This gives us a CDF for Y. If π‘Œ = ! ! ! . 𝑓𝑖𝑛𝑑 𝑝𝑑𝑓 π‘œπ‘“ 𝑦 π‘Žπ‘›π‘‘ π‘π‘Ÿπ‘œπ‘ π‘œπ‘“ π‘Œ > ! For 1 < π‘₯ < 3 ! ! 3𝑑 ! 3𝑑 ! π’™πŸ‘ 𝟏 𝐹 π‘₯ = 𝑑𝑑 = = βˆ’ ! 26 ! 3(26) (πŸπŸ”) πŸπŸ” 1 1 1 𝑃 π‘Œβ‰€π‘¦ =𝑃 ≀ 𝑦 = 𝑃 𝑋𝑦 β‰₯ 1 = 𝑃 𝑋 β‰₯ =1βˆ’π‘ƒ 𝑋 β‰₯ 𝑋 𝑦 𝑦 1 1 1 27 1 1 𝑃 π‘Œ ≀𝑦 =1βˆ’π‘ƒ 𝑋 β‰₯ =1βˆ’ ! βˆ’ = βˆ’ ! . where a is an expression in terms of y. VI: Bringing in Y.75 . See below !! ! Question: 𝑃 𝑋 = π‘₯ = 𝑓 π‘₯ = !" . π‘™π‘œπ‘œπ‘˜ π‘“π‘œπ‘Ÿ π‘Ÿπ‘Žπ‘›π‘”π‘’ 0. 𝐺 1 = 1 = 1! 𝑃 𝑋 = π‘₯ = 𝑃 𝑋 = π‘₯ = 1 c. use the Maclauren Series: 𝑃(𝑋 = 𝑛) = !! f. π‘‡π‘œ 𝑓𝑖𝑛𝑑 𝑃(𝑋 = 𝑛). which can be differentiated with respect to y to give a PDF for Y. First. we expand 𝑑 ! 𝑃 𝑋 = π‘₯ and look for a pattern within the coefficients of t. π‘“π‘œπ‘Ÿ 1 ≀ ≀3 𝑦 26 𝑦 26 26 26 𝑦 𝑦 𝟏 New bounds 𝟏 β‰₯ π’š β‰₯ πŸ‘ 3 3 27 1 πŸ‘πŸ• 𝑃 π‘Œ> =1βˆ’π‘ƒ π‘Œ ≀ =1βˆ’ + ! = 4 4 26 3 πŸ•πŸŽπŸ 26 4 PGFs – the sub-topic to rule them all 1. which as we know. 𝐺 ! 1 = 𝐸 𝑋 = π‘₯ 1 !!! 𝑃 𝑋 = π‘₯ = π‘₯𝑃 𝑋 = π‘₯ ! ! ! ! d. 𝐺 𝑑 = 𝐸 𝑑 ! = 𝑑 ! 𝑃 𝑋 = π‘₯ b. 𝑆𝑒𝑒 π‘›π‘œπ‘‘π‘’π‘π‘œπ‘œπ‘˜ π‘“π‘œπ‘Ÿ π‘π‘Ÿπ‘œπ‘œπ‘“π‘  2. We must change X into 𝑃(𝑋 < π‘₯). . then change 𝑃(π‘Œ < 𝑦) into 𝑃(𝑋 < π‘Ž). The Basics a. π‘“π‘œπ‘Ÿ 1 < π‘₯ < 3. π‘‘β„Žπ‘’π‘› 𝑒𝑠𝑒 π‘‘β„Žπ‘Žπ‘‘ 𝑒π‘₯π‘π‘Ÿπ‘’π‘ π‘ π‘–π‘œπ‘› i.75 π‘“π‘Žπ‘™π‘™π‘  𝑖𝑛. will equal 𝑃 𝑋 = π‘₯ . π‘‰π‘Žπ‘Ÿ 𝑋 = 𝐺 1 +𝐺 1 βˆ’ 𝐺 1 !! ! e.

Expectation and Variance 1. and PGFs have been used in the question prior to it. Poisson - 𝐸 𝑋 = Ξ». π‘‰π‘Žπ‘Ÿ 𝑋 = Ξ» 2. if we combine 2 poissons.Ardon Pillay - 2016 3. π‘‰π‘Žπ‘Ÿ 𝑋 = !! . Binomial - 𝐸 𝑋 = 𝑛𝑝. if they have the same p. π‘‰π‘Žπ‘Ÿ 𝑋 = !! ! !" 5. If asked to prove the relationship between 2 variables. π‘‰π‘Žπ‘Ÿ 𝑋 = 𝑛𝑝(1 βˆ’ 𝑝) 3. Negative Binomial - 𝐸 𝑋 = !. Normal - 𝐸 𝑋 = πœ‡. Geometric - 𝐸 𝑋 = !. use PGFs to prove f. π‘‰π‘Žπ‘Ÿ 𝑋 = 𝜎! ! ! 4. No matter what. See notebook for proof for why π‘‰π‘Žπ‘Ÿ π‘Žπ‘‹ + 𝑏 = π‘Ž! π‘‰π‘Žπ‘Ÿ(𝑋). really easy to make careless mistakes so keep your head and be confident. produce a binomial when combined c. Tricks a. it’ll be a Poisson d. The coefficient of 𝑑 ! is simply 𝑃(𝑋 = π‘Ÿ) b. 2 binomials. 𝐺! 𝑑 = 𝑑 ! e.