DISCRETE PROBABILITY FUNCTIONS CONTINUED

✔ Cumulative Distribution Function for a Discrete Random Variable
✔ More Discrete Probability Functions
✔ Parameters of Probability Distribution Families

6-1. Cumulative Distribution Function for a Discrete Random Variable

It is often of interest to know the probability that the value of a random variable X is less than or equal to some fixed number x. This quantity is called the cumulative distribution function [or the distribution function, or the cumulative distribution] for the random variable X.

Definition 6.1 (Cumulative Distribution Function): Given a random variable X with probability function f(x), the function of x given by

    F(x) = P(X ≤ x)                                        (a)

is called the cumulative distribution function for the random variable X.

The definition as given by Eq. (a) is general and applies also to random variables that are continuous [see Section 8-1], or that are a mixture of discrete and continuous. For a discrete random variable, P(X ≤ x) equals the sum of the discrete probabilities for all values t less than or equal to x, that is, P(X ≤ x) = Σ_{t ≤ x} f(t). Thus, we have

    F(x) = Σ_{t ≤ x} f(t)                                  (b)

as the equation for F(x) for a discrete random variable X with probability function f(x).

notes

(a) The reader should not be confused by the t that appears in Eq. (b). This t is merely a symbol for a variable whose values are less than or equal to the fixed value x. In Example 6-1 below, we shall use x₀ to indicate a fixed value of the random variable X, and we shall replace Eq. (b) by the following equivalent form, in which, additionally, t is replaced by x:

    F(x₀) = Σ_{x ≤ x₀} f(x)

(b) Often, we shall abbreviate cumulative distribution function as cdf. Also, we shall occasionally abbreviate probability function as pf and random variable as rv.

(c) The cumulative distribution function evaluated at x₀, namely F(x₀), should be interpreted as a measure of the accumulation of probability up to and including x₀.

EXAMPLE 6-1: Refer to Examples 5-14 and 5-15, which dealt with the binomial random variable X for n = 2 and p = 1/2. In those examples, X represented the number of heads obtained in two tosses of a fair coin. The table for the probability function of X [known as f(x) or b(x; 2, 1/2)] is as follows:

    x    :  0    1    2
    f(x) : 1/4  1/2  1/4

Here, we laid out this table horizontally, while in Example 5-15 it was laid out vertically. The corresponding bar [line] chart is reproduced below in Figure 6-1a. Determine the cumulative distribution function F(x) in terms of x for all x. Also, draw the graph for F(x) in terms of x.

[Figure 6-1. Bar chart and graph for cumulative distribution function for Example 6-1. (a) Bar chart. (b) Cumulative distribution function graph.]

Solution: Let us tentatively indicate a fixed value of random variable X by the symbol x₀. Thus, we have

    F(x₀) = P(X ≤ x₀)                                      (1)

where the right side is read as "the probability that random variable X is less than or equal to x₀." As an example, let us choose x₀ equal to −1.3. Refer to Figure 6-1a, where the number −1.3 is marked on the x scale. We see that

    F(−1.3) = P(X ≤ −1.3) = 0                              (2)

That is, no probability has been accumulated up to and including −1.3. An equation like Eq. (2) holds for all x₀ values that are less than 0. For other examples, consider x₀ = −10, x₀ = −6.25, and x₀ = −.03. Summarizing, we can write

    F(x₀) = 0    for all x₀ < 0                            (3)

Now, let x₀ = 0. Then, we have

    F(0) = P(X ≤ 0) = f(0) = 1/4                           (4)

That is, F(0) = 1/4 since for x₀ equal to zero the probability accumulated up to and including zero is the probability at zero, and that probability is 1/4. Next, let us set x₀ equal to .6. Refer again to Figure 6-1a, where x₀ = .6 is marked off on the x-axis. We have

    F(.6) = P(X ≤ .6) = f(0) = 1/4                         (5)

since the only value of X less than or equal to .6 where a positive probability occurs is X = 0. Equations like Eqs. (4) and (5) hold for all x₀'s such that 0 ≤ x₀ < 1. For example, consider x₀ = .25, x₀ = .436, and x₀ = .9994. Summarizing, we can write

    F(x₀) = 1/4    for 0 ≤ x₀ < 1                          (6)

Continuing in the same way, for 1 ≤ x₀ < 2 the accumulated probability is f(0) + f(1) = 1/4 + 1/2 = 3/4, and for x₀ ≥ 2 it is f(0) + f(1) + f(2) = 1. Thus, the cumulative distribution function is

    F(x) = 0 for x < 0;  F(x) = 1/4 for 0 ≤ x < 1;  F(x) = 3/4 for 1 ≤ x < 2;  F(x) = 1 for x ≥ 2

Its graph, shown in Figure 6-1b, is a step [staircase] function that jumps by f(x) at each value of x carrying positive probability.
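As a quick illustration of Eq. (b), the minimal sketch below tabulates F(x₀) for the probability function of Example 6-1 by summing f(x) over the values x ≤ x₀. The function name cdf and the particular evaluation points are our own choices for the demonstration, not part of the text.

```python
from fractions import Fraction

# Probability function of Example 6-1: number of heads in two tosses of a fair coin
f = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

def cdf(x0):
    """F(x0) = P(X <= x0), obtained by summing f(x) over all x <= x0 (Eq. (b))."""
    return sum(p for x, p in f.items() if x <= x0)

for x0 in [-1.3, 0, 0.6, 1, 1.5, 2, 3.7]:
    print(f"F({x0}) = {cdf(x0)}")
# Prints 0, 1/4, 1/4, 3/4, 3/4, 1, 1 -- the staircase pattern of Figure 6-1b
```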
6-2. More Discrete Probability Functions

A. The Poisson probability function

Consider a binomial probability function in which n is very large and p is very small, for example

    b(x; 50,000, .0001) = [50,000!/(x!(50,000 − x)!)] (.0001)^x (.9999)^(50,000 − x)    (1)

Direct evaluation of such probabilities is tedious because of the very large factorials involved. For such situations, the following probability function is useful.

Definition 6.2 (Poisson Probability Function): The probability function given by

    p(x; λ) = λ^x e^(−λ)/x!    for x = 0, 1, 2, 3, ...

where λ is a positive constant, is called the Poisson probability function [or the Poisson probability distribution, or the Poisson distribution]. A random variable with this probability function is called a Poisson random variable.

The Poisson probability function provides a useful approximation to the binomial probability function b(x; n, p) when n is large and p is small. It is a good approximation when n ≥ 20 and p ≤ .05, and a very good approximation when n ≥ 100 and np ≤ 10. In the approximation by the Poisson distribution, λ is set equal to the product np.

EXAMPLE 6-4: Refer to Eq. (1) preceding Definition 6.2. Compare the value of b(6; 50,000, .0001) with the corresponding value of the Poisson probability function for the same value of X, namely X = 6, and with λ = np = (50,000)(.0001) = 5.

Solution: Let us use some of the symbolism of Chapter 5. Thus, we can write

    b(6; 50,000, .0001) = K · P(A)                          (1)

where

    K = 50,000!/(6! 49,994!) = (50,000 · 49,999 · ... · 49,995)/6!    (2)

and

    P(A) = (.0001)^6 (.9999)^(49,994)                        (3)

We find that

    b(x; n, p) = b(6; 50,000, .0001) = .146230               (4)

Now, for x = 6 and λ = 5, we see that

    p(x; λ) = p(6; 5) = 5^6 e^(−5)/6! = .146223              (5)

Thus, in this situation, the Poisson probability function provides an approximation of the binomial probability function which is accurate to four significant digits. It should be clear that the calculation involved in Eq. (5) is much easier and quicker than the calculation involved in Eq. (4). The very good accuracy of the approximation is expected since n ≥ 100 and np = λ ≤ 10. We can look up Poisson probability values in Appendix A, Table IV. An illustration of the limiting behavior of binomial probabilities, with np held constant at 10 and x held constant at 11, is provided in Problem 6-2. In that problem, n is allowed to increase without bound with p simultaneously approaching zero, but with the product np held constant at 10.

Theorem 6.3 (Poisson Probability Function as a Limit of the Binomial Probability Function): The limit of the binomial probability function b(x; n, p) as n → ∞ and p → 0, with np held constant and x held constant, is the Poisson probability function p(x; λ) with the same value of x, and with λ = np.

The proof of Theorem 6.3, which is done in Example 6-5, depends on the limit concept of calculus.
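The comparison in Example 6-4 is easy to reproduce by evaluating both probability functions directly. The sketch below uses only Python's standard library (exact integer binomial coefficients via math.comb); the helper names are ours, and the values in the comments are rounded.

```python
import math

def binom_pmf(x, n, p):
    """Binomial probability b(x; n, p) = C(n, x) p^x (1 - p)^(n - x)."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    """Poisson probability p(x; lambda) = lambda^x e^(-lambda) / x!."""
    return lam**x * math.exp(-lam) / math.factorial(x)

n, p, x = 50_000, 0.0001, 6
lam = n * p                        # lambda = np = 5
print(binom_pmf(x, n, p))          # ~0.14623  (binomial value, Eq. (4))
print(poisson_pmf(x, lam))         # ~0.14622  (Poisson value, Eq. (5))
```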
EXAMPLE 6-5: Prove Theorem 6.3.

Proof: In the proof that follows, we shall make use of the following two limit results [from calculus], which are indicated as Eqs. (a) and (b):

    lim_{z→0} (1 − z)^(−1/z) = e                             (a)

    lim_{z→0} (1 − z)^x = 1,    where x is a constant        (b)

[Recall that e denotes a universal constant whose value is 2.71828... .] From the equation for b(x; n, p) as given in Theorem 5.5, we obtain the following:

    b(x; n, p) = [n(n − 1)(n − 2)···(n − x + 1)/x!] p^x (1 − p)^(n − x)    (1)

[Note that in the fractional expression of Eq. (1), in which x! occurs in the denominator, there are x factors in the numerator.] Now, if we factor n out of each of the x factors in the numerator of the fractional expression of Eq. (1), we obtain

    b(x; n, p) = (n^x/x!)(1)(1 − 1/n)(1 − 2/n)···(1 − (x − 1)/n) p^x (1 − p)^(n − x)    (2)

Now, forming the product of n^x with p^x, we get (np)^x. Also, let us write (1 − p)^(n − x) as (1 − p)^n/(1 − p)^x. With these modifications, Eq. (1) becomes

    b(x; n, p) = [(np)^x/x!](1 − 1/n)(1 − 2/n)···(1 − (x − 1)/n)(1 − p)^n/(1 − p)^x    (3)

Now, let us take the limit in Eq. (3) as n → ∞ and p → 0, but with the product np held constant, and with x held constant. First, let

    λ = np                                                   (4)

Also, let us rewrite the factor (1 − p)^n of Eq. (3) as follows:

    (1 − p)^n = [(1 − p)^(−1/p)]^(−np) = [(1 − p)^(−1/p)]^(−λ)    (5)

[Here, we made use of the algebraic result that x^(ab) = (x^a)^b, for the case where x = (1 − p), a = −1/p, b = −np, and ab = n.] Now, if we take the limit in Eq. (3) in the way indicated above [that is, let n → ∞ and p → 0, but with the product np held constant, and with x held constant], we obtain

    lim b(x; n, p) = (λ^x/x!) · lim[(1 − 1/n)(1 − 2/n)···(1 − (x − 1)/n)] · lim(1 − p)^n / lim(1 − p)^x    (6)

Now, employing Eqs. (a) and (b) and the final form for (1 − p)^n as given in Eq. (5), we get

    lim_{n→∞} (1 − 1/n)(1 − 2/n)···(1 − (x − 1)/n) = 1       (7)

    lim_{p→0} (1 − p)^x = 1                                  (8)

and

    lim_{p→0} (1 − p)^n = lim_{p→0} [(1 − p)^(−1/p)]^(−λ) = e^(−λ)    (9)

Observe that Eqs. (8) and (9) follow from applying Eqs. (b) and (a) above with z = p. Now, applying the limit results from Eqs. (7), (8), and (9) to Eq. (6) results in

    lim b(x; n, p) = λ^x e^(−λ)/x!                           (10)

Hence, the limiting probability function is the Poisson probability function p(x; λ).

EXAMPLE 6-6: In the production of fuses in a factory, it is known that 5% of them will be defective. If a box of 20 fuses is selected, what is the probability that (a) exactly two of the fuses are defective, (b) at most two of the fuses are defective, and (c) at least three of the fuses are defective? Use a Poisson approximation to the binomial distribution here.

Solution: Preliminaries: Recall that we actually have a hypergeometric situation here with n = 20 and a/M = .05. However, it is plausible to assume that the population size M [which is unknown] is very large with respect to n; thus, as a first step, we can suppose that our probability function is approximately a binomial probability function with n = 20 and p = .05. [See Section 5-4B for a review of these ideas.] Furthermore, with these values of n and p, the binomial probability function is well approximated by a Poisson probability function with λ = np = (20)(.05) = 1.0.

(a) Here, we wish to calculate P(X = 2) ≈ p(2; 1), where random variable X denotes the number of defectives. We see that

    p(2; 1) = 1^2 e^(−1)/2! = .1839                          (1)

For practice, let us look up this value in Table IV of Appendix A. Refer to the portion of Table IV reproduced below, which indicates that we should find the location in the body of the table at the intersection of the column labeled λ = 1.0 and the row labeled x = 2.

[Portion of Table IV, column λ = 1.0: p(0; 1) = .3679, p(1; 1) = .3679, p(2; 1) = .1839, p(3; 1) = .0613.]

(b) P(at most 2 defectives) ≈ p(0; 1) + p(1; 1) + p(2; 1) = .3679 + .3679 + .1839 = .9197, after using Table IV.

(c) P(at least 3 defectives) = 1 − P(at most 2 defectives) ≈ 1 − .9197 = .0803, after using the result from part (b).

note: We could easily solve the above example by using the binomial probability function with n = 20 and p = .05. For example, from Table I of Appendix A, we see that P(X = 2) = b(2; 20, .05) = .1887, and P(X ≤ 2) = Σ_{x=0}^{2} b(x; 20, .05) = .3585 + .3774 + .1887 = .9246. These are preferred answers to parts (a) and (b) of Example 6-6.
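The note's comparison between the Poisson approximation and the exact binomial answers can be checked directly. The following sketch (standard library only, helper names ours) recomputes parts (a)–(c) both ways; comment values are rounded.

```python
import math

def binom_pmf(x, n, p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    return lam**x * math.exp(-lam) / math.factorial(x)

n, p = 20, 0.05
lam = n * p                                            # lambda = 1.0

# (a) exactly two defectives
print(poisson_pmf(2, lam), binom_pmf(2, n, p))         # ~0.1839 vs ~0.1887
# (b) at most two defectives
at_most_2_pois = sum(poisson_pmf(x, lam) for x in range(3))
at_most_2_binom = sum(binom_pmf(x, n, p) for x in range(3))
print(at_most_2_pois, at_most_2_binom)                 # ~0.9197 vs ~0.9245
# (c) at least three defectives
print(1 - at_most_2_pois, 1 - at_most_2_binom)         # ~0.0803 vs ~0.0755
```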
EXAMPLE 6-7: Verify that Σ_{x=0}^{∞} p(x; λ) = 1. That is, verify that the sum of all Poisson probabilities is equal to 1. Note that we have the sum of an infinite series here, and that means we will have to make use of calculus.

Solution: From calculus, it is known that the infinite series for e^z in powers of z is given by

    e^z = Σ_{x=0}^{∞} z^x/x! = 1 + z + z^2/2! + z^3/3! + ...    (1)

and this is valid for all values of z. Now, for a Poisson distribution, p(x; λ) = λ^x e^(−λ)/x!, and thus

    Σ_{x=0}^{∞} p(x; λ) = Σ_{x=0}^{∞} λ^x e^(−λ)/x!          (2)

Factoring out e^(−λ) from the right side of Eq. (2), we obtain

    Σ_{x=0}^{∞} p(x; λ) = e^(−λ)[1 + λ + λ^2/2! + λ^3/3! + ...]    (3)

Comparing the expression in the square brackets of Eq. (3) with the right side of Eq. (1), we see that the expression in the square brackets of Eq. (3) is equal to e^λ [replace z in Eq. (1) by λ]. Thus, we can rewrite Eq. (3) as follows:

    Σ_{x=0}^{∞} p(x; λ) = e^(−λ) e^λ = 1                     (4)

We see that Eq. (4) is that which we wished to show.

We shall deal again with the Poisson probability function when we cover the topic of Poisson processes.

B. The discrete uniform probability function

Probably the simplest type of discrete probability function is the discrete uniform probability function.

Definition 6.3 (Discrete Uniform Probability Function): Here, the discrete random variable can take on any of the k distinct values x₁, x₂, ..., x_{k−1}, x_k. If the probability P(X = x) = f(x) is given by

    f(x) = 1/k    for each of x₁, x₂, ..., x_k

then f(x) is a discrete uniform probability function [or discrete uniform probability distribution]. The associated random variable is called a discrete uniform random variable.

An example of a discrete uniform random variable is the random variable that represents the number of dots that occurs when a fair die is tossed. Here, f(x) = 1/6 for X = 1, 2, 3, 4, 5, or 6.

EXAMPLE 6-8: A box contains 50 chips, where ten are marked with the number 2, ten with the number 4, ten with the number 5, ten with the number 6, and ten with the number 9. An experiment consists of drawing a single chip from the box. Let random variable X indicate the number on the chip that is drawn. Determine and identify f(x), the probability function for X.

Solution: f(x) = 1/5 for X = 2, 4, 5, 6, 9. Thus, f(x) is a discrete uniform probability function.

C. The geometric probability function

Recall that the repeated independent and identical trials in a binomial experiment are often referred to as Bernoulli trials.

Definition 6.4 (Geometric Probability Function): Given a sequence of Bernoulli trials for which the probability of success on a single Bernoulli trial is p [which is constant]. Let the random variable X denote the number of the trial on which the first success occurs. This X is called a geometric random variable, and it takes on the values 1, 2, 3, 4, .... The probability function for this X is denoted by g(x; p), and it is called the geometric probability function or the geometric probability distribution or the geometric distribution.

Theorem 6.4: The geometric probability function g(x; p) is given by

    g(x; p) = (1 − p)^(x−1) p,    where X = 1, 2, 3, 4, ...

[Sometimes, we shall use the symbol q for (1 − p), as was the case with the binomial probability function.] Observe that X takes on a countably infinite number of values. Also, 0 < p < 1. [Note that some practitioners allow for p = 1. In this special case, one would have g(x; 1) = 1 for X = 1, and g(x; 1) = 0 for X = 2, 3, 4, ....]

EXAMPLE 6-9: Prove Theorem 6.4.

Proof: Suppose that X denotes the number of the trial on which the first success occurs. If the first success occurs on the first trial, then we have X = 1 and

    g(1; p) = P(success on first trial) = p                  (1)

Now, let's focus on deriving the equation for g(x; p) for X = 2, 3, 4, and so on. Here, g(x; p) is the probability that there is a sequence of x trials consisting of (x − 1) failures in a row followed by the first success on the last [xth] trial. That is,

    P(X = x) = g(x; p) = P(F₁F₂...F_{x−1}S_x) = P(F₁)P(F₂)···P(F_{x−1})P(S_x)
             = (1 − p)(1 − p)···(1 − p)·p = (1 − p)^(x−1) p    (2)

since the (x − 1) factors preceding p are each equal to (1 − p) and the trials are independent. Thus, we have

    g(x; p) = (1 − p)^(x−1) p    for X = 2, 3, 4, ...        (3)

If we let X = 1 in Eq. (3), we obtain Eq. (1).

A verification that Σ_{x=1}^{∞} g(x; p) = 1 is presented in Problem 6-4.
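As a quick empirical check of Theorem 6.4 (and a preview of the card-drawing example that follows), the sketch below simulates Bernoulli trials, records the trial number of the first success, and compares the observed frequencies with (1 − p)^(x−1) p. The function name and the run count of 100,000 are arbitrary choices of ours.

```python
import random
from collections import Counter

def first_success_trial(p):
    """Run Bernoulli trials until the first success; return its trial number."""
    x = 1
    while random.random() >= p:      # failure occurs with probability 1 - p
        x += 1
    return x

p, runs = 0.25, 100_000
counts = Counter(first_success_trial(p) for _ in range(runs))

for x in range(1, 6):
    simulated = counts[x] / runs
    exact = (1 - p) ** (x - 1) * p   # g(x; p) from Theorem 6.4
    print(x, round(simulated, 4), round(exact, 4))
```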
EXAMPLE 6-10: Cards are drawn with replacement from a standard deck containing 52 cards. What is the probability that the first spade occurs on the seventh card?

Solution: Method 1: We wish to compute

    P(F₁F₂F₃F₄F₅F₆S₇) = P(F₁)P(F₂)···P(F₆)P(S₇)              (1)

Here, Fᵢ indicates a nonspade on the ith trial and S₇ indicates a spade on the seventh trial. Now, P(Fᵢ) = P(nonspade on ith card) = 3/4 and P(S₇) = P(spade on 7th card) = 1/4. Thus, Eq. (1) leads to the following result:

    (3/4)^6 (1/4) = .044495                                  (2)

Method 2: The probability we seek is the geometric probability function evaluated for p = 1/4 and X = 7. This quantity is g(x; p) = g(7; 1/4). By Theorem 6.4, this equals (3/4)^6 (1/4), as in Method 1.

In calculus, one learns that a geometric series is an infinite series symbolized by

    Σ_{x=1}^{∞} a·r^(x−1) = a + ar + ar^2 + ... + ar^(x−1) + ...    (a)

Theorem 6.5 (Sum of a Geometric Series): A geometric series converges provided that −1 < r < 1, and then its sum is a/(1 − r). [If r ≥ 1 or r ≤ −1, the geometric series diverges; that is, it doesn't converge, and hence no finite value can be assigned to its sum.] Briefly, we write

    Σ_{x=1}^{∞} a·r^(x−1) = a/(1 − r)    when −1 < r < 1     (b)

EXAMPLE 6-11: Given a geometric random variable X with probability of success p on a single trial, and let q = 1 − p. (a) Prove that P(X > k) = (1 − p)^k, or q^k, where k is a positive integer. (b) Prove that P(X ≤ k) = 1 − q^k.

Proof: (a) First of all, we know that

    P(X > k) = P(X ≥ k + 1) = g(k + 1; p) + g(k + 2; p) + g(k + 3; p) + ...    (1)

From Theorem 6.4, we have g(k + 1; p) = (1 − p)^k p, g(k + 2; p) = (1 − p)^(k+1) p, g(k + 3; p) = (1 − p)^(k+2) p, and so on. Thus, substituting equations like these into Eq. (1), we obtain

    P(X > k) = (1 − p)^k p + (1 − p)^(k+1) p + (1 − p)^(k+2) p + ...    (2)

Factoring out (1 − p)^k p from all terms of the right side of Eq. (2), we get

    P(X > k) = (1 − p)^k p [1 + (1 − p) + (1 − p)^2 + ...]   (3)

or

    P(X > k) = (1 − p)^k p · Σ_{x=1}^{∞} (1 − p)^(x−1)        (4)

Let us apply the equation for the sum of a geometric series [see, for example, Eq. (b) of Theorem 6.5] to the sigma expression of Eq. (4). Here, a = 1 and r = 1 − p. [Note that it is valid to do this since 0 < p < 1 implies 0 < 1 − p < 1.] The sum equals 1/[1 − (1 − p)] = 1/p, and thus Eq. (4) becomes

    P(X > k) = (1 − p)^k p (1/p) = (1 − p)^k = q^k           (5)

(b) Here, we have

    P(X ≤ k) = 1 − P(not X ≤ k) = 1 − P(X > k)               (6)

If we substitute the result from Eq. (5) into Eq. (6), we obtain

    P(X ≤ k) = 1 − q^k                                       (7)

EXAMPLE 6-12: Given the situation of Example 6-10. What is the probability that the first spade occurs on or before the seventh card drawn?

Solution: The desired probability is P(X ≤ 7), where X is a geometric random variable [which stands for the number of the trial, or card, on which the first spade occurs]. Thus, from part (b) of Example 6-11, and the fact that q = 1 − p = 1 − 1/4 = 3/4, we obtain

    P(X ≤ 7) = 1 − (3/4)^7 = 1 − .13348 = .86652
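Example 6-12's shortcut P(X ≤ k) = 1 − q^k can be cross-checked against a direct summation of geometric probabilities. A minimal sketch for the spade-drawing setup (p = 1/4, k = 7):

```python
p, k = 0.25, 7
q = 1 - p

# Direct sum of g(x; p) = q^(x-1) * p for x = 1, ..., k
direct = sum(q ** (x - 1) * p for x in range(1, k + 1))

# Closed form from Example 6-11(b): P(X <= k) = 1 - q^k
closed = 1 - q ** k

print(direct, closed)   # both ~0.86652
```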
D. The negative binomial probability function

Concerning repeated Bernoulli trials, sometimes we are interested in the number of the trial on which the kth success occurs. [Here, k = 1 or 2 or 3, etc. If k were to equal 1, we would have the geometric distribution of Section 6-2C.] For example, we may want to know the probability that the eighth toss of a fair die will yield the third ace, or the probability that the twentieth card drawn with replacement from a standard deck of cards will result in the sixth spade drawn.

Definition 6.5 (Negative Binomial Probability Function): Given a sequence of Bernoulli trials for which the probability of success on a single trial is p [which is constant]. Let the random variable X denote the number of the trial on which the kth success occurs. [The integer k has the property that k ≥ 1.] This random variable X is called a negative binomial random variable, and it takes on the values k, k + 1, k + 2, .... Its probability function, denoted by b*(x; k, p), is called a negative binomial probability function, or a negative binomial probability distribution, or, merely, a negative binomial distribution.

Theorem 6.6: The negative binomial probability function b*(x; k, p) is given by

    b*(x; k, p) = C(x − 1, k − 1) p^k (1 − p)^(x−k)    for x = k, k + 1, k + 2, ...

where C(x − 1, k − 1) = (x − 1)!/[(k − 1)!(x − k)!]. Observe that x takes on a countably infinite number of values. Also, 0 < p < 1. [The special case b*(x; k, p) for p = 1 is given by b*(x; k, 1) = 1 for x = k, and b*(x; k, 1) = 0 for x = k + 1, k + 2, k + 3, and so on.]

notes

(a) Sometimes, q is used instead of 1 − p.

(b) Recall that the geometric distribution is the negative binomial distribution for k = 1. Thus, we have b*(x; 1, p) = g(x; p).

(c) Negative binomial distributions are also known as binomial waiting-time distributions or as Pascal distributions.

EXAMPLE 6-13: Prove Theorem 6.6.

Proof: If the kth success occurs on the xth trial [where x is equal to either k or k + 1 or k + 2, etc.], that means that there must be (k − 1) successes on the first (x − 1) trials, followed by a success on the xth trial. The probability of the former is the binomial probability of (k − 1) successes in (x − 1) trials, and it is given by

    C(x − 1, k − 1) p^(k−1) (1 − p)^(x−k)                    (1)

as we see from Theorem 5.5 [replace n by (x − 1) and x by (k − 1)]. The probability of success on the xth trial is merely p. Now, because the x trials are independent, it follows that the event of obtaining (k − 1) successes on the first (x − 1) trials is independent of the event of getting a success on the xth trial. Thus, by Definition 3.2, the probability b*(x; k, p) is the product of p times the expression given in Eq. (1). That is, we have

    b*(x; k, p) = C(x − 1, k − 1) p^k (1 − p)^(x−k)    for x = k, k + 1, k + 2, ...    (2)

A proof that Σ_{x=k}^{∞} b*(x; k, p) = 1, based on the properties of the binomial infinite series [another calculus topic], is presented on page 222 of Larsen and Marx (1986).

EXAMPLE 6-14: What is the probability that the twentieth card drawn with replacement from a standard deck of cards will be the sixth spade drawn?

Solution: First, note that the probability of a spade on a single card is .25. That is, p = .25.

Method 1: If the sixth spade occurs on the twentieth card drawn, then there must be exactly 5 spades on the first 19 cards drawn, followed by a spade on the twentieth card drawn. The probability of the former event is the binomial probability

    b(5; 19, .25) = [19!/(5! 14!)](.25)^5 (.75)^14 = .20233  (1)

This value is available from Table I of Appendix A as .2023. The probability of a spade on the twentieth card drawn is .25. Thus, the probability we seek is the product of the probabilities of these two independent events, namely

    [19!/(5! 14!)](.25)^5 (.75)^14 · (.25) = .05058           (2)

Method 2: The answer we seek is just the negative binomial probability for x = 20 trials, where k = 6 and p = .25. Thus, from Theorem 6.6, we have

    b*(20; 6, .25) = C(19, 5)(.25)^6 (.75)^14

and this is equal to .05058.

The following theorem expresses the negative binomial probability function in terms of the binomial probability function. Theorem 6.7 enables one to use the table of binomial probabilities [Table I of Appendix A] in the calculation of negative binomial probabilities.

Theorem 6.7: b*(x; k, p) = (k/x)·b(k; x, p), where x = k, k + 1, k + 2, .... On the left is the negative binomial probability function for success number k occurring on trial number x. On the right, we have the binomial probability function for exactly k successes in x trials. [It is important to keep the symbols straight here. The general binomial probability function symbol is b(x; n, p), which represents the binomial probability of exactly x successes in n trials.]
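Theorem 6.7 is easy to check numerically. The sketch below confirms, for the card example, that b*(20; 6, .25) equals (6/20)·b(6; 20, .25); the helper names are our own.

```python
import math

def binom_pmf(x, n, p):
    """b(x; n, p): probability of exactly x successes in n trials."""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def neg_binom_pmf(x, k, p):
    """b*(x; k, p): probability that the k-th success occurs on trial x (Theorem 6.6)."""
    return math.comb(x - 1, k - 1) * p**k * (1 - p)**(x - k)

x, k, p = 20, 6, 0.25
print(neg_binom_pmf(x, k, p))           # ~0.05058
print((k / x) * binom_pmf(k, x, p))     # same value, via Theorem 6.7
```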
EXAMPLE 6-15: Redo Example 6-14 by making use of Theorem 6.7 and Table I of Appendix A.

Solution: Here, we wish to calculate b*(20; 6, .25). Let us substitute x = 20, k = 6, and p = .25 into the equation of Theorem 6.7 to get

    b*(20; 6, .25) = (6/20)·b(6; 20, .25)                    (1)

In Table I, we find that b(6; 20, .25) = .1686. Thus, we have

    b*(20; 6, .25) = (6/20)(.1686) = .05058                  (2)

as before.

EXAMPLE 6-16: Prove Theorem 6.7.

Proof: In the usual expression for the binomial probability function, namely in

    b(x; n, p) = [n!/(x!(n − x)!)] p^x (1 − p)^(n−x)         (1)

let us replace x by k and n by x. Thus, we get

    b(k; x, p) = [x!/(k!(x − k)!)] p^k (1 − p)^(x−k)         (2)

Now,

    (k/x)·(x!/k!) = (x − 1)!/(k − 1)!                        (3)

since a!/a = (a − 1)! if a is a positive integer. Also, from Theorem 6.6, we have

    b*(x; k, p) = [(x − 1)!/((k − 1)!(x − k)!)] p^k (1 − p)^(x−k)    (4)

Thus, if we multiply both sides of Eq. (2) by k/x, and take note of Eqs. (3) and (4), we see that

    (k/x)·b(k; x, p) = b*(x; k, p)                           (5)

6-3. Parameters of Probability Distribution Families

Each probability distribution that we covered in this chapter and in Chapter 5 actually consists of a family of individual probability distributions. For example, for the binomial distribution as given by Theorem 5.5, namely

    b(x; n, p) = [n!/(x!(n − x)!)] p^x (1 − p)^(n−x)

there is an individual probability distribution for each pair of values of n and p. The quantities n and p are known as parameters for the binomial distribution family. For example, we have a particular binomial distribution in Example 6-1 [there, n = 2 and p = .5], and we have another binomial distribution in Example 5-21 [there, n = 8 and p = .10]. Each of these individual binomial distributions may be considered a member of the binomial distribution family.

Speaking in general, we say that the parameters for a probability distribution are quantities which are constants for a particular member of the probability distribution family, but which take on different sets of values for different members of the probability distribution family. Thus, for the binomial distribution family cited above, the parameters are n and p, where n denotes a positive integer [and n ≥ 2 for meaningful situations], and p can vary continuously on the interval given by 0 < p < 1.
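One way to picture a distribution family is as a rule that maps a parameter choice to a concrete probability function. The sketch below builds the two binomial family members cited above; the helper name make_binomial is ours, chosen only for the illustration.

```python
import math

def make_binomial(n, p):
    """Return the probability function of one member of the binomial family."""
    def pmf(x):
        return math.comb(n, x) * p**x * (1 - p)**(n - x)
    return pmf

# Two members of the family: Example 6-1 (n = 2, p = .5) and Example 5-21 (n = 8, p = .10)
member_a = make_binomial(2, 0.5)
member_b = make_binomial(8, 0.10)

print([member_a(x) for x in range(3)])               # [0.25, 0.5, 0.25]
print([round(member_b(x), 4) for x in range(3)])     # first few probabilities for n = 8, p = .10
```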

PROBLEM 6-2 Illustrate the limiting behavior described in Theorem 6.3 by computing the binomial probability b(11; n, p) for increasing values of n, with p simultaneously decreasing so that np is held constant at 10, and comparing the results with the Poisson probability p(11; 10).

Solution: The computed values are collected in Table 6-3.

[TABLE 6-3. Approach of binomial distribution values b(11; n, p) to the corresponding Poisson value, as n increases and p decreases with np = 10. The tabulated values are .16018, .11988, .11431, and .11379.]

Observe that the Poisson probability p(x; λ) for x = 11 and λ = 10 is equal to .11374, as expected.

note: Recall that p(11; 10) = (10^11 e^(−10))/(11!). A value for p(11; 10) which is accurate to four significant digits, namely .1137, is available from Table IV of Appendix A.

PROBLEM 6-3 Over a long period of time, 2.3% of the calls received by a switchboard are wrong numbers. Use the Poisson approximation to the binomial probability function to estimate the probability that among 200 calls received by the switchboard, (a) exactly four are wrong numbers, (b) at most two are wrong numbers, and (c) at least four are wrong numbers.

Solution: Preliminaries: In the approximating Poisson probability function, we use λ = np = 200(.023) = 4.60. We can look up the relevant Poisson probabilities in Table IV of Appendix A.

(a) P(X = 4) ≈ p(4; 4.60) = .1875.
(b) P(X ≤ 2) ≈ p(0; 4.60) + p(1; 4.60) + p(2; 4.60) = .1626.
(c) P(X ≥ 4) = 1 − P(X ≤ 3) ≈ 1 − [p(0; 4.60) + p(1; 4.60) + p(2; 4.60) + p(3; 4.60)] = .6743.

PROBLEM 6-4 Verify that Σ_{x=1}^{∞} g(x; p) = 1. Recall that g(x; p) denotes the geometric probability function for which the probability of success in a single Bernoulli trial is p.

Proof: Since g(x; p) = (1 − p)^(x−1) p, we have

    Σ_{x=1}^{∞} g(x; p) = Σ_{x=1}^{∞} (1 − p)^(x−1) p        (1)

The series on the right side of Eq. (1) is a geometric series [see Eq. (a) of Theorem 6.5] with a = p and r = 1 − p. Since 0 < p < 1 implies −1 < 1 − p < 1, Theorem 6.5 applies, and the sum in Eq. (1) is given by

    a/(1 − r) = p/[1 − (1 − p)] = p/p = 1                    (2)

PROBLEM 6-6 (a) Given that X is a geometric random variable, prove that

    P(X = x + s | X > s) = P(X = x)                          (1)

The expression on the left side is the conditional probability that random variable X is equal to x + s, given that X is greater than s. Establishing Eq. (1) shows that a geometric random variable has a memoryless property. Here, x and s are integers such that x ≥ 1 and s ≥ 0. (b) Suppose that a fair die is repeatedly tossed. What is the probability that the first ace occurs on the eighth trial, given that the first ace occurs on a trial higher than the third trial?

Solution

(a) Suppose that X is a geometric random variable. Then the right side of Eq. (1) can be written as follows:

    P(X = x) = (1 − p)^(x−1) p    for x = 1, 2, 3, ...       (2)

Using Definition 3.3, the conditional probability form on the left side of Eq. (1) can be expressed as

    P(X = x + s | X > s) = P(X = x + s and X > s)/P(X > s)   (3)

From part (a) of Example 6-11, the denominator of Eq. (3) is

    P(X > s) = (1 − p)^s                                     (4)

Consider the event (X = x + s and X > s) of the numerator of Eq. (3). At the outset, recall that x and s are integers such that x ≥ 1 and s ≥ 0. The collection of values of random variable X such that X > s consists of 1 + s, 2 + s, 3 + s, ..., x − 1 + s, x + s, x + 1 + s, .... Thus, the collection of values of random variable X such that X = x + s and X > s consists only of the single value X = x + s. Thus, we have

    P(X = x + s and X > s) = P(X = x + s) = (1 − p)^(x+s−1) p    (5)

The last term of Eq. (5) holds since X is a geometric random variable. Substituting from Eqs. (4) and (5) into Eq. (3), and canceling out (1 − p)^s, yields

    P(X = x + s | X > s) = (1 − p)^(x−1) p                   (6)

Thus, since the right sides of Eqs. (2) and (6) are identical, we see that

    P(X = x + s | X > s) = P(X = x)                          (7)

which is what we intended to show [see Eq. (1) in the statement of the problem].

(b) Let X denote the trial on which the first ace occurs. Thus, X is a geometric random variable for which p = 1/6. In this problem, we are seeking the value of the conditional probability given by the following expression:

    P(X = 8 | X > 3)                                         (8)

Now, the result proved in part (a), which is given by Eq. (7) above, applies here with s = 3 and x + s = 8. Thus, it follows that x = 5. For s = 3 and x = 5, Eq. (7) becomes

    P(X = 8 | X > 3) = P(X = 5)                              (9)

For P(X = 5), we have the geometric probability g(5; 1/6). Thus, we have

    P(X = 8 | X > 3) = g(5; 1/6) = (5/6)^4 (1/6) = .080376   (10)
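The memoryless identity of Problem 6-6 can also be verified numerically. The brief sketch below computes P(X = 8 | X > 3) from the definition of conditional probability and compares it with P(X = 5) for p = 1/6.

```python
p = 1 / 6
g = lambda x: (1 - p) ** (x - 1) * p      # geometric pmf g(x; p)

# Conditional probability from its definition:
# P(X = 8 | X > 3) = P(X = 8 and X > 3) / P(X > 3) = g(8) / (1 - p)**3
conditional = g(8) / (1 - p) ** 3

print(conditional)   # ~0.080376
print(g(5))          # same value: the memoryless property P(X = x+s | X > s) = P(X = x)
```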
PROBLEM 6-7 A fair die has the faces with 1, 2, and 3 dots painted red, the faces with 4 and 5 dots painted green, and the face with 6 dots painted white. The die is tossed repeatedly until either a red or green face occurs [if white occurs, the die is tossed again]. What is the probability that a red face occurs before a green face occurs?

Solution: For a single toss of the die, we have

    P(R) = 3/6,   P(G) = 2/6,   P(W) = 1/6                   (1a), (1b), (1c)

where R, G, and W indicate faces that are "red," "green," and "white," respectively. Let Rᵢ indicate "red occurs on toss i," and, similarly, let Wⱼ indicate "white occurs on toss j." Thus, we have

    P(red occurs first) = P(R₁ or W₁R₂ or W₁W₂R₃ or W₁W₂W₃R₄ or ...)    (2)

where, for example, the symbol W₁W₂R₃ means that "white" occurs on the first two tosses, followed by "red" on the third toss. The events cited on the right side of Eq. (2) are mutually exclusive, and thus

    P(red occurs first) = P(R₁) + P(W₁R₂) + P(W₁W₂R₃) + ...  (3)

Since the individual tosses constitute independent trials, the events in the individual terms are independent. [See Theorem 5.3. For example, in the individual term W₁W₂R₃, the events W₁, W₂, and R₃ are independent.] Thus, Eq. (3) becomes

    P(red occurs first) = P(R₁) + P(W₁)P(R₂) + P(W₁)P(W₂)P(R₃) + ...    (4)

Now, using Eqs. (1a) and (1c) in Eq. (4), and noting that P(R₁) = P(R), etc., we obtain

    P(red occurs first) = 3/6 + (1/6)(3/6) + (1/6)^2 (3/6) + ...    (5)

That is,

    P(red occurs first) = Σ_{x=1}^{∞} (3/6)(1/6)^(x−1)        (6)

Thus, after using Theorem 6.5 to determine the sum of this geometric series [here, a = 3/6 and r = 1/6], we obtain

    P(red occurs first) = (3/6)/(1 − 1/6) = (3/6)/(5/6) = 3/5    (7)

Because of the nature of the termination rule [which is to stop whenever red or green occurs], we have

    P(green occurs first) = 1 − P(red occurs first) = 2/5    (8)

Parameters of Probability Distribution Families

PROBLEM 6-8 Specify the members of the hypergeometric distribution family for which the parameters take on the following values: (a) n = 20, a = 30, M = 100; (b) n = 5, a = 3, M = 25.

Solution

(a) Here, h(x; n, a, M) = h(x; 20, 30, 100) for x = 0, 1, 2, ..., 20.

(b) Here, h(x; n, a, M) = h(x; 5, 3, 25) for x = 0, 1, 2, 3, 4, 5.

note: In part (b), the probabilities for x = 4 and 5 are equal to zero.

Supplementary Problems

PROBLEM 6-9 Determine the cumulative distribution function F(x) for the hypergeometric random variable for which n = 3, a = 4, and M = 10. Sketch the bar chart for f(x) = h(x; 3, 4, 10) and the graph for the cumulative distribution function, F(x), in terms of x.

Answer: F(x) = 0 for x < 0; F(x) = 20/120 for 0 ≤ x < 1; F(x) = 80/120 for 1 ≤ x < 2; F(x) = 116/120 for 2 ≤ x < 3; F(x) = 1 for x ≥ 3.
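For the supplementary problem above, the hypergeometric probabilities and the cumulative distribution can be generated directly. A minimal sketch using exact fractions (the helper name is ours):

```python
from fractions import Fraction
from math import comb

def hypergeom_pmf(x, n, a, M):
    """h(x; n, a, M): probability of x marked items in a sample of n from M items, a of them marked."""
    return Fraction(comb(a, x) * comb(M - a, n - x), comb(M, n))

n, a, M = 3, 4, 10
cumulative = Fraction(0)
for x in range(n + 1):
    cumulative += hypergeom_pmf(x, n, a, M)
    print(x, hypergeom_pmf(x, n, a, M), cumulative)
# F(x) steps through 1/6 (= 20/120), 2/3 (= 80/120), 29/30 (= 116/120), and 1
```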
