
ST 260: Statistical Data Analysis


Normal Approximation to Binomial and Poisson Distributions

Example 1
In a digital communication channel, assume that the number of bits received in error can be modeled by a binomial random variable, and assume that the probability that a bit is received in error is 1 × 10⁻⁵. If 16 million bits are transmitted, what is the probability that 150 or fewer errors occur?

Let the random variable X denote the number of errors. Then X is binomial and

P(X ≤ 150) = Σ_{x=0}^{150} C(16,000,000, x) (10⁻⁵)^x (1 − 10⁻⁵)^(16,000,000 − x)

Clearly, the probability above is difficult to compute. Fortunately, the normal distribution can be used to find an excellent approximation in this case.

Normal Approximation to Binomial

Definition 1
If X is a binomial random variable with parameters n and p, then

Z = (X − np) / √(np(1 − p))

is approximately a standard normal random variable. Probabilities involving X can be approximated by using the standard normal distribution. To approximate a binomial probability with a normal distribution, a continuity correction is applied as follows:

P(X ≤ x) = P(X ≤ x + 0.5) ≈ P( Z ≤ (x + 0.5 − np) / √(np(1 − p)) )

and

P(X ≥ x) = P(X ≥ x − 0.5) ≈ P( Z ≥ (x − 0.5 − np) / √(np(1 − p)) )

The above approximation is good when n is large relative to p (see Figure 1).
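(Aside, not part of the original notes.) A minimal sketch of this continuity-corrected approximation applied to Example 1; it assumes SciPy is available, and the function and variable names are my own.

```python
# Sketch: continuity-corrected normal approximation to a binomial lower tail,
# applied to Example 1 (n = 16,000,000, p = 1e-5, P(X <= 150)). Assumes SciPy.
from math import sqrt
from scipy.stats import binom, norm

def binom_cdf_normal_approx(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p) via Z = (X - np)/sqrt(np(1-p)),
    with the continuity correction x -> x + 0.5."""
    mu = n * p
    sigma = sqrt(n * p * (1 - p))
    return norm.cdf((x + 0.5 - mu) / sigma)

n, p, x = 16_000_000, 1e-5, 150
print("exact :", binom.cdf(x, n, p))                 # exact binomial CDF
print("approx:", binom_cdf_normal_approx(x, n, p))   # normal approximation
```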

[Figure 1: Normal approximations to binomial distribution.]

Example 2
The digital communication problem in the previous example is solved as follows:

P(X ≤ 150) = P(X ≤ 150.5)
           = P( (X − 160) / √(160(1 − 10⁻⁵)) ≤ (150.5 − 160) / √(160(1 − 10⁻⁵)) )
           ≈ P(Z ≤ −0.75)
           = 0.227

Example 3
Again consider the transmission of bits as given in the previous examples. To judge how well the normal approximation works, assume only 50 bits are to be transmitted and the probability of an error is p = 0.1. The exact probability that two or fewer errors occur is

P(X ≤ 2) = C(50, 0) (0.1)⁰ (0.9)⁵⁰ + C(50, 1) (0.1)¹ (0.9)⁴⁹ + C(50, 2) (0.1)² (0.9)⁴⁸ = 0.112

Based on the normal approximation,

P(X ≤ 2) = P( (X − 5) / √(50(0.1)(0.9)) ≤ (2.5 − 5) / √(50(0.1)(0.9)) ) ≈ P(Z < −1.18) = 0.119

Even for a small sample of 50 bits, the normal approximation is reasonable.
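(Aside.) A quick numerical check of Example 3, assuming SciPy; it reproduces the exact and approximate values quoted above.

```python
# Sketch: exact vs. continuity-corrected normal approximation for Example 3
# (n = 50, p = 0.1, P(X <= 2)). Assumes SciPy is available.
from math import sqrt
from scipy.stats import binom, norm

n, p = 50, 0.1
exact = binom.cdf(2, n, p)                                 # P(X=0) + P(X=1) + P(X=2)
approx = norm.cdf((2.5 - n * p) / sqrt(n * p * (1 - p)))   # 2 -> 2.5 correction

print(f"exact  = {exact:.3f}")    # about 0.112
print(f"approx = {approx:.3f}")   # about 0.119
```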

Normal Approximation to Poisson

Definition 2
If X is a Poisson random variable with E(X) = λ and Var(X) = λ, then

Z = (X − λ) / √λ

is approximately a standard normal random variable. The approximation is good for λ > 5.

Example 4
Assume that the number of asbestos particles in a square meter of dust on a surface follows a Poisson distribution with a mean of 1000. If a square meter is analyzed, what is the probability that 950 or fewer particles are found?

The probability can be expressed exactly as

P(X ≤ 950) = Σ_{x=0}^{950} e⁻¹⁰⁰⁰ 1000^x / x!

The computational difficulty is clear. The probability can be approximated as

P(X ≤ 950) ≈ P( Z ≤ (950.5 − 1000) / √1000 ) = P(Z ≤ −1.5653) = 0.058756

[Figure 2: Normal approximations to Poisson distribution (λ = 1, 8, 10, and 20).]
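(Aside.) The same exact-versus-approximate comparison for the Poisson case, assuming SciPy.

```python
# Sketch: exact vs. normal approximation for Example 4
# (X ~ Poisson(1000), P(X <= 950)). Assumes SciPy is available.
from math import sqrt
from scipy.stats import poisson, norm

lam = 1000
exact = poisson.cdf(950, lam)                  # exact Poisson CDF
approx = norm.cdf((950.5 - lam) / sqrt(lam))   # continuity-corrected approximation

print(f"exact  = {exact:.6f}")
print(f"approx = {approx:.6f}")   # about 0.058756
```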

Assessing Normality

• Whether or not the normal distribution is an adequate model for a given data set is often assessed by
  1. Comparing characteristics of the data with theoretical properties of the normal distribution
  2. Frequency histogram
  3. Box-plots
  4. Normal probability plot

Comparing Data Characteristics to Theoretical Properties

The normal distribution has several important theoretical properties:
• It is symmetrical; thus, the mean and median are equal.
• It is bell-shaped; thus, the empirical rule applies.
• The interquartile range equals 1.33 standard deviations.

Example 5
Consider the data shown in the histograms in Figure 3. The “raw” data histograms are shown on the left-hand side, while standardized data histograms are shown on the right.

[Figure 3: Histograms of count data (Poisson and standardized Poisson, λ = 3 and λ = 20).]

• If we compute the estimated mean, median, and IQR using the standardized data (with λ = 3), we obtain values 0.0219, 0.0090, and 1.1570, respectively.
• If we compute the estimated mean, median, and IQR using the standardized data (with λ = 20), we obtain values −0.0034, 0.0070, and 1.3421, respectively.
• If we examine the box-plots given in Figure 4, we see that the normal distribution is not a very good approximation to the Poisson distribution with λ = 3.
• The same conclusion can be drawn by examining the normal probability plots given in Figure 5.

[Figure 4: Box-plots of count data (Poisson and standardized Poisson, λ = 3 and λ = 20).]
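(Aside.) The comparison in the first two bullets is easy to repeat on simulated data. The sketch below assumes NumPy is available and, because it draws fresh random samples, it will not reproduce the exact values quoted above, only values close to the normal-theory benchmarks (mean ≈ 0, median ≈ 0, IQR ≈ 1.33).

```python
# Sketch: mean, median, and IQR of standardized Poisson samples, to be compared
# with the normal-theory benchmarks 0, 0, and 1.33. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
for lam in (3, 20):
    x = rng.poisson(lam, size=10_000)
    z = (x - lam) / np.sqrt(lam)               # standardize by theoretical mean/sd
    q75, q25 = np.percentile(z, [75, 25])
    print(f"lambda={lam:2d}: mean={z.mean():+.4f}, "
          f"median={np.median(z):+.4f}, IQR={q75 - q25:.4f}")
```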

[Figure 5: Normal probability plots of count data (standardized Poisson, λ = 3 and λ = 20).]

Constructing a Normal Probability Plot

Definition 3
A normal probability plot can be constructed on ordinary axes by plotting the standardized normal scores zj against x(j), where x(j) is the jth smallest observation in the sample, and the standardized scores zj satisfy

(j − 0.5)/n = P(Z ≤ zj) = Φ(zj)

For example, if (j − 0.5)/n = 0.05, Φ(zj) = 0.05 implies that zj = −1.64.

Table 1 demonstrates calculation of the standardized normal scores for a data sample containing n = 10 observations. Figure 6 shows a plot of x(j) versus zj for the battery-life data given in Table 1.

Table 1: Example showing how to compute the zj's when constructing an NPP with n = 10.

  j    x(j)    (j − 0.5)/10     zj
  1    176     0.05           −1.64
  2    183     0.15           −1.04
  3    185     0.25           −0.67
  4    190     0.35           −0.39
  5    191     0.45           −0.13
  6    192     0.55            0.13
  7    201     0.65            0.39
  8    205     0.75            0.67
  9    214     0.85            1.04
 10    220     0.95            1.64

The resulting zj's are then plotted against the x(j)'s (i.e., the observations ranked from smallest to largest).

[Figure 6: Normal probability plot (NPP) of the battery-life data given in Table 1.]
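(Aside.) The zj column of Table 1 can be reproduced as follows; the sketch assumes SciPy for the inverse normal CDF, and the data are the ordered battery-life values from Table 1.

```python
# Sketch: compute the standardized normal scores z_j of Table 1 and pair them
# with the ordered battery-life data. Assumes SciPy is available.
from scipy.stats import norm

battery_life = [176, 183, 185, 190, 191, 192, 201, 205, 214, 220]  # ordered data
n = len(battery_life)

for j, x_j in enumerate(battery_life, start=1):
    p_j = (j - 0.5) / n          # plotting position (j - 0.5)/n
    z_j = norm.ppf(p_j)          # z_j such that Phi(z_j) = (j - 0.5)/n
    print(f"j={j:2d}  x(j)={x_j}  (j-0.5)/n={p_j:.2f}  z_j={z_j:+.2f}")
# Plotting z_j against x(j) (e.g., with matplotlib) gives the normal probability plot.
```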

Sampling Distributions and the Central Limit Theorem

• A sampling distribution is the probability distribution of a given statistic based on a random sample of size n.
• We have encountered statistics before (e.g., X̄, S², and S).

Definition 4
A statistic is any function of the observations in a random sample.

Definition 5 (Random sample)
The random variables X1, X2, ..., Xn are a random sample of size n if (1) the Xi's are independent random variables, and (2) every Xi has the same probability distribution.

Example 6
Consider tossing a fair die two times and computing the average of the outcomes (i.e., X̄ = (X1 + X2)/2, where Xi is the outcome of the ith toss). Then the sampling distribution of X̄ is determined by looking at all possible samples of size n = 2 and their means (see Table 2). Note that in Table 2 there are a total of 36 possible samples of size n = 2, and there are 11 values that X̄ can take on, that is X̄ ∈ [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0].

The sampling distribution of X̄ is shown in Figure 7, while Figure 8 compares the distribution of X to that for X̄. Note also that E(X̄) = 3.5 and Var(X̄) = 1.46. This implies E(X̄) = µ_X and Var(X̄) = σ²_X / 2, where µ_X and σ²_X denote the mean and variance of X.

[Figure 7: Sampling distribution of X̄ with n = 2 tosses of a single die.]

Table 2: All possible outcomes for the average of two tosses of a fair die. Each cell gives X̄ = (X1 + X2)/2 for the first toss X1 (rows) and the second toss X2 (columns).

 X1 \ X2     1      2      3      4      5      6
    1       1.0    1.5    2.0    2.5    3.0    3.5
    2       1.5    2.0    2.5    3.0    3.5    4.0
    3       2.0    2.5    3.0    3.5    4.0    4.5
    4       2.5    3.0    3.5    4.0    4.5    5.0
    5       3.0    3.5    4.0    4.5    5.0    5.5
    6       3.5    4.0    4.5    5.0    5.5    6.0
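(Aside.) The enumeration in Table 2 takes only a few lines of code; the sketch below uses only the Python standard library and confirms E(X̄) = 3.5 and Var(X̄) = 35/24 ≈ 1.46.

```python
# Sketch: enumerate all 36 equally likely samples of size n = 2 from a fair die,
# tabulate the sampling distribution of the average, and check its mean/variance.
from collections import Counter
from itertools import product

averages = [(x1 + x2) / 2 for x1, x2 in product(range(1, 7), repeat=2)]
dist = {xbar: count / 36 for xbar, count in sorted(Counter(averages).items())}

mean = sum(xbar * p for xbar, p in dist.items())
var = sum((xbar - mean) ** 2 * p for xbar, p in dist.items())

print(dist)         # the 11 possible values of Xbar with probabilities k/36
print(mean, var)    # 3.5 and 35/24 ~= 1.458 (i.e., sigma^2 / 2)
```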

5 3 3.e.Probability Distribution for X Probability 1/6 0 1 2 3 4 5 6 X Probability Distribution for Xïbar 6/36 5/36 Probability 4/36 3/36 2/36 1/36 1 1. average of the outcomes if die is tossed twice).5 2 2. outcome if die is tossed once) and X (i.e..5 5 5.5 6 ¯ Figure 8: Probability distribution of X (i.5 4 Xïbar (n = 2) 4. 11 ..

Sampling Distribution of the Mean

• The probability distribution of X̄ is often called the sampling distribution of the mean.
• In general, the sampling distribution of a statistic depends on (1) the distribution of the population, (2) the size of the sample, and (3) the method of sample selection.

Definition 6
Suppose each of the n observations in a random sample, X1, X2, ..., Xn, is normally distributed with mean µ_X and variance σ²_X. Then X̄ = (Σ_{i=1}^{n} Xi)/n is also normally distributed with mean µ_X̄ = µ_X and variance σ²_X̄ = σ²_X / n.

Definition 7 (Central Limit Theorem)
If X1, X2, ..., Xn is a random sample of size n taken from a population (either finite or infinite) with mean µ_X and finite variance σ²_X, and if X̄ is the sample mean, the limiting form of the distribution of

Z = (X̄ − µ_X) / (σ_X / √n)

as n → ∞ is the standard normal distribution.

• If we are sampling from a population with unknown probability distribution, the sampling distribution of the sample mean will still be approximately normal with mean µ and variance σ²/n, if the sample size n is large.
• In many cases of practical interest, the rule of thumb is that if n ≥ 30, the central limit theorem will work. If n < 30, the central limit theorem will work if the distribution of the population is not grossly non-normal.

Example 7
The foreman of a bottling plant has observed that the amount of soda in each “32-ounce” can is normally distributed with a mean of 32.2 ounces and a standard deviation of 0.3 ounces. If a customer purchases one bottle, what is the probability that the bottle will contain more than 32 ounces? We already know how to do this!

Example 8
The foreman of a bottling plant has observed that the amount of soda in each “32-ounce” can is normally distributed with a mean of 32.2 ounces and a standard deviation of 0.3 ounces. If a customer purchases a 6-pack of soda, what is the probability that the average amount of the 6 cans is more than 32 ounces? To find this probability, we need to know the sampling distribution of X̄.
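(Aside.) One way to carry out the calculations posed in Examples 7 and 8; the numbers in the comments are my own computation, not from the notes, and SciPy is assumed.

```python
# Sketch: Example 7 (one can: X ~ N(32.2, 0.3^2)) and Example 8
# (average of 6 cans: Xbar ~ N(32.2, 0.3^2 / 6)). Assumes SciPy is available.
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 32.2, 0.3, 6
p_single = 1 - norm.cdf((32 - mu) / sigma)               # Example 7
p_average = 1 - norm.cdf((32 - mu) / (sigma / sqrt(n)))  # Example 8

print(f"P(X > 32)    = {p_single:.4f}")    # roughly 0.75
print(f"P(Xbar > 32) = {p_average:.4f}")   # roughly 0.95
```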

Example 9
The Dean of the College of Business claims that the average salary of the school's graduates one year after graduation is $800 per week (µ_X) with a standard deviation of $100 (σ_X). A second-year student would like to check whether the claim about the mean is correct. He takes a random sample of 25 people who graduated one year ago and determines their weekly salary. He discovers the sample mean to be $750. Is this consistent with the Dean's claim?

Sampling Distribution of a Proportion

• The estimator of a population proportion of “successes” is the sample proportion, i.e., we count the number of “successes” in a sample and then compute

P̂ = X / n

where X is the number of “successes” and n is the sample size.
• The normal approximation to the binomial is often used to approximate the sampling distribution of a proportion.
• It can be shown that E(P̂) = µ_P̂ = p and Var(P̂) = σ²_P̂ = p(1 − p)/n (so that σ_P̂ = √(p(1 − p)/n)).

Note 1
For those interested and familiar with expectation and variance operators,

E(P̂) = E(X/n) = (1/n) E(X) = np/n = p

and

Var(P̂) = Var(X/n) = (1/n²) Var(X) = (1/n²) · np(1 − p) = p(1 − p)/n

• Assuming the normal approximation to the binomial distribution is adequate, we can standardize the sample proportion P̂ and reference the standard normal distribution, i.e.,

Z = (P̂ − p) / √(p(1 − p)/n) ≈ N(0, 1)
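(Aside.) A quick Monte Carlo check of the two moment formulas above, assuming NumPy; the sample size and proportion used are illustrative.

```python
# Sketch: simulate many sample proportions and compare their mean and variance
# with E(Phat) = p and Var(Phat) = p(1 - p)/n. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 100, 0.05, 200_000

phat = rng.binomial(n, p, size=reps) / n    # 200,000 simulated sample proportions
print("simulated mean:", phat.mean(), " theory:", p)
print("simulated var :", phat.var(),  " theory:", p * (1 - p) / n)
```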

Example 10
Suppose that bottle caps are manufactured for a local brewery. Suppose you take a random sample of 100 bottle caps from the process and determine the count of “non-conforming” bottle caps contained in the sample. If the true process fraction nonconforming is p = 0.05, what is the probability that the proportion of non-conforming bottle caps in the sample is greater than 0.10? We are interested in the probability

P(P̂ > 0.1) ≈ P( Z > (0.1 − 0.05) / √(0.05(0.95)/100) )
           = P(Z > 2.2942)
           = 1 − P(Z < 2.2942)
           = 1 − Φ(2.2942)
           = 0.0109

Sampling Distribution for Difference in Two Means

• Often the sampling distribution for the difference in two means is of interest, e.g.,
  – In a pharmaceutical study, one may want to determine if two different drugs produce the same effect.
  – In the study of a chemical process, one may want to determine if two different temperatures result in the same yield.
  – In a comparison of universities, one may be interested in comparing the average salaries of their graduates.
• Suppose you take two random samples: one from a normal population with mean µ1 and variance σ²1 and another from a normal population with mean µ2 and variance σ²2.
• Then the difference between the two sample means, X̄1 − X̄2, is also normally distributed with

E(X̄1 − X̄2) = µ_{X̄1 − X̄2} = µ1 − µ2

and

Var(X̄1 − X̄2) = σ²_{X̄1 − X̄2} = σ²1/n1 + σ²2/n2
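(Aside.) These two formulas are easy to verify by simulation; the populations and sample sizes below are hypothetical choices of my own, and NumPy is assumed.

```python
# Sketch: Monte Carlo check that Xbar1 - Xbar2 has mean mu1 - mu2 and variance
# sigma1^2/n1 + sigma2^2/n2. Parameters are illustrative, not from the notes.
import numpy as np

rng = np.random.default_rng(2)
mu1, sigma1, n1 = 5.0, 2.0, 8
mu2, sigma2, n2 = 3.0, 1.5, 12
reps = 100_000

xbar1 = rng.normal(mu1, sigma1, size=(reps, n1)).mean(axis=1)
xbar2 = rng.normal(mu2, sigma2, size=(reps, n2)).mean(axis=1)
diff = xbar1 - xbar2

print("simulated mean:", diff.mean(), " theory:", mu1 - mu2)
print("simulated var :", diff.var(),  " theory:", sigma1**2 / n1 + sigma2**2 / n2)
```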

• Note that σ_{X̄1 − X̄2} = √(σ²1/n1 + σ²2/n2) is also called the standard error of the difference between two means.
• Note that if both of the populations are NOT normally distributed, but the sample sizes are “large” (> 30), then by the central limit theorem the sampling distribution of X̄1 − X̄2 is approximately normal.

Example 11
Starting salaries for MBA graduates at two universities are normally distributed with the means and standard deviations given in the table below. If graduates are selected at random from each university, what is the sampling distribution of X̄1 − X̄2?

                     University 1    University 2
 Mean (µ)            $62,000/yr      $60,000/yr
 Std. Dev. (σ)       $14,500/yr      $18,300/yr
 Sample Size (n)     50              60
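(Aside.) The answer to Example 11 follows directly from the formulas above; the sketch below is my own computation of the numbers.

```python
# Sketch: sampling distribution of Xbar1 - Xbar2 for Example 11.
# Mean is mu1 - mu2; standard error is sqrt(sigma1^2/n1 + sigma2^2/n2).
from math import sqrt

mu1, sigma1, n1 = 62_000, 14_500, 50
mu2, sigma2, n2 = 60_000, 18_300, 60

mean_diff = mu1 - mu2
se_diff = sqrt(sigma1**2 / n1 + sigma2**2 / n2)

print(f"Xbar1 - Xbar2 ~ N({mean_diff}, {se_diff:.1f}^2)")  # N(2000, ~3128^2)
```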

Practice Problems

1. A machine is used to fill containers with a liquid product. A random sample of 10 containers is selected and the net contents (oz) are as follows: 12.03, 12.01, 12.04, 12.02, 12.05, 11.98, 11.96, 12.02, 12.05, and 11.99.
   (a) If fill volume is known to have a mean of 12 oz and a standard deviation of 0.03 oz, what is the probability that you will see a sample average greater than x̄? Assume the fill volume is normally distributed.
   (b) Does the assumption of normality seem appropriate for the fill volume data? (Note: you must justify your answer by assessing the normality of the data.)

2. The inside diameters of bearings used in an aircraft landing gear assembly are known to have a mean of 8.25 cm and a standard deviation of 0.005 cm. Suppose you take a random sample of 15 bearings and compute x̄ = 8.2535. What is the probability that you will see a sample mean greater than x̄?

3. A random sample of 200 printed circuit boards contains 18 defective or nonconforming units.
   (a) What is the sample process fraction nonconforming?
   (b) If the true process fraction nonconforming is p = 0.05, what is the probability you will observe a sample process fraction nonconforming less than 0.0150?

4. Suppose you take random samples of sizes n1 = 10 and n2 = 5 from two different normal populations. Suppose the first population has mean µ1 = 25 and variance σ²1 = 5, and the second population has mean µ2 = 32 and variance σ²2 = 7. Suppose that you compute the sample means of each sample and obtain x̄1 = 24.75 and x̄2 = 29.2. What is the probability of observing a difference in the sample means less than that observed? (i.e., you are looking for P(X̄1 − X̄2 < −4.45).)
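(Aside, not part of the original problem set.) For readers who want to check their work numerically, the sketch below sets up the calculation for practice problem 4; SciPy is assumed, and the same pattern applies to the other problems.

```python
# Sketch: numerical check for practice problem 4. Under the stated model,
# Xbar1 - Xbar2 ~ N(mu1 - mu2, sigma1^2/n1 + sigma2^2/n2). Assumes SciPy.
from math import sqrt
from scipy.stats import norm

mu1, var1, n1 = 25, 5, 10
mu2, var2, n2 = 32, 7, 5

mean_diff = mu1 - mu2                     # -7
sd_diff = sqrt(var1 / n1 + var2 / n2)     # sqrt(0.5 + 1.4)

print(norm.cdf(-4.45, loc=mean_diff, scale=sd_diff))   # P(Xbar1 - Xbar2 < -4.45)
```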