Introduction to statistics

Bioinformatics-II Computational Sciences April 2010
Prof. dr. Antoine van Kampen
1. Bioinformatics Laboratory, AMC
2. BioSystems Data Analysis, UvA
a.h.vankampen@amc.uva.nl

Descriptive Statistics

Describing data

              Moment-based                     Non-moment-based measure
Center        Mean                             Mode, median
Spread        Variance (standard deviation)    Range (max - min), interquartile range (1st-3rd quartile)
Skew          Skewness                         ---
Peaked        Kurtosis                         ---

Quartile
- In descriptive statistics, a quartile is any of the three values which divide the sorted data set into four equal parts, so that each part represents 1/4 of the sample or population.
  - first quartile (Q1) = lower quartile: cuts off the lowest 25% of the data (25th percentile)
  - second quartile (Q2) = median: cuts the data set in half (50th percentile)
  - third quartile (Q3) = upper quartile: cuts off the highest 25% of the data, or the lowest 75% (75th percentile)

- The difference between the upper and lower quartiles is called the interquartile range.

Variance and standard deviation of a sample

Variance: $s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}$    Standard deviation: $s = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}}$

Degrees of freedom: in statistics, the term degrees of freedom (df) is a measure of the number of independent pieces of information on which the precision of a parameter estimate is based. See http://www.jerrydallal.com/LHSP/dof.htm and Jack Good's 1973 article in The American Statistician, "What Are Degrees of Freedom?" 27, 227-228.
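A quick numeric check of these formulas (a sketch in Python; the five sample values are the electrolyte resistivity readings that appear later in these slides):

```python
import numpy as np

x = np.array([1400, 1450, 1375, 1500, 1550])

n = x.size
mean = x.sum() / n
var = ((x - mean) ** 2).sum() / (n - 1)   # divide by df = n - 1, not n
sd = var ** 0.5

# NumPy's ddof argument sets the df correction: ddof=1 gives the sample variance
assert np.isclose(var, np.var(x, ddof=1))
print(mean, var, sd)   # 1455.0, 5125.0, ~71.6
```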

Skewness
[Figure: skewed frequency distributions (frequency vs. value).]

Box-whisker plots

Quality of a sampling estimate: precision & validity
- No precision: random error
- Precision but no validity: systematic error (bias)

Distributions
- Normal, binomial, Poisson, hypergeometric, t-distribution, chi-square
- What parameters describe their shapes
- How these distributions can be useful

Normal distribution

The Normal Distribution
- Also called a "Gaussian" distribution
- Centered around the mean μ, with a width determined by the standard deviation σ
- Total area under the curve = 1.0

$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / 2\sigma^2}$

A Normal Distribution
[Figure: a normal distribution with a mean of 5 and a standard deviation of 1.]

What Does a Normal Distribution Describe?
- Imagine that you go to the lab and very carefully measure out 5 ml of liquid and weigh it. You won't get the same answer every time.
- Imagine repeating this process many times: if you make a lot of measurements, a histogram of your measurements will approach the appearance of a normal distribution.
- It describes any situation in which the exact value of a continuous variable is altered randomly from trial to trial: the random uncertainty or random error.

How Do You Use The Normal Distribution?
- Use the area UNDER the normal distribution.
- For example, the area under the curve between x = a and x = b is the probability that your next measurement of x will fall between a and b.

[Figure: a normal distribution with a mean of 75 and a standard deviation of 10. The shaded area contains 95% of the area and extends from 55.4 to 94.6. For all normal distributions, 95% of the area is within 1.96 standard deviations of the mean. The area is determined by integration of the normal distribution.]

Integration

How Do You Get μ and σ?
- To draw a normal distribution you must know μ and σ: $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/2\sigma^2}$
- If you made an infinite number of measurements, their mean would be μ and their standard deviation would be σ. In practice, you have a finite number of measurements, with mean $\bar{x}$ and standard deviation s.
- For now, μ and σ will be given. Later we'll use $\bar{x}$ and s to estimate μ and σ.

The Standard Normal Distribution
- It is tedious to 'integrate' a new normal distribution for every population, so use a 'standard normal distribution' with standard tabulated areas.
- Convert your measurement x to a standard score (z-score): $z = (x - \mu)/\sigma$
- Use the standard normal distribution, with μ = 0 and σ = 1 (areas tabulated in any statistics textbook).
- The z-score indicates the number of standard deviations that value x is away from the mean μ.

Probability density function: the z-transform.
[Figure: the green curve is the standard normal distribution.]

Standard Normal Distribution
[Figure: the area to the right of z = 1.96 is 2.5%.]

Standard Normal Distribution

Example: mean = 5, std = 3; what is P(x ≥ 10)?
- z-transform of the mean: z = (5 − 5)/3 = 0
- z-transform of the other value: z = (10 − 5)/3 ≈ 1.67
- On the standard normal distribution (mean = 0, std = 1), P(z ≥ 1.67) ≈ 4.8%, so P(x ≥ 10) ≈ 4.8%.

Exercises 1
- If scores are normally distributed with a mean of 30 and a standard deviation of 5, what percent of the scores is: (a) greater than 30? (b) greater than 37? (c) between 28 and 34?
- What proportion of a normal distribution is within one standard deviation of the mean?
- What proportion is more than 1.8 standard deviations from the mean?
- A test is normally distributed with a mean of 40 and a standard deviation of 7. What value would be needed to be in the 85th percentile?
Stat tables: http://www.statsoft.com/textbook/sttable.html
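One hedged way to check these exercise answers (not part of the original slides) is scipy.stats.norm, whose sf, cdf, and ppf methods give upper-tail areas, lower-tail areas, and percentiles:

```python
from scipy.stats import norm

# Exercise 1 (a)-(c): mean 30, sd 5
print(norm.sf(30, loc=30, scale=5))               # P(X > 30) = 0.5
print(norm.sf(37, loc=30, scale=5))               # P(X > 37) ~ 0.081
print(norm.cdf(34, 30, 5) - norm.cdf(28, 30, 5))  # P(28 < X < 34) ~ 0.44

# proportion within one sd of the mean, and more than 1.8 sd away
print(norm.cdf(1) - norm.cdf(-1))                 # ~0.683
print(2 * norm.sf(1.8))                           # ~0.072

# 85th percentile of N(40, 7)
print(norm.ppf(0.85, loc=40, scale=7))            # ~47.3
```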

Binomial distribution

What Does the Binomial Distribution Describe?
- Yes/no experiments (two possible outcomes), e.g.:
- The probability of getting all "tails" if you throw a coin three times
- The probability of getting all male puppies in a litter of 8
- The probability of getting two defective batteries in a package of six

Exercise 2
What is the probability of getting one '2' when you roll six dice?

The Binomial Distribution
$P(k) = \binom{n}{k} p^k (1-p)^{n-k}$ — the probability of getting the result of interest k times out of n, if the overall probability of the result is p; $\binom{n}{k}$ is the binomial coefficient.
- Note that here, k is a discrete variable (integer values only).

Binomial Distribution
- n = 6 (number of dice rolled)
- p = 1/6 (probability of rolling a 2)
- k = [0 1 2 3 4 5 6] (# of 2s out of 6)
P(k = 1) = 0.402 (the answer to Exercise 2).

Binomial Distribution
- n = 8 (number of puppies in the litter)
- p = 1/2 (probability of any pup being male)
- k = [0 1 2 3 4 5 6 7 8] (# of males out of 8)

The Shape of the Binomial Distribution
- The shape is determined by the values of n and p:
  - only truly symmetric if p = 0.5
  - approaches a normal distribution if n is large, unless p is very small
- Mean number of "successes" is np
- Variance of the distribution is var(X) = np(1 − p)
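A minimal sketch of these formulas with scipy.stats.binom, using the dice setup of Exercise 2:

```python
from scipy.stats import binom

n, p = 6, 1 / 6                # six dice, P(rolling a 2) = 1/6
print(binom.pmf(1, n, p))      # P(exactly one '2') ~ 0.402
print(binom.mean(n, p))        # np = 1.0
print(binom.var(n, p))         # np(1-p) ~ 0.833
```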

Exercise 3
While you are in the bathroom, your little brother claims to have rolled a "Yahtzee" in 6's (five dice all 6's) in one roll of the five dice. How justified would you be in beating him up for cheating?

Poisson distribution
$P_n(\mu) = \frac{\mu^n e^{-\mu}}{n!}$ — probability of getting n counts (n = 0, 1, 2, ...)
μ = average of the distribution; variance = mean.

Poisson distribution
[Figure: randomly placed dots over 50 scale divisions; on average μ = 1 dot per interval.]
$P_n(\mu) = \frac{\mu^n e^{-\mu}}{n!}$ — probability of getting n counts; μ = average of the distribution.

Exercise 4
$P_n(\mu) = \frac{\mu^n e^{-\mu}}{n!}$ — probability of getting n counts; μ = average of the distribution.
Average number of phone calls in 1 hour = 2.1. What is the probability of getting 4 calls?

Exercise 5
Average number of phone calls in 1 hour = 2.1. What is the probability of getting 0 calls? Does this simplify the formula?
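Both exercises can be checked with scipy.stats.poisson (a sketch, not part of the original slides):

```python
from math import exp
from scipy.stats import poisson

mu = 2.1                        # average calls per hour
print(poisson.pmf(4, mu))       # Exercise 4: P(4 calls) ~ 0.099
print(poisson.pmf(0, mu))       # Exercise 5: P(0 calls) ~ 0.122
print(exp(-mu))                 # for n = 0 the formula collapses to e^{-mu}
```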

Hypergeometric distribution

Hypergeometric Distribution
- Suppose that we have an urn with N balls in it; of these, m are yellow and the others are blue.
- Then k balls are drawn from the urn without replacement, and of these X are observed to be yellow.
- X is a random variable following a hypergeometric distribution.
- What is the probability of observing X = 6 yellow balls? N = 20, m = n = 10, draw k = 10 balls, X = 6.

Hypergeometric Distribution
N = 20, m = n = 10, k = 10.

         Balls drawn   Remained in urn   Total
White    x             m - x             m
Black    k - x         n - k + x         n
Total    k             N - k             N

$P(X = x) = \frac{\binom{m}{x}\binom{n}{k-x}}{\binom{N}{k}} = \frac{m!\,n!\,k!\,(N-k)!}{N!\,x!\,(m-x)!\,(k-x)!\,(n-k+x)!}$

[Figure: P(X) plotted for X = 0 ... 10.]

Fisher's Exact Test
- We often want to ask whether there are more white balls in the sample than expected by chance (same 2x2 table as above, with observed count x').

$P(X \ge x') = \sum_{x = x'}^{\min(m,\,k)} \frac{m!\,n!\,k!\,(N-k)!}{N!\,x!\,(m-x)!\,(k-x)!\,(n-k+x)!}$

- If the probability is small, it is less likely that we get the result by chance.

Hypergeometric example
- We extracted 36 samples from a leukemia microarray dataset.
- Whole dataset: 47 ALL + 25 AML (total 72 samples).
How many ALL and AML samples do you expect when you randomly select samples from the dataset?

Hypergeometric example
- We extracted 36 samples from a leukemia microarray dataset.
- Whole dataset: 47 ALL + 25 AML (total 72 samples).
How many ALL and AML samples do you expect when you randomly select samples from the dataset?
Answer: 23.5 ALL, 12.5 AML; ratio = 1.88 (original ratio = 47/25 = 1.88).

Hypergeometric example
- We extracted 36 samples from a leukemia microarray dataset.
- Whole dataset: 47 ALL + 25 AML (total 72 samples); extracted: 29 ALL + 7 AML.
Is this sample enriched for ALL samples?

               ALL   AML   Total
Extracted      29    7     36
Not extracted  18    18    36
Total          47    25    72

$\Pr(\text{extracted ALL} \ge 29) = \sum_{i=29}^{36} \frac{\binom{47}{i}\binom{25}{36-i}}{\binom{72}{36}} = 0.006$

Conclusion: this sample is significantly enriched with ALL samples — possible bias in sample selection.
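The enrichment p-value can be reproduced with scipy.stats.hypergeom — a sketch; note SciPy's argument convention (population size M, number of marked items n, sample size N):

```python
from scipy.stats import hypergeom

M, n, N = 72, 47, 36                 # 72 samples total, 47 ALL, 36 drawn
rv = hypergeom(M, n, N)

# P(X >= 29) = P(X > 28) = sf(28)
print(rv.sf(28))                     # ~0.006, matching the slide

# the urn example: N=20 balls, m=10 yellow, draw k=10, P(X = 6)
print(hypergeom(20, 10, 10).pmf(6))  # ~0.24
```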

Sampling Distribution
- Every time we take a random sample and calculate a statistic, the value of the statistic changes (remember, a statistic is a random variable).
- If we continue to take random samples and calculate a given statistic over time, we will build up a distribution of values for the statistic. This distribution is referred to as a sampling distribution.
- A sampling distribution is a distribution that describes the chance fluctuations of a statistic calculated from a random sample.

Sampling Distribution of the Mean
- The probability distribution of $\bar{X}$ is called the sampling distribution of the mean.
- The distribution of $\bar{X}$, for a given sample size n, describes the variability of sample averages around the population mean $\mu_x$.

Sampling Distribution of the Mean
- If a random sample of size n is taken from a normal population having mean $\mu_x$ and variance $\sigma_x^2$, then $\bar{X}$ is a random variable which is also normally distributed, with mean $\mu_x$ and variance $\sigma_x^2/n$.
- Further, $Z = \frac{\bar{X} - \mu_x}{\sigma_x/\sqrt{n}}$ is a standard normal random variable.

Sampling Distribution of the Mean
[Figure: four histograms over the range 80-120. Original population: n(100, 5). Averages, sample size 2: n(100, 3.54), since 5/sqrt(2) = 3.54. Averages, sample size 10: n(100, 1.58). Averages, sample size 25: n(100, 1).]

Sampling Distribution of the Mean
- Example: A manufacturer of steel rods claims that the length of his bars follows a normal distribution with a mean of 30 cm and a standard deviation of 0.5 cm.
(a) Assuming that the claim is true, what is the probability that a given bar will exceed 30.1 cm? (z = (30.1 − 30)/0.5 = 0.2, p = 0.42)
(b) Assuming the claim is true, what is the probability that the mean of 10 randomly chosen bars will exceed 30.1 cm? (z = (30.1 − 30)/(0.5/sqrt(10)) = 0.63, p = 0.26)
(c) Assuming the claim is true, what is the probability that the mean of 100 randomly chosen bars will exceed 30.1 cm? (z = (30.1 − 30)/(0.5/sqrt(100)) = 2, p = 0.02)

[Figure: steel bar lengths (cm), 28.5-31.5: distributions of single bars, of means of 10 bars, and of means of 100 bars, with shaded upper-tail areas 0.42, 0.26, and 0.02 respectively.]
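A sketch checking parts (a)-(c) with the standard normal distribution (Python, scipy assumed):

```python
from math import sqrt
from scipy.stats import norm

mu, sigma = 30.0, 0.5
for n in (1, 10, 100):
    se = sigma / sqrt(n)            # standard error of the mean
    z = (30.1 - mu) / se
    print(n, round(z, 2), round(norm.sf(z), 2))
# 1    0.2   0.42
# 10   0.63  0.26
# 100  2.0   0.02
```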

Properties of the Sample Mean as Estimator of the Population Mean
- The expected value of the sample mean is the population mean: $E(\bar{X}) = \mu$
- The mean has variance $\sigma_{\bar{x}}^2 = \sigma^2/n$; the standard error is $\sigma_{\bar{x}} = \sigma/\sqrt{n}$.
- As n increases, $\sigma_{\bar{x}}$ decreases.


When the Population is Normal, the Sampling Distribution is Also Normal
Population distribution: μ = 50, σ = 10.
Central tendency: $\mu_{\bar{x}} = \mu = 50$. Variation: $\sigma_{\bar{x}} = \sigma/\sqrt{n}$.
Sampling distributions: n = 4 gives $\sigma_{\bar{x}}$ = 5; n = 16 gives $\sigma_{\bar{x}}$ = 2.5.

Central Limit Theorem
As the sample size gets large enough, the sampling distribution of $\bar{X}$ becomes almost normal regardless of the shape of the population.

When the Population is Not Normal
Population distribution: μ = 50, σ = 10.
Central tendency: $\mu_{\bar{x}} = \mu = 50$. Variation: $\sigma_{\bar{x}} = \sigma/\sqrt{n}$.
Sampling distributions: n = 4 gives $\sigma_{\bar{x}}$ = 5; n = 30 gives $\sigma_{\bar{x}}$ = 1.8.

Central Limit Theorem
- Rule of thumb: the normal approximation for $\bar{X}$ will be good if n > 30. If n < 30, the approximation is only good if the population from which you are sampling is not too different from normal.
- Otherwise: use the t-distribution.

t-Distribution
- So far, we have been assuming that we knew the value of σ. This may be true if one has a large amount of experience with a certain process.
- However, it is often true that one is estimating μ and σ from the same set of data, using $\hat{\sigma} = s = \sqrt{\frac{\sum (X_i - \bar{X})^2}{n-1}}$.

t-Distribution
- To allow for such a situation, we will consider the t statistic $T = \frac{\bar{X} - \mu_x}{S/\sqrt{n}}$, which follows a t-distribution; $S/\sqrt{n}$ is the standard error of the mean.

t-Distribution
[Figure: t-distributions for n = 3 and n = 6, together with t(n = ∞) = Z, plotted over t from -4 to 4.]

t-Distribution
- If $\bar{X}$ is the mean of a random sample of size n taken from a normal population having mean μ and variance σ², and $S^2 = \frac{\sum (X_i - \bar{X})^2}{n-1}$, then $t = \frac{\bar{X} - \mu}{S/\sqrt{n}}$ is a random variable following the t-distribution with parameter ν = n − 1, where ν is the degrees of freedom.

t-Distribution
- The t-distribution has been tabularized: $t_\alpha$ represents the t-value that has an area of α to the right of it.
- Note, due to symmetry: $t_{1-\alpha} = -t_\alpha$.
[Figure: t(n = 3) density from -4 to 4, with $t_{.95}$, $t_{.80}$, $t_{.20}$, and $t_{.05}$ marked.]

Example: t-Distribution
- The resistivity of batches of electrolyte follows a normal distribution. We sample 5 batches and get the following readings: 1400, 1450, 1375, 1500, 1550.
- $\bar{X} = 1455$, $S = 72$.
- Does this data support or refute a population average of 1400?

Example: t-Distribution
$t = \frac{\bar{X} - \mu}{S/\sqrt{n}} = \frac{1455 - 1400}{72/\sqrt{5}} = 1.71$
The boundary between "support" and "refute" on the t(n = 5) distribution is t = 2.78 (p = 0.025 in the tail). Since 1.71 < 2.78, the data support a population average of 1400.
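The same conclusion can be checked in code (a sketch; scipy.stats.ttest_1samp performs the whole calculation from the raw readings):

```python
from scipy.stats import t, ttest_1samp

readings = [1400, 1450, 1375, 1500, 1550]
res = ttest_1samp(readings, popmean=1400)
print(res.statistic)          # ~1.72 (the slide rounds S to 72 and gets 1.71)
print(t.ppf(0.975, df=4))     # critical value ~2.78
print(res.pvalue)             # ~0.16 > 0.05, so the data support mu = 1400
```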

Introduction to Hypothesis Testing

Nonstatistical Hypothesis Testing…
- A criminal trial is an example of hypothesis testing without the statistics.
- In a trial a jury must decide between two hypotheses. The null hypothesis is H0: the defendant is innocent.
- The alternative hypothesis, or research hypothesis, is H1: the defendant is guilty.
- The jury does not know which hypothesis is true. They must make a decision on the basis of the evidence presented.

Nonstatistical Hypothesis Testing…
- In the language of statistics, convicting the defendant is called rejecting the null hypothesis in favor of the alternative hypothesis.
  - That is, the jury is saying that there is enough evidence to conclude that the defendant is guilty (i.e., there is enough evidence to support the alternative hypothesis).
- If the jury acquits, it is stating that there is not enough evidence to support the alternative hypothesis.
  - Notice that the jury is not saying that the defendant is innocent, only that there is not enough evidence to support the alternative hypothesis.
  - That is why we never say that we accept the null hypothesis.

Nonstatistical Hypothesis Testing…
- There are two possible errors.
- A Type I error occurs when we reject a true null hypothesis. That is, a Type I error occurs when the jury convicts an innocent person.
- A Type II error occurs when we don't reject a false null hypothesis. That occurs when a guilty defendant is acquitted.

Nonstatistical Hypothesis Testing…
- The probability of a Type I error is denoted as α.
- The probability of a Type II error is β.
- The two probabilities are inversely related: decreasing one increases the other.

Nonstatistical Hypothesis Testing…
The critical concepts are these:
1. There are two hypotheses, the null and the alternative hypotheses. The null hypothesis (H0) will always state that the parameter equals the value specified in the alternative hypothesis (H1).
2. The procedure begins with the assumption that the null hypothesis is true.
3. The goal is to determine whether there is enough evidence to infer that the alternative hypothesis is true.
4. There are two possible decisions:
   - Conclude that there is enough evidence to support the alternative hypothesis.
   - Conclude that there is not enough evidence to support the alternative hypothesis.

- Hypothesis testing is a procedure for making inferences about a population (population/parameter → sample/statistic → inference).
- Hypothesis testing allows us to determine whether enough statistical evidence exists to conclude that a belief (i.e., hypothesis) about a parameter is supported by the data.

Concepts of Hypothesis Testing…
- Example: the mean demand for computers during assembly lead time. Rather than estimate the mean demand, our operations manager wants to know whether the mean is different from 350 units. We can rephrase this request into a test of the hypothesis:
- H0: μ = 350
- Thus, our research hypothesis becomes H1: μ ≠ 350 — this is what we are interested in determining.

Concepts of Hypothesis Testing…
- The testing procedure begins with the assumption that the null hypothesis is true.
- Thus, until we have further statistical evidence, we will assume: H0: μ = 350 (assumed to be TRUE).

Concepts of Hypothesis Testing…
- The goal of the process is to determine whether there is enough evidence to infer that the alternative hypothesis is true.
- That is, is there sufficient statistical information to determine if this statement, H1: μ ≠ 350, is true? This is what we are interested in determining.

Concepts of Hypothesis Testing…
- There are two possible decisions that can be made:
  - Conclude that there is enough evidence to support the alternative hypothesis (also stated as: rejecting the null hypothesis in favor of the alternative).
  - Conclude that there is not enough evidence to support the alternative hypothesis (also stated as: not rejecting the null hypothesis in favor of the alternative).
- NOTE: we do not say that we accept the null hypothesis.

Concepts of Hypothesis Testing…
- Once the null and alternative hypotheses are stated, the next step is to randomly sample the population and calculate a test statistic (in this example, the sample mean).
- If the test statistic's value is inconsistent with the null hypothesis, we reject the null hypothesis and infer that the alternative hypothesis is true.
- For example, if we're trying to decide whether the mean is not equal to 350: if $\bar{x}$ is close to 350 (say, 355), we could not say that this provides a great deal of evidence to infer that the population mean is different from 350; a large value of $\bar{x}$ (say, 600) would provide enough evidence.

Concepts of Hypothesis Testing…
- Two possible errors can be made in any test:
  - a Type I error occurs when we reject a true null hypothesis, and
  - a Type II error occurs when we don't reject a false null hypothesis.
- There are probabilities associated with each type of error: P(Type I error) = α and P(Type II error) = β.
- α is called the significance level.

Types of Errors…
- A Type I error occurs when we reject a true null hypothesis (i.e., reject H0 when it is TRUE).
- A Type II error occurs when we don't reject a false null hypothesis (i.e., do NOT reject H0 when it is FALSE).

Example
- A department store manager determines that a new billing system will be cost-effective only if the mean monthly account is more than €170.
- A random sample of 400 monthly accounts is drawn, for which the sample mean is €178. The accounts are approximately normally distributed with a standard deviation of €65.
- Can we conclude that the new system will be cost-effective?

Example
- The system will be cost-effective if the mean account balance for all customers is greater than €170. We express this belief as our research hypothesis, that is:
- H1: μ > 170 (this is what we want to determine)
- Thus, our null hypothesis becomes: H0: μ = 170 (this specifies a single value for the parameter of interest).

Example
What we want to show: H1: μ > 170; H0: μ = 170 (we'll assume this is true).
We know: n = 400, $\bar{x}$ = 178, σ = 65.
Hmm. What to do next?

Example
- To test our hypotheses, we can use two different approaches:
  - the rejection region approach (typically used when computing statistics manually), and
  - the p-value approach (which is generally used with a computer and statistical software).
- We will explore both in turn…

Example — Rejection Region…
- The rejection region is a range of values such that if the test statistic falls into that range, we decide to reject the null hypothesis in favor of the alternative hypothesis.
- $\bar{x}_L$ is the critical value of $\bar{x}$ needed to reject H0.

Example
- It seems reasonable to reject the null hypothesis in favor of the alternative if the value of the sample mean is large relative to 170, that is, if $\bar{x} > \bar{x}_L$.
- $\alpha = P(\bar{X} > \bar{x}_L)$ is also = P(rejecting H0 given that H0 is true) = P(Type I error).

Example
- All that's left to do is calculate $\bar{x}_L$ and compare it to 178.
- We can calculate this based on any level of significance (α) we want…

Example
- At a 5% significance level (i.e., α = 0.05), we get $P(\bar{X} > \bar{x}_L) = 0.05$. Solving, we compute $\bar{x}_L = 170 + 1.645 \cdot 65/\sqrt{400} = 175.34$.
- Since our sample mean (178) is greater than the critical value we calculated (175.34), we reject the null hypothesis in favor of H1, i.e., we conclude that μ > 170 and that it is cost-effective to install the new billing system.

Example — The Big Picture…
H1: μ > 170; H0: μ = 170. Reject H0 in favor of H1, since $\bar{x}$ = 178 > $\bar{x}_L$ = 175.34.

Standardized Test Statistic…
- An easier method is to use the standardized test statistic $z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}}$ and compare its result to $z_\alpha$ (rejection region: z > $z_\alpha$).
- Since z = 2.46 > 1.645 (= z.05), we reject H0 in favor of H1…

p-Value
- The p-value of a test is the probability of observing a test statistic at least as extreme as the one computed, given that the null hypothesis is true.
- In the case of our department store example: what is the probability of observing a sample mean at least as extreme as the one already observed (i.e., $\bar{x}$ = 178), given that the null hypothesis (H0: μ = 170) is true? That probability is the p-value.

Interpreting the p-value…
- The smaller the p-value, the more statistical evidence exists to support the alternative hypothesis.
- We observe a p-value of .0069; hence there is evidence to support H1: μ > 170.

Interpreting the p-value…
[Scale from 0 upward: below .01, overwhelming evidence (highly significant); .01-.05, strong evidence (significant); .05-.10, weak evidence (not significant); above .10, no evidence (not significant). Our p = .0069 falls in the "overwhelming" range.]

Interpreting the p-value…
- Compare the p-value with the selected value of the significance level:
- If the p-value is less than α, we judge the p-value to be small enough to reject the null hypothesis.
- If the p-value is greater than α, we do not reject the null hypothesis.
- Since p-value = .0069 < α = .05, we reject H0 in favor of H1.

Another example…
- The objective of the study is to draw a conclusion about the mean payment period. Thus, the parameter to be tested is the population mean.
- We want to know whether there is enough statistical evidence to show that the population mean is less than 22 days.
- Thus, the alternative hypothesis is H1: μ < 22, and the null hypothesis is H0: μ = 22.

Another example…
- The test statistic is $z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}}$.
- We wish to reject the null hypothesis in favor of the alternative only if the sample mean, and hence the value of the test statistic, is small enough. As a result we locate the rejection region in the left tail of the sampling distribution.
- We set the significance level at 10%.

Another example…
- Rejection region: $z < -z_\alpha = -z_{.10} = -1.28$
- From the data: $\bar{x} = \frac{\sum x_i}{220} = \frac{4759}{220} = 21.63$, and $z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} = \frac{21.63 - 22}{6/\sqrt{220}} = -0.91$
- p-value = P(Z < −0.91) = 0.1814
- Conclusion: there is not enough evidence to infer that the mean is less than 22.
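A sketch of this left-tail z-test using the summary statistics above:

```python
from math import sqrt
from scipy.stats import norm

xbar, mu0, sigma, n = 21.63, 22, 6, 220
z = (xbar - mu0) / (sigma / sqrt(n))
print(z)                 # ~ -0.91
print(norm.cdf(z))       # left-tail p-value ~ 0.18 > 0.10 -> do not reject H0
print(-norm.ppf(0.90))   # rejection boundary -z_.10 ~ -1.28
```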

One- and Two-Tail Testing…
- The department store example was a one-tail test, because the rejection region is located in only one tail of the sampling distribution.
- More correctly, this was an example of a right-tail test.

One- and Two-Tail Testing…
- The 'payment period' example is a left-tail test, because the rejection region was located in the left tail of the sampling distribution.

Right-Tail Testing…
- Calculate the critical value of the mean ($\bar{x}_L$) and compare against the observed value of the sample mean ($\bar{x}$)…

Left-Tail Testing…
- Calculate the critical value of the mean ($\bar{x}_L$) and compare against the observed value of the sample mean ($\bar{x}$)…

Two-Tail Testing…
- Two-tail testing is used when we want to test a research hypothesis that a parameter is not equal (≠) to some value.

Example
- KPN argues that its rates are such that customers won't see a difference in their phone bills between them and their competitors. They calculate the mean and standard deviation for all their customers at €17.09 and €3.87 (respectively).
- They then sample 100 customers at random and recalculate a monthly phone bill based on competitors' rates.
- What we want to show is whether or not H1: μ ≠ 17.09. We do this by assuming that H0: μ = 17.09.

Example
- The rejection region is set up so we can reject the null hypothesis when the test statistic is large or when it is small.
- That is, we set up a two-tail rejection region. The total area in the rejection region must sum to α, so we divide this probability by 2.

Example
- At a 5% significance level (i.e., α = .05), we have α/2 = .025. Thus z.025 = 1.96 and our rejection region is: z < −1.96 or z > 1.96.

Example
- From the data, we calculate $\bar{x}$ = 17.55.
- Using our standardized test statistic, we find that z = (17.55 − 17.09)/(3.87/√100) = 1.19.
- Since z = 1.19 is not greater than 1.96, nor less than −1.96, we cannot reject the null hypothesis in favor of H1.
- There is insufficient evidence to infer that there is a difference between the bills of KPN and the competitor.
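A sketch of the two-tail calculation from the slide's summary statistics:

```python
from math import sqrt
from scipy.stats import norm

xbar, mu0, sigma, n = 17.55, 17.09, 3.87, 100
z = (xbar - mu0) / (sigma / sqrt(n))
p_two_tail = 2 * norm.sf(abs(z))
print(z, p_two_tail)   # z ~ 1.19, p ~ 0.23 -> cannot reject H0 at alpha = 0.05
```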

Summary of One- and Two-Tail Tests…
[Figure: rejection regions for a one-tail test (left tail), a two-tail test, and a one-tail test (right tail).]

Probability of a Type II Error — β
- It is important that we understand the relationship between Type I and Type II errors: how the probability of a Type II error is calculated, and its interpretation.
- Recall the previous example: H0: μ = 170, H1: μ > 170.
- At a significance level of 5% we rejected H0 in favor of H1, since our sample mean (178) was greater than the critical value $\bar{x}_L$ (175.34).

Probability of a Type II Error — β
- A Type II error occurs when a false null hypothesis is not rejected.
- In our example, this means that if $\bar{x}$ is less than 175.34 (our critical value) we will not reject our null hypothesis, which means that we will not install the new billing system.
- Thus, we can see that: β = P($\bar{x}$ < 175.34, given that the null hypothesis is false).

Example
- β = P($\bar{x}$ < 175.34, given that the null hypothesis is false).
- We need to compute β for some new value of μ. For example, suppose the true mean account balance is €180.
- Thus, β = P($\bar{x}$ < 175.34, given that μ = 180)…

Example
[Figure: the sampling distribution under our original hypothesis (μ = 170) and under our new assumption (μ = 180); β is the area of the second curve below the critical value 175.34.]

Effects on β of Changing α
- Decreasing the significance level α increases the value of β, and vice versa.
- Consider this diagram again: shifting the critical value line to the right (to decrease α) will mean a larger area under the lower curve for β… (and vice versa).

Judging the Test…
- A statistical test of hypothesis is effectively defined by the significance level (α) and the sample size (n), both of which are selected by the statistics practitioner.
- Therefore, if the probability of a Type II error (β) is judged to be too large, we can reduce it by increasing α and/or increasing the sample size n.

Judging the Test…
- For example, suppose we increased n from a sample size of 400 account balances to 1,000…
- The probability of a Type II error (β) goes to a negligible level, while α remains at 5%.

Judging the Test…
- The power of a test is defined as 1 − β.
- It represents the probability of rejecting the null hypothesis when it is false.

Error Rates and Power (H0 and H1 = null and alternative hypotheses)

Fate of H0    H0 actually true     H1 actually true
Accept H0     100 - α              β
Reject H0     α                    100 - β  (POWER)

Factors Affecting Power
- Increasing overall sample size increases power.
- Having unequal group sizes usually reduces power.
- A larger size of the effect being tested increases power.
- Setting a lower significance level decreases power.
- Violations of the assumptions underlying the test often decrease power substantially.

Exercises
- Exercises z-test (see Word document)

The t-test

Recall the t distribution
- Take a random sample of size n from a N(μ, σ²) population. Then $\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$ has a standard normal distribution.
- Consider $\frac{\bar{X} - \mu}{S/\sqrt{n}}$. This is approximately normal if n is large.
- If n is small, S is not expected to be close to σ; S introduces additional variability. Thus this statistic will be more variable than a standard normal random variable.
- This statistic follows a t distribution with n − 1 degrees of freedom.

Confidence Intervals
- Suppose that the population is normally distributed with mean μ and variance σ². Then:
  - If σ is known, a 100(1 − α)% confidence interval for μ is $\bar{x} \pm z_{\alpha/2}\,\sigma/\sqrt{n}$.
  - If σ is not known, a 100(1 − α)% confidence interval for μ is $\bar{x} \pm t_{\alpha/2}(n-1)\,s/\sqrt{n}$.

Overview of the t-test
- The t-test is used to help make decisions about population values.
- There are two main forms of the t-test, one for a single sample and one for two samples.
- The one-sample t-test is used to test whether a population has a specific mean value.
- The two-sample t-test is used to test whether population means are equal, e.g., do training and control groups have the same mean?

One-sample t-test
We can use a confidence interval to "test" or decide whether a population mean has a given value. For example, suppose we want to test whether the mean height of women at the University of South Florida (USF) is less than 68 inches.
- We randomly sample 50 women students at USF.
- We find that their mean height is 63.05 inches; the SD of height in the sample is 5.75 inches.
- We find the standard error of the mean by dividing SD by sqrt(N): 5.75/sqrt(50) = .81.
- The critical value of t with (50 − 1) df is 2.01 (find this in a t-table, alpha = 0.025).
- Our confidence interval is, therefore, 63.05 ± 2.01 × .81.

One-sample t-test example (confidence interval view)
N = 50, hypothesized population mean = 68, M = 63.05, SD = 5.75, $S_{\bar{X}}$ = .81, t = 2.01.
Take a sample, set a confidence interval around the sample mean: CI = $\bar{X} \pm 1.63$.
Does the interval contain the hypothesized value (68)?
[Figure: histogram of sample heights in inches, 40-80.]

One-sample t-test example (t-distribution view)
$\bar{X}$ = 63.05, μ = 68, $\bar{X} - \mu$ = −4.95, $S_{\bar{X}}$ = .81.
$t = \frac{\bar{X} - \mu}{S_{\bar{X}}} = \frac{-4.95}{.81} = -6.11$
The sample mean is roughly six standard deviations (standard errors) from the hypothesized population mean. If the population mean is really 68 inches, it is very, very unlikely that we would find a sample with a mean as small as 63.05 inches.
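A sketch of this calculation from the slide's summary statistics (with the raw heights you could call scipy.stats.ttest_1samp directly):

```python
from math import sqrt
from scipy.stats import t

m, mu0, sd, n = 63.05, 68, 5.75, 50
se = sd / sqrt(n)                        # standard error ~0.81
t_stat = (m - mu0) / se                  # ~ -6.1
crit = t.ppf(0.975, df=n - 1)            # ~2.01
print(t_stat, crit)
print((m - crit * se, m + crit * se))    # 95% CI ~ (61.4, 64.7): excludes 68
```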

Two-sample t-test
- Used when we have two groups, e.g.:
  - experimental vs. control group
  - males vs. females
  - new training vs. old training method
- Tests whether group population means are the same:
  - means are just the same or different (nondirectional), or
  - we can predict one group higher (directional).

Sampling Distribution of Mean Differences
- Suppose we sample 2 groups of size 50 at random from USF. We measure the height of each person and find the mean for each group. Then we subtract the mean for group 1 from the mean for group 2. Suppose we do this over and over: we will then have a sampling distribution of mean differences.
- If the two groups are sampled at random from 1 population, the mean of the differences in the long run will be zero, because the mean for both groups will be the same.
- The standard deviation of the sampling distribution will be:
$\sigma_{\bar{X}_1 - \bar{X}_2} = \sqrt{\sigma_{\bar{X}_1}^2 + \sigma_{\bar{X}_2}^2}$
The standard error of the difference is the root of the sum of squared standard errors of the mean.

Example of the Standard Error of the Difference in Means
Suppose that at USF the mean height is 68 inches and the standard deviation of height is 6 inches. Suppose we sampled people 100 at a time into two groups. We would expect that the average mean difference would be zero. What would the standard deviation of the distribution of differences be?
$\sigma_{\bar{X}} = \frac{SD}{\sqrt{N}} = \frac{6}{10} = .6$
$\sigma_{\bar{X}_1 - \bar{X}_2} = \sqrt{\sigma_{\bar{X}_1}^2 + \sigma_{\bar{X}_2}^2} = \sqrt{\frac{36}{100} + \frac{36}{100}} = \sqrt{\frac{72}{100}} = .85$
The standard error for each group mean is .6; for the difference in means, it is .85.

Estimating the Standard Error of Mean Differences
The USF scenario we just worked was based on population information: $\sigma_{\bar{X}_1 - \bar{X}_2} = \sqrt{\sigma_{\bar{X}_1}^2 + \sigma_{\bar{X}_2}^2}$. We generally don't have population values; we usually estimate population values with sample data, thus:
$S_{\bar{X}_1 - \bar{X}_2} = \sqrt{S_{\bar{X}_1}^2 + S_{\bar{X}_2}^2}$, where $S_{\bar{X}}^2 = \frac{S^2}{N}$
All this says is that we replace the population variance of error with the appropriate sample estimators.

Pooled Standard Error
$S_{\bar{X}_1 - \bar{X}_2} = \sqrt{S_{\bar{X}_1}^2 + S_{\bar{X}_2}^2}$ — we can use this formula when the sample sizes for the two groups are equal. When the sample sizes are not equal across groups, we find the pooled standard error. The pooled standard error is a weighted average, where the weights are the groups' degrees of freedom:
$S_{\bar{X}_1 - \bar{X}_2} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{(n_1 - 1) + (n_2 - 1)}\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}$

Back to the Two-Sample t
The formula for the two-sample t-test for independent samples looks like this:
$t_{\bar{X}_1 - \bar{X}_2} = \frac{\bar{X}_1 - \bar{X}_2}{S_{\bar{X}_1 - \bar{X}_2}}$
This says we find the value of t by taking the difference in the two sample means and dividing by the standard error of the difference in means.

Example of the two-sample t: Empathy by College Major
Suppose we have a professionally developed test of empathy. The test has people view film clips and guess what the people in the clips are feeling. Scores come from comparing what people guess to what the people in the films said they felt at the time. We want to know whether Psychology majors have higher scores on average on this test than do Physics majors. No direction; we just want to know if there is a difference. So we find some (N = 15) of each major and give each the test.

Empathy Scores

Person       1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
Psychology  10 12 13 10  8 15 13 14 10 12 10 12 13 10  8
Physics      8 14 12  8 12  9 10 11 12 13  8 14 12  8 12

Empathy — from the data:

                 Psychology      Physics
N                15              15
Mean             11.33           10.87
SD               2.09            2.20
SD²              4.38            4.84
$S_{\bar{X}}^2$  4.38/15 = .292  4.84/15 = .323

Term                          Calculation          Result
$\bar{X}_1 - \bar{X}_2$       11.33 − 10.87        .46
$S_{\bar{X}_1 - \bar{X}_2}$   sqrt(.292 + .323)    .78
t                             .46/.78              .59
df                            15 + 15 − 2          28
t(.05, 28 df), 2-tail                              2.05

Since .59 < 2.05, p > .05: n.s.
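The whole empathy calculation can be reproduced in a few lines (a sketch; scipy.stats.ttest_ind pools the variances by default, matching the pooled standard error above):

```python
from scipy.stats import ttest_ind

psych   = [10, 12, 13, 10, 8, 15, 13, 14, 10, 12, 10, 12, 13, 10, 8]
physics = [8, 14, 12, 8, 12, 9, 10, 11, 12, 13, 8, 14, 12, 8, 12]

res = ttest_ind(psych, physics)   # equal_var=True by default
print(res.statistic)              # ~0.59
print(res.pvalue)                 # ~0.56 > 0.05 -> not significant
```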

Exercise
- Exercises t-test (see Word document)

Chi-square

Background:
1. Suppose there are n observations.
2. Each observation falls into a cell (or class).
3. Observed frequencies in each cell: O1, O2, O3, …, Ok. The sum of the observed frequencies is n: O1 + O2 + O3 + … + Ok = n.
4. Expected, or theoretical, frequencies: E1, E2, E3, …, Ek, with E1 + E2 + E3 + … + Ek = n.

Categories          1st  2nd  3rd  ...  kth  Total
Observed frequency  O1   O2   O3   ...  Ok   n
Expected frequency  E1   E2   E3   ...  Ek   n

Goal:
1. Compare the observed frequencies with the expected frequencies.
2. Decide whether the observed frequencies seem to agree or seem to disagree with the expected frequencies.
Methodology: use a chi-square statistic:
$\chi^2 = \sum_{\text{all cells}} \frac{(O - E)^2}{E}$
Small values of χ²: observed frequencies close to expected frequencies. Large values of χ²: observed frequencies do not agree with expected frequencies.

Sampling Distribution of χ²*: When n is large and all expected frequencies are greater than or equal to 5, then χ²* has approximately a chi-square distribution.
Recall — properties of the chi-square distribution:
1. χ² is nonnegative in value; it is zero or positively valued.
2. χ² is not symmetrical; it is skewed to the right.
3. χ² is distributed so as to form a family of distributions, a separate distribution for each different number of degrees of freedom.

Critical values for chi-square:
1. Identified by degrees of freedom (df) and the area under the curve to the right of the critical value.
2. χ²(df, α): the critical value of a chi-square distribution with df degrees of freedom and α area to the right.
3. The chi-square distribution is not symmetrical: critical values associated with right and left tails are given separately.
4. See Table.

Example: Find χ²(16, 0.05).
From the table (df = 16, area to the right = 0.05): χ²(16, 0.05) = 26.3.

Testing Procedure:
1. H0: the probabilities p1, p2, …, pk are correct. Ha: at least two probabilities are incorrect.
2. Test statistic: $\chi^2 = \sum_{\text{all cells}} \frac{(O - E)^2}{E}$
3. Use a one-tailed critical region: the right-hand tail.
4. Degrees of freedom: df = k − 1.
5. Expected frequencies: $E_i = n \cdot p_i$
6. To ensure a good approximation to the chi-square distribution, each expected frequency should be at least 5 ($E_i \ge 5$).

Example: A market research firm conducted a consumer-preference experiment to determine which of 5 new breakfast cereals was the most appealing to adults. A sample of 100 consumers tried each cereal and indicated the cereal he or she preferred. The results are given in the following table:

Cereal     A   B   C   D   E   Total
Frequency  25  17  15  22  21  100

Is there any evidence to suggest the consumers had a preference for one cereal, or did they indicate each cereal was equally likely to be selected? Use α = 0.05.

Solution: If no preference was shown, we expect the 100 consumers to be equally distributed among the 5 cereals. Thus, if no preference is given, we expect (100)(0.2) = 20 consumers in each class.
1. The Set-up:
a. Population parameter of concern: the preference for each cereal, i.e., the probability that a particular cereal is selected.
b. The null and alternative hypotheses: H0: there was no preference shown (equally distributed); Ha: there was a preference shown (not equally distributed).
2. The Hypothesis Test Criteria:
a. Assumptions: the 100 consumers represent a random sample.
b. Test statistic: χ²* with df = k − 1 = 5 − 1 = 4.
c. Level of significance: α = 0.05.

3. The Sample Evidence:
a. Sample information: the table given in the statement of the problem.
b. Calculate the value of the test statistic:

O    E    O − E   (O−E)²/E
25   20    5      1.25
17   20   −3      0.45
15   20   −5      1.25
22   20    2      0.20
21   20    1      0.05
100  100   0      3.20

χ²* = 3.2

4. The Probability Distribution (p-Value Approach):
a. The p-value: P = P(χ²* > 3.2 | df = 4). Using a computer: P = 0.5249.
b. The p-value is larger than the level of significance.
5. The Probability Distribution (Classical Approach):
a. Critical value: χ²(k − 1, 0.05) = χ²(4, 0.05) = 9.49.
b. χ²* is not in the critical region.
6. The Results:
a. Decision: fail to reject H0.
b. Conclusion: at the 0.05 level of significance, there is no evidence to suggest the consumers showed a preference for any one cereal.
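Steps 3 and 4 in code (a sketch; scipy.stats.chisquare uses equal expected frequencies by default):

```python
from scipy.stats import chisquare

observed = [25, 17, 15, 22, 21]
chi2_stat, p = chisquare(observed)   # expected = 20 in each of the 5 cells
print(chi2_stat, p)                  # 3.2, ~0.52 -> fail to reject H0
```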

r × c Contingency Table:
1. Used to test the independence of the row factor and the column factor.
2. r: number of rows; c: number of columns.
3. R1, R2, …, Rr and C1, C2, …, Cc: marginal totals; n = grand total.
4. Expected frequency in the ith row and the jth column:
$E_{i,j} = \frac{\text{Row total} \times \text{Column total}}{\text{Grand total}} = \frac{R_i \times C_j}{n}$
5. Each $E_{i,j}$ should be at least 5.
6. Degrees of freedom: df = (r − 1)(c − 1).

Contingency table showing sample results and expected values (in parentheses):

Tax Reform   Democrat     Republican   Independent   Total
Yes          34 (23.98)   11 (15.33)   12 (17.69)    57
No           10 (17.25)   16 (11.03)   15 (12.72)    41
Unsure       17 (19.77)   12 (12.64)   18 (14.59)    47
Total        61           39           45            145

$\chi^{2*} = \sum_{\text{all cells}} \frac{(O - E)^2}{E} = 14.16$

4. The Probability Distribution (p-Value Approach):
a. The p-value: P = P(χ²* > 14.16 | df = 4). By computer: P = 0.0068.
b. The p-value is smaller than the level of significance, α.
5. The Probability Distribution (Classical Approach):
a. Critical value: χ²(4, 0.01) = 13.3.
b. χ²* is in the critical region.
6. The Results:
a. Decision: reject H0.
b. Conclusion: there is evidence to suggest that opinion on tax reform and political party are not independent.
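The same test on the tax-reform table with scipy.stats.chi2_contingency (a sketch; the function also returns the expected counts):

```python
from scipy.stats import chi2_contingency

table = [[34, 11, 12],   # Yes
         [10, 16, 15],   # No
         [17, 12, 18]]   # Unsure

chi2_stat, p, dof, expected = chi2_contingency(table)
print(chi2_stat, p, dof)  # ~14.16, ~0.0068, 4 -> reject independence
```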

ANOVA — Analysis of Variance

From t to F…
- In the independent-samples t test, you learned how to use the t distribution to test the hypothesis of no difference between two population means.
- Suppose, however, that we wish to know about the relative effect of three or more different "treatments"?

. ± There are so many comparisons that some will be significant by chance.From t to F« ‡ We could use the t test to make comparisons among each possible combination of two means. this method is inadequate in several ways. ± Any statistic that is based on only part of the evidence (as is the case when any two groups are compared) is less stable than one based on all of the evidence. ± It is tedious to compare all possible combinations of groups. ‡ However.

From t to F…
- What we need is some kind of survey test that will tell us whether there is any significant difference anywhere in an array of categories.
- If it tells us no, there will be no point in searching further.
- Such an overall test of significance is the F test, or the analysis of variance (ANOVA).

The logic of ANOVA
- Hypothesis testing in ANOVA is about whether the means of the samples differ more than you would expect if the null hypothesis were true.
- This question about means is answered by analyzing variances.
- Among other reasons, you focus on variances because when you want to know how several means differ, you are asking about the variances among those means.

Two Sources of Variability
- In ANOVA, an estimate of variability between groups is compared with variability within groups.
  - Between-group variation is the variation among the means of the different treatment conditions, due to chance (random sampling error) and treatment effects, if any exist.
  - Within-group variation is the variation due to chance (random sampling error) among individuals given the same treatment.

ANOVA
Total variation among scores =
- within-groups variation: variation due to chance, plus
- between-groups variation: variation due to chance and treatment effect (if any exists).

Variability Between Groups

- There is a lot of variability from one mean to the next.
- Large differences between means probably are not due to chance.
- It is difficult to imagine that all six groups are random samples taken from the same population.
- The null hypothesis is rejected, indicating a treatment effect in at least one of the groups.

Variability Within Groups

- Same amount of variability between group means.
- However, there is more variability within each group.
- The larger the variability within each group, the less confident we can be that we are dealing with samples drawn from different populations.

The F Ratio

$F = \frac{\text{Between-group variability}}{\text{Within-group variability}}$

ANOVA (F)
Total variation among scores = within-groups variation (variation due to chance) + between-groups variation (variation due to chance and treatment effect, if any exists).

Two Sources of Variability

$F = \frac{\text{Variability Between Groups}}{\text{Variability Within Groups}}$

If there is no treatment effect, both terms reflect only chance variation, so F ≈ 1; a treatment effect inflates the between-group variability and pushes F above 1.

The F Ratio

$F = \frac{MS_{between}}{MS_{within}}$  (mean squares between / mean squares within)

ANOVA (F): total variation among scores = within-groups variation (due to chance) + between-groups variation (due to chance and treatment effect, if any exists).

The F Ratio

$MS_{within} = \frac{SS_{within}}{df_{within}}$  (sum of squares within / degrees of freedom within)

$MS_{between} = \frac{SS_{between}}{df_{between}}$  (sum of squares between / degrees of freedom between)

Each mean square is a sum of squares divided by its degrees of freedom, just like the sample variance $s^2 = \frac{\sum (X - \bar{X})^2}{n-1}$.

The F Ratio

$SS_{total} = SS_{between} + SS_{within}$  (sum of squares total)

$df_{total} = df_{between} + df_{within}$  (degrees of freedom total)

The F Ratio: SS Between
$SS_{between} = \sum n(\bar{X}_{group} - \bar{X}_{grand})^2$
Computational form: find each group total T, square it, and divide by the number of subjects in the group; then subtract $G^2/N$:
$SS_{between} = \sum \frac{T^2}{n} - \frac{G^2}{N}$
where G = grand total (add all of the scores together, then square the total) and N = total number of subjects.

The F Ratio: SS Within
$SS_{within} = \sum (X - \bar{X}_{group})^2$
Computational form: square each individual score and then add up all of the squared scores; subtract each squared group total divided by its group size:
$SS_{within} = \sum X^2 - \sum \frac{T^2}{n}$
where T = group total and n = number of subjects in each group.

The F Ratio: SS Total
$SS_{total} = \sum (X - \bar{X}_{grand})^2$, with $(X - \bar{X}_{grand}) = (\bar{X}_{group} - \bar{X}_{grand}) + (X - \bar{X}_{group})$
Computational form: square each score, then add all of the squared scores together, and subtract $G^2/N$:
$SS_{total} = \sum X^2 - \frac{G^2}{N}$
where G = grand total (add all of the scores together, then square the total) and N = total number of subjects.

An Example: ANOVA
- A study compared the intensity of pain among three groups of treatment.
- Determine the significance of the difference among groups, using the .05 level of significance.

Treatment 1   Treatment 2   Treatment 3
7             12            8
6             8             10
5             9             12
6             11            10

An Example: ANOVA
- State the research hypothesis: do ratings of the intensity of pain differ for the three treatments?
- State the statistical hypothesis: $H_0: \mu_1 = \mu_2 = \mu_3$; $H_A$: $H_0$ is false.

Nondirectional Test
- In testing the hypothesis of no difference between two means, a distinction was made between directional and nondirectional alternative hypotheses.
- Such a distinction no longer makes sense when the number of means exceeds two.
- H0 may be false in any number of ways: two or more group means may be alike and the remainder differ, all may be different, and so on.
- A directional test is possible only in situations where there are only two ways (directions) that the null hypothesis could be false.

Degrees of Freedom
- Between: $df_{between}$ = number of groups − 1
- Within: $df_{within} = (n_1 - 1) + (n_2 - 1) + (n_3 - 1) + \ldots$ = total number of subjects − total number of groups

An Example: ANOVA
- Set the decision rule: α = .05
- $df_{between}$ = number of groups − 1 = 3 − 1 = 2
- $df_{within}$ = (n1 − 1) + (n2 − 1) + (n3 − 1) = (4 − 1) + (4 − 1) + (4 − 1) = 9

An Example: ANOVA
- Set the decision rule: with α = .05, $df_{between}$ = 2 and $df_{within}$ = 9, the critical value is $F_{crit}$ = 4.26.

An Example: ANOVA — calculate the test statistic.

Treatment A: 7, 6, 5, 6    (T = 24; X²: 49, 36, 25, 36; ΣX² = 146)
Treatment B: 12, 8, 9, 11  (T = 40; X²: 144, 64, 81, 121; ΣX² = 410)
Treatment C: 8, 10, 12, 10 (T = 40; X²: 64, 100, 144, 100; ΣX² = 408)
Grand total G = 104.

$SS_{within} = \sum X^2 - \sum \frac{T^2}{n} = (146 + 410 + 408) - \left[\frac{24^2}{4} + \frac{40^2}{4} + \frac{40^2}{4}\right] = 964 - [144 + 400 + 400] = 20$

An Example: ANOVA — calculate the test statistic.

$SS_{between} = \sum \frac{T^2}{n} - \frac{G^2}{N} = \frac{24^2}{4} + \frac{40^2}{4} + \frac{40^2}{4} - \frac{104^2}{12} = 144 + 400 + 400 - 901.33 = 42.67$

An Example: ANOVA
$MS_{between} = \frac{SS_{between}}{df_{between}} = \frac{42.67}{2} = 21.34$
$MS_{within} = \frac{SS_{within}}{df_{within}} = \frac{20}{9} = 2.22$
$F = \frac{MS_{between}}{MS_{within}} = \frac{21.34}{2.22} = 9.61$

An Example: ANOVA
- Determine if your result is significant: reject H0, since 9.61 > 4.26.
- Interpret your results: there is a significant difference between the treatments.
- ANOVA Summary Table (in the literature, the ANOVA results are often summarized in a table):

Source           df   SS      MS      F
Between groups    2   42.67   21.34   9.61
Within groups     9   20      2.22
Total            11   62.67
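The worked example can be verified with scipy.stats.f_oneway (a sketch; the pain ratings are from the table above):

```python
from scipy.stats import f_oneway

t1 = [7, 6, 5, 6]
t2 = [12, 8, 9, 11]
t3 = [8, 10, 12, 10]

f_stat, p = f_oneway(t1, t2, t3)
print(f_stat, p)   # F ~ 9.6 with df = (2, 9), p ~ 0.006 < 0.05 -> reject H0
```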

After the F Test
- When an F turns out to be significant, we know, with some degree of confidence, that there is a real difference somewhere among our means.
- But if there are more than two groups, we don't know where that difference is.
- Post hoc tests have been designed for doing pair-wise comparisons after a significant F is obtained.

Exercise 6: ANOVA
A psychologist interested in artistic preference randomly assigns a group of 15 subjects to one of three conditions in which they view a series of unfamiliar abstract paintings.
- The 5 participants in the "famous" condition are led to believe that these are each famous paintings.
- The 5 participants in the "critically acclaimed" condition are led to believe that these are paintings that are not famous but are highly thought of by a group of professional art critics.
- The 5 in the control condition are given no special information about the paintings.
Does what people are told about paintings make a difference in how well they are liked? Use the .01 level of significance.

Famous   Critically Acclaimed   No Information
10       5                      4
7        1                      6
5        3                      9
10       7                      3
8        4                      3

Linear models

Review linear regression
Simplest form: fit a straight line through data points {xi, yi}, i = 1, …, n, with n > 2.
y = a*x + b
x = predictor; y = predicted value (outcome); a = slope; b = y-axis intercept.
Goal: determine the parameters a and b.

Review linear regression
Find values for a and b such that the sum of squared errors is minimized.

Review linear regression
Predicted values: y* = ax + b; measurements: y.
Minimize $R(a,b) = \sum_{i=1}^{n} (y_i^* - y_i)^2 = \sum_{i=1}^{n} ((a x_i + b) - y_i)^2$
A minimum of a function (R) is characterized by a zero first derivative with respect to the parameters.

Intermezzo: minimum of a function
$y = x^2 \;\Rightarrow\; \frac{dy}{dx} = 2x$
$\min y \;\Rightarrow\; \frac{dy}{dx} = 2x = 0 \;\Rightarrow\; x = 0$
At x = 1.1: dy/dx = 2 · (1.1) = 2.2. At x = 0: dy/dx = 2 · (0) = 0.

Review linear regression
A minimum of a function (R) is characterized by a zero first derivative with respect to the parameters; this provides the parameter values for the model function.

Review linear regression
Minimize $R(a,b) = \sum_{i=1}^{n} ((a x_i + b) - y_i)^2$
$\frac{\partial R(a,b)}{\partial a} = \sum_{i=1}^{n} (a x_i + b - y_i)\,x_i = 0$
$\frac{\partial R(a,b)}{\partial b} = \sum_{i=1}^{n} (a x_i + b - y_i) = 0$
Solving gives explicit expressions for the parameters a and b!
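Solving the two normal equations gives closed-form values for a and b. A minimal sketch (Python with NumPy; the synthetic data and seed are chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 25)
y = 2.0 * x + 1.0 + rng.normal(0, 1, x.size)   # true a = 2, b = 1, plus noise

# closed-form least-squares solution of dR/da = dR/db = 0
a = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b = y.mean() - a * x.mean()

# same answer from NumPy's built-in polynomial fit
a_np, b_np = np.polyfit(x, y, deg=1)
print(a, b, a_np, b_np)
```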

Linear and nonlinear models 1
- (Non)linear refers to the parameters (β0, β1, …).
- Examples of linear models:
  y = β0 + β1 x (linear)
  y = β0 + β1 x + β2 x² (polynomial)
  y = β0 + β1 log(x) (log)

Example: $y = 2x + a x^2$ — y varies linearly with a for fixed x.

Example: $y = a \log(x)$ — y varies linearly with a for fixed x.

Linear and nonlinear models 2
$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon$
- a linear model (in the parameters); y is a linear combination of the x's.
$y = \beta_0 + \frac{\beta_1}{x_1} + \beta_2 x_2 + \epsilon$
- y is not a linear combination of the x's, but the model is still linear in the parameters.
- We can use MLR if the variables are transformed: x1' = 1/x1, x2' = x2, giving y = β0 + β1 x1' + β2 x2' + ε.

Linear and nonlinear models 3
- Models like $y = e^{\beta_0 + \beta_1 x}$ cannot be linearized and must be solved with nonlinear regression techniques.

Linear and nonlinear models 4
- Nonlinear model: at least one of the derivatives of the function with respect to the parameters depends on at least one of the parameters (thus, the slope of the line at fixed x is not constant).
- $y = e^{\beta_0 + \beta_1 x}$: $\frac{\partial y}{\partial \beta_1} = x\, e^{\beta_0 + \beta_1 x}$ — a nonlinear model.
- $y = \beta \log(x)$: $\partial y/\partial \beta = \log(x)$ — a linear model.
- $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2$: $\partial y/\partial \beta_1 = x_1$ — a linear model.

Significance testing and multiple testing correction

Multiple testing
- Say that you perform a statistical test with a 0.05 threshold, but you repeat the test on twenty different observations.
- Assume that all of the observations are explainable by the null hypothesis.
- What is the chance that at least one of the observations will receive a p-value less than 0.05?

Multiple testing
Say that you perform a statistical test with a 0.05 threshold, but you repeat the test on twenty different observations. Assuming that all of the observations are explainable by the null hypothesis, what is the chance that at least one of the observations will receive a p-value less than 0.05?
- Pr(making a mistake) = 0.05
- Pr(not making a mistake) = 0.95
- Pr(not making any mistake) = 0.95²⁰ = 0.358
- Pr(making at least one mistake) = 1 − 0.358 = 0.642
There is a 64.2% chance of making at least one mistake.

Percentage sugar in candy (process 1) vs. percentage sugar in candy (process 2): no real difference. A statistical test (alpha = 0.05) on 100 candy bars from each process has a 5% chance of finding a difference.
Suppose the company is required to do an expensive tuning of process 2 if a difference is found. They are willing to accept a Type I error of 5%: thus only a 5% chance of making the wrong decision.

Percentage sugar in candy (process 1) vs. (process 2): no difference.
Day 1, Day 2, …, Day 20: a statistical test (alpha = 0.05) is repeated each day → a 64.2% chance of finding at least one significant difference. Overall Type I error = 64.2%.

Bonferroni correction
- Assume that the individual tests are independent.
- Divide the desired p-value threshold by the number of tests performed.
- For the previous example: 0.05 / 20 = 0.0025.
- Pr(making a mistake) = 0.0025
- Pr(not making a mistake) = 0.9975
- Pr(not making any mistake) = 0.9975²⁰ = 0.9512
- Pr(making at least one mistake) = 1 − 0.9512 = 0.0488
…meaning that the probability that one of the total number of tests is wrongfully called significantly different is of magnitude alpha (0.0488).
- This is also known as correcting for the family-wise error (FWE).
- It is clear, though, that this strongly increases the beta error (false negatives): many tests that should show an effect do not get below the corrected threshold.
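A sketch of the Bonferroni arithmetic for the m = 20 case above:

```python
m, alpha = 20, 0.05
per_test = alpha / m                 # corrected threshold: 0.0025
fwe = 1 - (1 - per_test) ** m        # family-wise error ~0.0488
print(per_test, fwe)
```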

Percentage sugar in candy (process 1) vs. (process 2): no difference.
Day 1, Day 2, …, Day 20: a statistical test (alpha = 0.0025) each day → a 4.88% chance of finding at least one significant difference. Overall Type I error = 4.88%.

Multiple comparison

             # non-rejected hypotheses   # rejected hypotheses    Total
H0 = true    U                           V (false positives)      m0
H0 = false   T (false negatives)         S                        m1
Total        m − R                       R ("discoveries")        m

The False Discovery Rate (FDR) criterion — Benjamini and Hochberg
R = # rejected hypotheses = # discoveries; V of these may be in error = # false discoveries.
The error (type I) in the entire study is measured by
Q = V/R if R > 0; Q = 0 if R = 0
i.e., the proportion of false discoveries among the discoveries (0 if none found). FDR = E(Q).
Does it make sense?

000 So this error rate is scalable .unbearable So this error rate is adaptive The same argument holds when inspecting 10.bearable ‡ 3 false ones among 4 discovered .Does it make sense? Inspecting 100 features: ‡ 3 false ones among 60 discovered .

FDR controlling procedures: linear step-up procedure (BH procedure, FDR procedure)
- Order the p-values: P(1) ≤ P(2) ≤ … ≤ P(m)
- Let $k = \max\{i : p_{(i)} \le (i/m)\,q\}$
- Reject H0(1), H0(2), …, H0(k)
- If no such k exists, reject none.

FDR example: q = 0.05, m = 1000

Rank (j)   (jq)/m     p-value
1          0.00005    0.0000008
2          0.00010    0.0000012
3          0.00015    0.0000013
4          0.00020    0.0000056
5          0.00025    0.0000078
6          0.00030    0.0000235
7          0.00035    0.0000945
8          0.00040    0.0002450
9          0.00045    0.0004700
10         0.00050    0.0008900
...        ...        ...
1000       0.05000    1.0000000

- Choose the threshold so that, for all the genes above the line, (jq)/m is larger than the corresponding p-value.
- Thus, the effective p-value threshold here is 0.0002450 (rank 8).
- Approximately 5% of the examples above the line are expected to be false positives.
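A sketch of the BH step-up rule applied to the first ten p-values of this table (q = 0.05, m = 1000):

```python
import numpy as np

q, m = 0.05, 1000
p = np.array([8e-7, 1.2e-6, 1.3e-6, 5.6e-6, 7.8e-6,
              2.35e-5, 9.45e-5, 2.45e-4, 4.7e-4, 8.9e-4])  # already sorted

ranks = np.arange(1, p.size + 1)
passing = np.nonzero(p <= ranks * q / m)[0]   # indices meeting p(i) <= (i/m)q
k = passing.max() + 1 if passing.size else 0
print(k)                         # 8: reject the 8 smallest p-values
print(p[k - 1] if k else None)   # effective threshold 0.000245
```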

FDR and p-value
- The False Discovery Rate (FDR) of a set of predictions is the expected percentage of false predictions in the set of predictions. For example, if the algorithm returns 100 genes with a false discovery rate of .3, then we should expect 70 of them to be correct.
- The FDR is very different from a p-value. In the example above, a set of 100 predictions of which 70 are correct might be very useful, especially if there are thousands of genes on the array, most of which are not differentially expressed; as such, a much higher FDR can be tolerated than with a p-value.
- Meanwhile, an FDR of as high as .5 or even higher might be quite meaningful, whereas a p-value of .3 is generally unacceptable in any circumstance.

False discovery rate: when to use FWER and when to use FDR?
FWER = Pr(V > 0) = Pr(at least one false positive); FDR = E(Q) = E(V/R).
1. Choose FWER if high confidence in ALL selected genes is desired (for example, selecting candidate genes for RT-PCR validation). Loss of power is due to strong control of the type-I error.
2. Use the more flexible FDR procedures if certain proportions of false positives are tolerable (e.g., gene discovery).
