How to Calculate Outliers

12/11/2012
An outlier is a value in a data set that is far from the other values. Outliers can be caused by experimental or measurement errors, or by a long-tailed population. In the former case, it can be desirable to identify outliers and remove them from the data before performing a statistical analysis, because they can throw off the results so that they do not accurately represent the sample population. The simplest way to identify outliers is the quartile method.

1
Sort the data in ascending order. For example, take the data set {4, 5, 2, 3, 15, 3, 3, 5}. Sorted, the example data set is {2, 3, 3, 3, 4, 5, 5, 15}.

2
Find the median. This is the value at which half the data points are larger and half are smaller. If there is an even number of data points, the middle two are averaged. For the example data set, the middle points are 3 and 4, so the median is (3 + 4) / 2 = 3.5.

3
Find the upper quartile, Q3; this is the data point at which 25 percent of the data are larger. If the upper half of the data has an even number of points, average the two middle points. For the example data set, this is (5 + 5) / 2 = 5.

4
Find the lower quartile, Q1; this is the data point at which 25 percent of the data are smaller. If the lower half of the data has an even number of points, average the two middle points. For the example data, (3 + 3) / 2 = 3.

5
Subtract the lower quartile from the upper quartile to get the interquartile range, IQR. For the example data set, Q3 - Q1 = 5 - 3 = 2.

6
Multiply the interquartile range by 1.5. Add this to the upper quartile and subtract it from the lower quartile. Any data point outside these values is a mild outlier. For the example set, 1.5 x 2 = 3; 3 - 3 = 0 and 5 + 3 = 8. So any value less than 0 or greater than 8 would be a mild outlier. This means that 15 qualifies as a mild outlier.

7
Multiply the interquartile range by 3. Add this to the upper quartile and subtract it from the lower quartile. Any data point outside these values is an extreme outlier. For the example set, 3 x 2 = 6; 3 - 6 = -3 and 5 + 6 = 11. So any value less than -3 or greater than 11 would be an extreme outlier. This means that 15 qualifies as an extreme outlier.
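The seven steps above can be sketched in a short Python function (a minimal sketch; the helper names `median` and `outliers` are illustrative, not from the original article). It uses the same median-of-halves quartile convention the steps describe:

```python
def median(xs):
    """Middle value of a sorted list; average of the two middle values if even."""
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def outliers(data):
    """Quartile-method outlier detection as described in steps 1-7."""
    xs = sorted(data)                  # step 1: sort ascending
    half = len(xs) // 2
    q1 = median(xs[:half])             # step 4: median of the lower half
    q3 = median(xs[len(xs) - half:])   # step 3: median of the upper half
    iqr = q3 - q1                      # step 5: interquartile range
    mild = [x for x in xs              # step 6: beyond 1.5 * IQR fences
            if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
    extreme = [x for x in xs           # step 7: beyond 3 * IQR fences
               if x < q1 - 3 * iqr or x > q3 + 3 * iqr]
    return q1, q3, iqr, mild, extreme
```

Running `outliers([4, 5, 2, 3, 15, 3, 3, 5])` reproduces the worked example: Q1 = 3, Q3 = 5, IQR = 2, and 15 flagged as both a mild and an extreme outlier.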

Tips & Warnings
 
Extreme outliers are more indicative of a bad data point than mild outliers are. Examine the causes of all outliers carefully.

Read more: How to Calculate Outliers | eHow.com http://www.ehow.com/how_5201412_calculate-outliers.html

Detection of Outliers
Introduction
An outlier is an observation that appears to deviate markedly from other observations in the sample. Identification of potential outliers is important for the following reasons.

1. An outlier may indicate bad data. For example, the data may have been coded incorrectly or an experiment may not have been run correctly. If it can be determined that an outlying point is in fact erroneous, then the outlying value should be deleted from the analysis (or corrected if possible).

2. In some cases, it may not be possible to determine if an outlying point is bad data. Outliers may be due to random variation or may indicate something scientifically interesting. In any event, we typically do not want to simply delete the outlying observation. However, if the data contain significant outliers, we may need to consider the use of robust statistical techniques.

Labeling, Accommodation, Identification
Iglewicz and Hoaglin distinguish the following three issues with regard to outliers.

1. Outlier labeling - flag potential outliers for further investigation (i.e., are the potential outliers erroneous data, indicative of an inappropriate distributional model, and so on?).
2. Outlier accommodation - use robust statistical techniques that will not be unduly affected by outliers. That is, if we cannot determine that potential outliers are erroneous observations, do we need to modify our statistical analysis to more appropriately account for these observations?
3. Outlier identification - formally test whether observations are outliers.

This section focuses on the labeling and identification issues.

Normality Assumption
Identifying an observation as an outlier depends on the underlying distribution of the data. In this section, we limit the discussion to univariate data sets that are assumed to follow an approximately normal distribution. If the normality assumption for the data being tested is not valid, then a determination that there is an outlier may in fact be due to the non-normality of the data rather than the presence of an outlier.

For this reason, it is recommended that you generate a normal probability plot of the data before applying an outlier test. Although you can also perform formal tests for normality, the presence of one or more outliers may cause the tests to reject normality when it is in fact a reasonable assumption for applying the outlier test.

In addition to checking the normality assumption, the lower and upper tails of the normal probability plot can be a useful graphical technique for identifying potential outliers. In particular, the plot can help determine whether we need to check for a single outlier or whether we need to check for multiple outliers. The box plot and the histogram can also be useful graphical tools in checking the normality assumption and in identifying potential outliers.

Single Versus Multiple Outliers
Some outlier tests are designed to detect the presence of a single outlier, while others are designed to detect multiple outliers.

When choosing a test, two questions arise. Is the test designed for a single outlier, or is it designed for multiple outliers? If the test is designed for multiple outliers, does the number of outliers need to be specified exactly, or can we specify an upper bound for the number of outliers?

The following are a few of the more commonly used outlier tests for normally distributed data. This list is not exhaustive (a large number of outlier tests have been proposed in the literature).

1. Grubbs' Test - this is the recommended test when testing for a single outlier.
2. Tietjen-Moore Test - this is a generalization of the Grubbs' test to the case of more than one outlier. It has the limitation that the number of outliers must be specified exactly.
3. Generalized Extreme Studentized Deviate (ESD) Test - this test requires only an upper bound on the suspected number of outliers and is the recommended test when the exact number of outliers is not known.

The tests given here are essentially based on the criterion of "distance from the mean". This is not the only criterion that could be used. For example, the Dixon test, which is not discussed here, is based on a value being too large (or small) compared to its nearest neighbor.

Lognormal Distribution
The tests discussed here are specifically based on the assumption that the data follow an approximately normal distribution. If your data follow an approximately lognormal distribution, you can transform the data to normality by taking the logarithms of the data and then applying the outlier tests discussed here.

Further Information
Iglewicz and Hoaglin provide an extensive discussion of the outlier tests given above (as well as some not given above) and also give a good tutorial on the subject of outliers. Barnett and Lewis provide a book-length treatment of the subject. In addition to discussing additional tests for data that follow an approximately normal distribution, these sources also discuss the case where the data are not normally distributed.
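As a concrete illustration of the "distance from the mean" criterion, the Grubbs test statistic can be computed as below. This is a minimal sketch (the function name is illustrative): it returns only the statistic G and the suspect value; a full test would still compare G against a critical value from a t-distribution-based table, which is omitted here.

```python
import math

def grubbs_statistic(data):
    """Grubbs' test statistic: the largest absolute deviation from the
    sample mean, in units of the sample standard deviation."""
    n = len(data)
    mean = sum(data) / n
    # sample standard deviation (n - 1 in the denominator)
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    # the observation farthest from the mean is the suspected outlier
    g, suspect = max((abs(x - mean) / s, x) for x in data)
    return g, suspect
```

For the quartile-method example data {2, 3, 3, 3, 4, 5, 5, 15}, the mean is 5 and the sample standard deviation is about 4.17, giving G ≈ 2.40 for the suspect value 15; published 5 percent critical values for n = 8 are well below this, so 15 would be flagged.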

Occurrence and causes
In the case of normally distributed data, roughly 1 in 22 observations will differ by twice the standard deviation or more from the mean, and 1 in 370 will deviate by three times the standard deviation; see the three sigma rule[3] for details. In a sample of 1,000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number (see Poisson distribution), and not indicative of an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number.

In general, if the nature of the population distribution is known a priori, it is possible to test whether the number of outliers deviates significantly from what can be expected: for a given cutoff (so samples fall beyond the cutoff with probability p) of a given distribution, the number of outliers will follow a binomial distribution with parameter p, which can generally be well approximated by the Poisson distribution with λ = pn. Thus if one takes a normal distribution with a cutoff 3 standard deviations from the mean, p is approximately 0.3%, and for 1,000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ = 3.

Causes
Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end (King effect).

Caution
Unless it can be ascertained that the deviation is not significant, it is ill-advised to ignore the presence of outliers. Outliers that cannot be readily explained demand special attention; see kurtosis risk and black swan theory.

Identifying outliers
There is no rigid mathematical definition of what constitutes an outlier; determining whether or not an observation is an outlier is ultimately a subjective exercise. Outlier detection[4] has been used for centuries to detect and, where appropriate, remove anomalous observations from data. It can identify system faults and fraud before they escalate with potentially catastrophic consequences. The original outlier detection methods were arbitrary, but now principled and systematic techniques are used, drawn from the full gamut of computer science and statistics. There are three fundamental approaches to the problem of outlier detection:

Type 1 - Determine the outliers with no prior knowledge of the data. This is essentially a learning approach analogous to unsupervised clustering. The approach processes the data as a static distribution, pinpoints the most remote points, and flags them as potential outliers.

Type 2 - Model both normality and abnormality. This approach is analogous to supervised classification and requires pre-labeled data, tagged as normal or abnormal.

Type 3 - Model only normality (or in a few cases model abnormality). This is analogous to a semi-supervised recognition or detection task: the normal class is taught, but the algorithm learns to recognize abnormality.

Model-based methods, which are commonly used for identification, assume that the data are from a normal distribution and identify observations which are deemed "unlikely" based on mean and standard deviation:

Chauvenet's criterion
Grubbs' test for outliers
Peirce's criterion[5][6][7][8] - "It is proposed to determine in a series of m observations the limit of error, beyond which all observations involving so great an error may be rejected. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations." (Quoted in the editorial note on page 516 to Peirce (1982 edition) from A Manual of Astronomy 2:558 by Chauvenet.)
Dixon's Q test
ASTM E178 Standard Practice for Dealing With Outlying Observations

Other methods flag observations based on measures such as the interquartile range. For example, if Q1 and Q3 are the lower and upper quartiles respectively, then one could define an outlier to be any observation outside the range [Q1 - k(Q3 - Q1), Q3 + k(Q3 - Q1)] for some constant k. Other approaches are distance-based[9][10] and frequently use the distance to the k-nearest neighbors to label observations as outliers or non-outliers.

Working with outliers
The choice of how to deal with an outlier should depend on the cause.

Retention
Even when a normal distribution model is appropriate to the data being analyzed, outliers are expected for large sample sizes and should not automatically be discarded if that is the case. The application should use a classification algorithm that is robust to outliers to model data with naturally occurring outlier points.

Exclusion
Deletion of outlier data is a controversial practice frowned on by many scientists and science instructors; while mathematical criteria provide an objective and quantitative method for data rejection, they do not make the practice more scientifically or methodologically sound, especially in small sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known. An outlier resulting from an instrument reading error may be excluded, but it is desirable that the reading is at least verified.[11] If a data point (or points) is excluded from the data analysis, this should be clearly stated on any subsequent report.

Non-normal distributions
The possibility should be considered that the underlying distribution of the data is not approximately normal, having "fat tails". For instance, when sampling from a Cauchy distribution,[12] the sample variance increases with the sample size, the sample mean fails to converge as the sample size increases, and outliers are expected at far larger rates than for a normal distribution.

Alternative models
In cases where the cause of the outliers is known, it may be possible to incorporate this effect into the model structure, for example by using a hierarchical Bayes model or a mixture model.[13][14] In regression problems, an alternative approach may be to exclude only points which exhibit a large degree of influence on the estimated parameters, using a measure such as Cook's distance.
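The expected-outlier count under a normal model (the λ = pn approximation discussed above) can be sketched with the standard library alone; the function names here are illustrative:

```python
import math

def normal_tail_prob(k):
    """Two-sided probability that a normal observation deviates from
    the mean by more than k standard deviations."""
    # erfc(k / sqrt(2)) is exactly 2 * (1 - Phi(k)) for the standard normal CDF Phi
    return math.erfc(k / math.sqrt(2))

def expected_outliers(n, k=3):
    """Approximate Poisson rate lambda = p * n for the number of
    observations beyond k sigma in a sample of size n."""
    return normal_tail_prob(k) * n
```

For k = 3, `normal_tail_prob(3)` is about 0.0027 (the "approximately 0.3%" above), so `expected_outliers(1000)` is about 2.7, matching the rule of thumb that up to five 3-sigma observations in 1,000 are unremarkable while three in 100 are cause for concern.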
