
E. Garcia admin@miislita.com

Published on: July 8, 2010; Last Modified: January 8, 2011

Keywords: tutorial, standard errors, correlation coefficients, Fisher Transformation, Fisher, Olkin-Pratt

Abstract: This is a three-section tutorial on correlation coefficients. Section one discusses the proper way of computing r values. Section two describes statistical applications. Section three discusses recent research on the problem of correlation coefficients as biased estimators.

Section One

Introduction

In a recent tutorial (1), we explained that each statistic has its own standard error SE. In that tutorial a distinction was made between the SE of a population mean and that of a correlation coefficient:

SE(x̄) = s/√n   (Eq 1)

SE(r) = √[(1 − r²)/(n − 2)]   (Eq 2)

where x̄ is the sample mean, defined as an arithmetic average, s is the sample standard deviation, n is the number of observations of x, and r is a correlation coefficient. In this tutorial, we use r to mean Pearson's or Spearman's Correlation Coefficient. When necessary, Pearson's is denoted rp and Spearman's rs. Pearson's is used under normality and linearity assumptions, while no such assumptions are made for Spearman's. In an upcoming tutorial we will explain how both scenarios (linearity and nonlinearity) can be treated with Principal Component Analysis (PCA). Advances in the field make this possible.

Contrary to popular opinion, Spearman's (rs) can be computed for linear and nonlinear data. The only requirement is that the paired data be ranked and measured on an ordinal or continuous scale. Pearson's (rp) can be applied to nonlinear problems by proper transformation of variables. For instance, rp can be applied to problems described by an exponential law (y = ca^x) or a power law (y = cx^b). Note that these are two different laws, so do not use the terms exponential and power arbitrarily. Taking logarithms, we can write:

For an exponential law, log y = x log a + log c. If we define a new variable y* = log y, then rp can be computed for the x, y* paired data. In addition, plotting points with x, y* coordinates should give a straight line with slope equal to log a and intercept equal to log c.

For a power law, log y = b log x + log c. If we define two new variables x* = log x and y* = log y, then rp can be computed for the x*, y* paired data. Additionally, plotting points with x*, y* coordinates should give a straight line with slope equal to b and intercept equal to log c.

where a, b, and c are constants, a > 0, and b is a so-called power-law exponent. Thus, computing an rp value is possible. Note that it is still possible to rank the transformed variables and compute an rs value.
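The two transformations above can be sketched in a few lines of Python. This is a minimal illustration, not part of the tutorial: the constants, the noise level, and the helper name pearson_r are all made up for the example.

```python
# Sketch: Pearson's r applied to nonlinear laws after a log transformation.
import math
import random

def pearson_r(xs, ys):
    """Plain-sum implementation of Pearson's correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

random.seed(1)
x = [1 + 0.5 * i for i in range(20)]

# Exponential law y = c * a**x: correlate x against y* = log y.
y_exp = [3.0 * 1.4 ** xi * math.exp(random.gauss(0, 0.05)) for xi in x]
r_exp = pearson_r(x, [math.log(yi) for yi in y_exp])

# Power law y = c * x**b: correlate x* = log x against y* = log y.
y_pow = [3.0 * xi ** 2.0 * math.exp(random.gauss(0, 0.05)) for xi in x]
r_pow = pearson_r([math.log(xi) for xi in x], [math.log(yi) for yi in y_pow])

print(r_exp, r_pow)  # both close to +1 after the transformation
```

With only mild noise on the synthetic data, both transformed data sets are nearly linear, so both r values land close to +1.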

As a practical example of this, suppose it is to be determined whether there is a power or exponential law relation between any two on/off-page attributes in a document, or between one of these and document relevancy. After proper transformation of variables, their possible correlation can be examined.

Computing SE values serves a purpose. These are not computed for decorative purposes or to lure readers into buying a theory. SE values are computed for finding confidence intervals, for significance testing, and for hypothesis testing. In our previous tutorial, we advised readers against calculating the SE of r as some search engine marketers are doing. There are too many things wrong with the following statement: I literally took the standard deviation of the correlation coefficients for a metric and divided it by the square root of the number of samples. (2, 3). Calculating an SE in this way cannot be justified on statistical or mathematical grounds for several reasons:

1. From Equation 1, it is clear that the ratio s/√n is the SE of the sampling distribution for a mean value.

2. Correlation coefficients are not additive (4). One cannot add raw r values to compute an arithmetic average and then use this average to compute a standard deviation of r values. Consider this: for centered data, a correlation coefficient is a cosine value. As shown in the Appendix section, adding, subtracting, or averaging cosines does not give you a new cosine. Hence, computing an SE for r with Equation 1 is incorrect.

3. While x̄ can be computed directly from raw x values, indirect methods must be used for calculating a mean correlation coefficient. Since such a mean is also taken for a cosine value, subtracting it from individual r values does not give a cosine difference. See the Appendix section.

4. x̄ is an unbiased estimator of a monovariate distribution, while r is a biased estimator of a skewed bivariate distribution. Therefore, hypothesis testing with Equation 1 is not the same as with Equation 2.

To write this tutorial, we contacted fine scholars, experts in their own fields. They agreed with us on the above points. Their remarks are included in this tutorial with their permission. As mentioned before (1, 4), in order to compute a mean, individual r values have to be converted into additive quantities. Several techniques, each with its own assumptions and drawbacks, can be used: transcendental transformations, numerical expansions, weighted averages, or a combination of techniques. Some of these are discussed in the next sections.

The best known technique for transforming r values into additive quantities is the r to Z transformation due to Fisher (5). The Fisher Transformation is defined as

Z = ½ ln[(1 + r)/(1 − r)]   (Eq 3)

Its standard error is given by

SE(Z) = 1/√(n − 3)   (Eq 4)

A comparison between Equations 1 and 4 reveals that SE(x̄) depends on n and s, while SE(Z) depends only on n. Mistaking one SE for another affects hypothesis testing.

Equation 3 is not valid when r = 1 exactly, but this is not a real issue, since in most experimental problems r = 1 exactly is not achievable. For the most part in many analytical studies (e.g., material testing, paternity tests, analytical chemistry, etc.), an accuracy of 99.9999% is accepted as valid quantitative work. Approximating 1 as 0.999999 gives

Z = ½ ln(1.999999/0.000001) ≈ 7.25.

The error in r is about 0.000001 and negligible. Any concern about a Z score exploding when r = 1 is unwarranted. As will be shown, there are many statistical tests wherein a Fisher Transformation must be used.

According to Faller (6), once r values are converted into Z scores, an arithmetic mean Z score can be computed. A Fisher weighted mean value is then obtained by rearranging Equation 3,

r̄ = tanh(Z̄)   (Eq 5)

In Equation 3, the Fisher Transformation is an elementary transcendental function called the inverse hyperbolic tangent function. Equation 5 is therefore a hyperbolic tangent function. On Windows computers, these functions are built into the scientific calculator program, which is accessible by navigating to Start > All Programs > Accessories > Calculator. Microsoft Excel also provides them as the ATANH and TANH functions. Equations 3 and 5 are needed to compute a Fisher weighted mean value. For instance, in Appendix 5 of her graduate thesis, Monika Inde Zsak writes (7):
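Equations 3-5 can be sketched in Python with the built-in atanh and tanh functions; the sample r values below are made up for illustration.

```python
# Sketch of Eqs 3-5: convert r values to Fisher Z scores, average the
# Z scores, and transform the average back to obtain the Fisher mean r.
import math

def fisher_mean_r(r_values):
    z_scores = [math.atanh(r) for r in r_values]  # Eq 3: Z = atanh(r)
    z_mean = sum(z_scores) / len(z_scores)
    return math.tanh(z_mean)                      # Eq 5: r = tanh(Z mean)

rs = [0.2, 0.5, 0.9]
print(sum(rs) / len(rs))   # naive arithmetic mean of raw r values (incorrect)
print(fisher_mean_r(rs))   # Fisher weighted mean
```

Note that the two results differ: the Fisher mean is pulled toward the larger r value, which is exactly the behavior discussed in the Appendix.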

Since the correlation coefficients are not additive, they had to be transformed into a new variable called the Fisher Z value to be able to calculate an average value. The variable is approximately normally distributed, independent of the distribution of the original correlation coefficient. The Z values can further be summed for calculating a mean value. After the average for Z is calculated, the correlation coefficient is simply found by rearranging formula (A.5.2).

Thus, when we find search results matching mean correlation coefficient or average correlation coefficient in the snippet abstract of search results, we had better be sure of the context in which such expressions were used before quoting the original source.

Weighted Statistics

A Fisher Transformation is not the only way of computing weighted mean values of correlation coefficients. Several authors have proposed other types of transformations. Some of these are based on weighting specific statistics using numerical expansions or a combination of techniques. Most of these derivations are limited to a particular experimental setting and are not universally applicable. For instance, back in 1952, Lyerly (8) described a transformation for computing a weighted average correlation coefficient based on a long numerical expression. As Lyerly pointed out, his approach worked well for small n and number of sets N (e.g., n = N = 4), but it was too cumbersome for large n and N. Lyerly recognized that the arithmetic became complex even for n and N values greater than 7. He pointed out that his method works for null cases (null hypothesis testing), but not for non-null cases, and that additional research was needed. In other studies, Reed and Frankham (9, 10) computed mean correlation coefficients, but clarified that these were weighted values:

All statistical tests were carried out with JMP software (version 3.0; SAS Institute) on weighted means and variances.

We contacted Frankham and he kindly redirected us to Reed. Attempts at contacting Reed were unsuccessful.

Other authors have computed mean correlation coefficients using a vector space approach. For instance, Gandolfo et al. (11) computed correlation values from trajectories, sampled from time series data and represented as vectors. Their treatment was based on previous work conducted by other authors (12). As they stated:

Trajectories were sampled at 100 Hz, and the goodness of the performance was quantified with a correlation coefficient, as defined in ref. 2.

In the Gandolfo et al. (11) paper, ref. 2 corresponds to a vector treatment developed by Shadmehr and Mussa-Ivaldi back in 1993 and described in the Appendix section of that second work (12). Gandolfo and Mussa-Ivaldi have co-authored papers (13) while at MIT, apparently at Emilio Bizzi's lab. A list of researchers that have worked at BizziLab is available online (14). Essentially, in Shadmehr and Mussa-Ivaldi's treatment, trajectories were sampled at regular time intervals and the data was mean-centered. Time series were modeled as vectors. Cosines between vectors were computed by normalizing inner products by vector lengths. The cosines were then taken for correlation coefficients. Finally, arithmetic averages of the correlation coefficients were computed.

Instead of misquoting or speculating about Gandolfo et al. (11) and Shadmehr and Mussa-Ivaldi's work (12), we did what any serious researcher would do: we contacted them directly and inquired about their work. Mussa-Ivaldi (Sandro) kindly responded. We pointed out that since correlation coefficients are not additive, it is not correct to compute mean correlation coefficients by averaging individual values. Mussa-Ivaldi accepted this point and graciously allowed us to reproduce his comments, which are given below.

This is indeed a good point. Which has also an intuitive geometrical view: a sum of cosines is not a cosine. So, the mean correlation coefficient cannot be taken to be a correlation coefficient. When considering the correlation coefficient (or the cosine between vector pairs) as a random variable, one can still ask what is the empirical mean of this variable. One cannot go much beyond that because this variable cannot be assumed to be Gaussian since it has a finite support ([-1, +1]). However, in our work we just wanted to have an estimate of how the shape of trajectories was gradually recovered in time. And the data were sufficiently strong to make a statement. Namely, we saw (Fig 11) that after the perturbation was applied the mean correlation coefficient had a consistent upward trend toward +1. This was seen invariably with all subjects and in all experiments. I suppose this was sufficient to make our point. Nevertheless, you are quite right that we cannot conclude that the mean correlation coefficient - as the mean cosine between vector pairs - plotted in that figure corresponds to a true expected value. So, we are indisputably guilty of using a measure that is not quite correct. The Fisher correction would be needed, for example, to run a t-test. However, I think that the Fisher correction would hardly affect our conclusions. Thanks, Sandro

Hi Edel, you are welcome to use our discussion in the tutorial. On my end I'll take advantage of your comments to be more careful discussing the implication of taking arithmetic means! Best regards, Sandro

The Gandolfo et al. (11) and Shadmehr and Mussa-Ivaldi (12) papers are 10 and 17 years old, respectively, and we are now in 2010. It is now academic to argue whether the error incurred by computing arithmetic averages of r values, had it been discovered back then, would have weighed during the peer review process. Mussa-Ivaldi's response sums up the whole point well: a sum of cosines is not a cosine. Over the years we have read many research papers incorrectly reporting mean and standard deviation values of correlation coefficients, perhaps because these were not questioned by reviewers. Other authors publish research but do not explain how something like a standard deviation was computed. To compound the confusion, since a standard error is a special type of standard deviation, the two concepts are often mistaken for each other (15). Some authors even write standard deviation when they are in fact reporting SE values. So, when one reads of the standard deviation of a correlation coefficient, one must inquire how, or for what purpose, those standard deviation or SE values were computed. Professor Don Zimmerman, a leading expert in statistics, agrees with us on this. In two of many emails we have exchanged, he correctly asserts:

Edel, I agree that statistics like standard deviations of correlation coefficients are misused a lot in published articles and that it would be a good idea for authors to find some consistent way of reporting their central tendency and variability. I think that the geometric analogy is very helpful when looking at properties of correlation. I used the vector space framework quite a bit in some early papers relating to properties of true, error, and observed scores in classical test theory. Of course that theory is all about matters of correlation through and through. Incidentally, in the geometric framework the standard deviation is the length of a centered vector, just as the correlation is the cosine of the angle between centered vectors. So it seems that assuming the standard deviation of correlation coefficients is meaningful would be the same as assuming that the collection of correlation coefficients (cosines) itself is a vector space. I have a suspicion that mathematically that is not true. Please feel free to point readers to the Psicologica paper in any way you wish. Sincerely, Don

Hi, Edel, Thanks for sending that link. Those standard deviations were obtained by simply pooling all the r's and calculating the sd in the usual way. You are probably right that it is not a legitimate procedure. Best wishes, Don

Students should be careful before quoting a scientific work in a blog, term paper, or thesis without reading and researching the original source completely. Students should also be aware that scientists can make incorrect assumptions and still obtain good experimental results, but such results do not necessarily validate a faulty theory. In this regard, don't assume that because Google searches in EXACT mode match many scientific papers containing average correlation coefficients, mean correlation coefficients, or standard deviation of a correlation coefficient, this validates a particular theory or preconceived ideas you might have nurtured. It should be underscored that an EXACT search is not a match for literal phrases, but for term sequences, since term matching is conditioned by parsing rules.

Furthermore, scientists are human, make mistakes, and learn from them. That's a byproduct of the Scientific Method. Certainly those mistakes do not necessarily amount to what is commonly deemed quack science. What does amount to quack science is not using the Scientific Method and insisting on misinforming with wrong theories and half-true arguments, despite the amount of evidence to the contrary. Once exposed, those doing quack science often do not accept the evidence, do not learn from their mistakes, and never make an effort to admit anything. Instead, they often mount a misinformation campaign to hide their errors.

Section Two

Statistical Applications of Standard Errors and Correlation Coefficients

As mentioned earlier, computing SE values serves a purpose. These are not computed for the sake of decoration. For testing a population whose parameters are known, a Z-test can be conducted. As it is quite unusual to know the entire population, a t-test is frequently used instead. Both tests serve similar purposes: an SE is calculated and a t or Z value computed. The computed value is compared against a value from a statistical table at a given confidence level. For instance, at the 95% (α = 0.05) confidence level, Z_table = 1.96. If a null hypothesis was made and Z_calculated ≥ Z_table = 1.96, the hypothesis is rejected; otherwise it is accepted. In the following sections we describe several applications.

As noted by Lu (16), a correlation coefficient is not normally distributed and its variance is not constant, but dependent on both the sample size and the true correlation coefficient (ρ) of the entire population. Consequently, confidence intervals (CI) for an r value or a mean value cannot be computed directly from a distribution of r values. An r to Z transformation is needed. Since from Equation 4 SE(Z) = 1/√(n − 3),

CI = Z_r ± Z_(α/2)/√(n − 3)   (Eq 6)

The procedure for computing a CI has been described by Lu, Stevens, and elsewhere (16, 17). [Note: Lu (16) refers to this SE as a sample standard deviation!] The following three examples are adapted from Stevens (17).

Example 1. Let n = 100 and r = 0.35. Calculate the corresponding CI at a 95% (α = 0.05) confidence level (Z = 1.96).

Step 1. Convert r to Z. Using Equation 3, Z_r = 0.3654.

Step 2. Compute both extremes of the CI: 0.3654 ± 1.96/√97 = 0.3654 ± 0.1990, i.e., 0.1664 to 0.5644.

Step 3. Convert both extremes back to r values with Equation 5: tanh(0.1664) = 0.16 and tanh(0.5644) = 0.51.

Thus, given a sample correlation coefficient of 0.35, there is a 95% probability that the true value of the population correlation coefficient, ρ, falls in the range from 0.16 to 0.51; i.e., 0.16 ≤ ρ ≤ 0.51. Therefore, at that probability level, ρ is indistinguishable from any value within that range obtained from samples of the same size. In general, when the CI includes 0, it is said that ρ is not statistically different from zero.
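Example 1 can be reproduced in a few lines of Python; this is a sketch following the steps above.

```python
# Worked version of Example 1 (n = 100, r = 0.35, 95% CI).
import math

n, r, z_crit = 100, 0.35, 1.96
z_r = math.atanh(r)                     # Step 1: r -> Z (0.3654)
se_z = 1 / math.sqrt(n - 3)             # Eq 4: SE of Z
z_lo = z_r - z_crit * se_z              # Step 2: CI limits in Z units
z_hi = z_r + z_crit * se_z
r_lo, r_hi = math.tanh(z_lo), math.tanh(z_hi)  # Step 3: back to r units
print(round(r_lo, 2), round(r_hi, 2))   # 0.16 0.51
```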

Stevens (17) has described these types of tests very well: Significance tests of correlation coefficients are generally very similar to tests of means. One difference is that the sampling distribution of r is more complex than sampling distributions for means or mean differences. The shape of the sampling distribution of r tends slowly towards the normal distribution as n increases. With an n of 3 or more, a t-test can be used to test whether a given observed r is significantly different from a hypothesized population correlation (ρ) of zero. When ρ = 0, the formula for the test is

t = r √[(n − 2)/(1 − r²)]   (Eq 7)

However, if ρ ≠ 0, Stevens recommends (17) an r to Z transformation: When ρ ≠ 0, the sampling distribution of r is very skewed. This problem is minimized through use of Fisher's r to Z transformation.

Example 2. Let's revisit the previous example, where n = 100 and r = 0.35. The hypothesis now is that this r value is not statistically different from a hypothesized correlation coefficient of ρ = 0.50. Since ρ ≠ 0, instead of Equation 6 we conduct a Z-test at a 95% (α = 0.05) confidence level (Z = 1.96), as follows.

Step 1. Make the null hypothesis (H0) that r = 0.35 is not statistically different from a hypothesized correlation coefficient of ρ = 0.50.

Step 2. Convert the r values to Z values. Thus, Z_r = 0.3654 and Z_ρ = 0.5493.

Step 3. Compute a statistical Z value as Z = (Z_r − Z_ρ)√(n − 3) = (0.3654 − 0.5493) × √97 = −1.8112.

In this example, |Z calculated| = |−1.8112| = 1.81. Since in this case |Z calculated| < |Z table| (1.81 < 1.96), we cannot reject H0. Therefore r = 0.35 does not differ significantly from the hypothesized value of ρ = 0.50. We already knew this from Example 1.
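The same steps in code; a sketch of Example 2.

```python
# Worked version of Example 2: test r = 0.35 (n = 100) against a
# hypothesized population correlation rho = 0.50.
import math

n, r, rho = 100, 0.35, 0.50
z_r, z_rho = math.atanh(r), math.atanh(rho)    # 0.3654 and 0.5493
z_calc = (z_r - z_rho) / (1 / math.sqrt(n - 3))
print(round(z_calc, 2))  # -1.81: |Z| < 1.96, so H0 cannot be rejected
```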

The standard procedure for testing whether any two correlation coefficients r1 and r2, from different samples, are significantly different from one another is now described.

Example 3. A researcher wants to test whether the previous r value of 0.35, for a sample of 100 observations, is significantly different from an r value of 0.25 computed from a second sample of 75 observations, at the 95% (α = 0.05) confidence level (Z = 1.96).

Step 1. Make the null hypothesis (H0) that r1 = 0.35 and r2 = 0.25 are not statistically different.

Step 2. Convert the r values to Z scores and compute the difference between these. Thus, Z1 = 0.3654, Z2 = 0.2554, and Z1 − Z2 = 0.1100.

Step 3. Compute a Z score by dividing the difference by the pooled standard error and take the absolute value:

Z = (Z1 − Z2)/√[1/(n1 − 3) + 1/(n2 − 3)] = 0.1100/0.1556 = 0.71

Since in this case |Z calculated| < |Z table| (0.71 < 1.96), we cannot reject H0. The difference between r1 and r2 is not significant at the 95% confidence level. When H0 is accepted for these types of tests, r1 and r2 are not statistically different. For those samples, it is risky to say that one r value is better than the other, whatever the connotation of better.
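A sketch of Example 3 in code, using the pooled standard error of the two Z scores.

```python
# Worked version of Example 3: compare r1 = 0.35 (n1 = 100) with
# r2 = 0.25 (n2 = 75).
import math

n1, r1, n2, r2 = 100, 0.35, 75, 0.25
z1, z2 = math.atanh(r1), math.atanh(r2)             # 0.3654 and 0.2554
se_pooled = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # pooled SE of Z1 - Z2
z_calc = abs(z1 - z2) / se_pooled
print(round(z_calc, 2))  # 0.71: below 1.96, so H0 is not rejected
```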

We want to conclude this section by highlighting the importance of using supplementary information, like scatterplots, before interpreting correlation coefficients. As noted by Dallal (18):

To further illustrate the problems of attempting to interpret a correlation coefficient without looking at the corresponding scatterplot, consider this set of scatterplots, which duplicates most of the examples from pages 78-79 of Graphical Methods for Data Analysis by Chambers, Cleveland, Kleiner, and Tukey. (*). Each data set has a correlation coefficient of 0.7. The moral of these displays is clear: ALWAYS LOOK AT THE SCATTERPLOTS! The correlation coefficient is a numerical summary and, as such, it can be reported as a measure of association for any batch of numbers, no matter how they are obtained. Like any other statistic, its proper interpretation hinges on the sampling scheme used to generate the data. Source: http://www.jerrydallal.com/LHSP/corr.htm (*) For a list of publications, see http://stat.bell-labs.com/cm/ms/departments/sia/wsc/publish.html

Figure 1. Dissimilar scatterplots with the same correlation coefficient.

That sums up the whole point pretty well. Do not ignore the visual power of scatterplots when computing correlation coefficients.

In a 7/16/2010 post, a search marketer reported an r value of 0.67 and stated:

If 1 is perfectly correlative, then 0.67 is certainly a strong correlative relationship and a figure of some interest, when we consider there are a couple hundred factors that reportedly contribute to rank. (http://searchenginewatch.com/3641002).

This raises the question of how to characterize correlation strengths. Several attempts have been made at classifying r values as weak, poor, moderate, strong, or very strong using scales of correlations. Table 1 lists some of these scales, obtained from several online resources.

Table 1. Some Scales of Correlation Strengths

Scale A:
  -1.0 to -0.5 or 0.5 to 1.0   Strong
  -0.5 to -0.3 or 0.3 to 0.5   Moderate
  -0.3 to -0.1 or 0.1 to 0.3   Weak
  -0.1 to 0.1                  None or very weak

Scale B:
  |r| > 0.7         Strong
  0.3 < |r| < 0.7   Moderate
  0 < |r| < 0.3     Weak

Scale C:
  Greater than 0.80   Strong
  0.80 to 0.50        Moderate
  Less than 0.50      Weak

Scale D:
  0.9 to 1.0   Very strong
  0.7 to 0.9   Strong, high
  0.4 to 0.7   Moderate
  0.2 to 0.4   Weak, low
  0.0 to 0.2   Very weak

Resources:

Resource http://www.experiment-resources.com/statistical-correlation.html

http://www.sjsu.edu/faculty/gerstman/StatPrimer/correlation.pdf

http://mathbits.com/mathbits/tisection/statistics2/correlation.htm

http://www.bized.co.uk/timeweb/crunching/crunch_relate_expl.htm

The problem with these scales is that their boundaries are often defined using subjective arguments, not to mention that not all researchers agree with using such boundaries, or scales at all. Note that these scales do not agree. For instance, the strength ascribed to a value of 0.67 would be considered moderate in Scales B, C, and D, but strong in Scale A. Additionally, while in Scale A a value of 0.5 is borderline between strong and moderate, in Scale C it is borderline between moderate and weak. Which of the above scales is correct?

Without supplementary information, relying on these scales can be misleading. First, as shown in the previous section, looking at a single r value says nothing about whether the data describes a compact or sparse cluster of points, or whether there is a nonlinear trend or outliers present in the data. Second, samples of different sizes (n = 3, 10, 100, 1,000,000, etc.) can all have the same correlation value. By just looking at an r value, we cannot say that the strength of correlation weighs the same or is equally meaningful across sample sizes. Third, if for an r value a confidence interval test shows that the interval includes zero, the reported r value is not statistically different from zero, in which case the degree of strength ascribed to the r value might be meaningless. Finally, there is the question of the fraction of explained and unexplained variations that can be accounted for by the regression model. This is addressed with the Coefficient of Determination, R², defined as R² = r². An R² value gives the fraction of variations in the dependent variable (y) that can be explained by variations in the independent variable (x). Thus 1 − R² gives the fraction of variations that cannot be explained by the model. For r = 0.67, R² = 0.4489, and about 45% of the variations in y can be explained by the regression model while 55% cannot be explained by the model.
I'm not sure that a regression model wherein 55% of the variations remain unexplained could be trusted as a forecasting model, or for making comparisons with something as dynamic as search engine variables and Web variables. The same applies to a reported r value of 0.5, wherein 75% of the variations remain unexplained. In my opinion, using correlation tables to classify correlation strengths associated with linear models as strong or moderate, with such large variations, might be risky. Instead of relying on correlation scales, I use R² values along with other statistics (e.g., Chi Square) and supplementary information as general guidelines.
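The explained/unexplained split above is a quick check in code:

```python
# Coefficient of determination R^2 = r^2 and the unexplained fraction
# 1 - R^2 for the two r values discussed above.
for r in (0.67, 0.50):
    r2 = r ** 2
    print(r, round(r2, 4), round(1 - r2, 4))
# r = 0.67 -> about 45% explained, 55% unexplained
# r = 0.50 -> 25% explained, 75% unexplained
```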

The problem with correlation strength scales is that they say nothing about how the size of a sample impacts the significance of a correlation coefficient. This very important issue is now addressed. Consider three different correlation coefficients: 0.50, 0.35, and 0.17. Assume that we want to test whether there is a significant relationship between the two variables at hand. The null hypothesis (H0) to be tested is that these r values are not statistically different from zero (ρ = 0). How to proceed? As recommended by Stevens (17), for ρ = 0, H0 can be tested using a two-tailed (i.e., two-sided) t-test at a given confidence level, usually the 95% level. If t_calculated ≥ t_table, H0 is rejected. However, if t_calculated < t_table, H0 is not rejected and there is no significant correlation between the variables. Here t_calculated is computed as r/SE_r = r·√[(n − 2)/(1 − r²)], while t_table values are obtained from the literature (http://en.wikipedia.org/wiki/Student%27s_t-distribution#Table_of_selected_values). Table 2 summarizes the results of testing the null hypothesis at different sample sizes.

Table 2. H0 tests at different sample sizes (two-tailed, 95% confidence).

n    df = n - 2   r      SEr    t(calc)   t(0.95)   Reject H0 (ρ = 0)?
5    3            0.50   0.50   1.000     3.182     don't reject
10   8            0.50   0.31   1.633     2.306     don't reject
12   10           0.50   0.27   1.826     2.228     don't reject
14   12           0.50   0.25   2.000     2.179     don't reject
20   18           0.50   0.20   2.449     2.101     reject
30   28           0.50   0.16   3.055     2.048     reject
40   38           0.50   0.14   3.559     2.024     reject
50   48           0.50   0.13   4.000     2.011     reject
5    3            0.35   0.54   0.647     3.182     don't reject
10   8            0.35   0.33   1.057     2.306     don't reject
12   10           0.35   0.30   1.182     2.228     don't reject
14   12           0.35   0.27   1.294     2.179     don't reject
20   18           0.35   0.22   1.585     2.101     don't reject
30   28           0.35   0.18   1.977     2.048     don't reject
40   38           0.35   0.15   2.303     2.024     reject
50   48           0.35   0.14   2.589     2.011     reject
5    3            0.17   0.57   0.299     3.182     don't reject
10   8            0.17   0.35   0.488     2.306     don't reject
12   10           0.17   0.31   0.546     2.228     don't reject
14   12           0.17   0.28   0.598     2.179     don't reject
20   18           0.17   0.23   0.732     2.101     don't reject
30   28           0.17   0.19   0.913     2.048     don't reject
40   38           0.17   0.16   1.063     2.024     don't reject
50   48           0.17   0.14   1.195     2.011     don't reject

The table addresses the sample size at which an r value is high enough to be statistically significant. For n = 14, all three r values (0.50, 0.35, and 0.17) are not statistically different from zero. For n = 30, r = 0.50 is statistically different from zero while r = 0.35 and r = 0.17 are not. Conversely, r = 0.50 is not statistically different from zero when n ≤ 14, while r = 0.35 is not different from zero when n ≤ 30. Finally, r = 0.17 is not statistically different from zero at any of the sample sizes tested.
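The t statistics in Table 2 can be regenerated with a short script. The critical t values are hardcoded from the table above; computing them directly would require a routine such as scipy.stats.t.ppf.

```python
# Regenerate the t(calc) column of Table 2 and the reject/don't-reject
# verdicts, using t = r * sqrt((n - 2) / (1 - r^2)) as in the text.
import math

def t_calc(r, n):
    return r * math.sqrt((n - 2) / (1 - r ** 2))

t_crit = {5: 3.182, 10: 2.306, 12: 2.228, 14: 2.179,
          20: 2.101, 30: 2.048, 40: 2.024, 50: 2.011}  # two-tailed, 95%

for r in (0.50, 0.35, 0.17):
    for n in sorted(t_crit):
        verdict = "reject" if t_calc(r, n) >= t_crit[n] else "don't reject"
        print(f"r = {r:.2f}, n = {n:3d}: t = {t_calc(r, n):.3f} -> {verdict}")
```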

Section Three

Recent Advances on Correlation Coefficients

In 2003, Zimmerman, Zumbo, and Williams (19) reported that there are situations wherein r to Z conversions produce results that deviate from expected values. As Zimmerman pointed out to us, this occurs, for instance, in non-zero (ρ ≠ 0) cases or when bivariate normality is violated. This is not a limitation of the r to Z conversion itself, which is no more than an elementary transcendental function. Unlike the sample mean, which is an unbiased estimator, r is a biased estimator of a skewed bivariate distribution. Therefore, the bias observed is a property of the mean of sample correlation coefficients. As correctly asserted by Zimmerman et al. (19),

It should be emphasized that this bias is a property of the mean of sample correlation coefficients and is distinct from the instability in the variance of sample correlations near 1.00 that led Fisher to introduce the so called r to Z transformation.

To correct for the inherent bias, Fisher (20) and Olkin-Pratt (21) corrections have been suggested. These corrections can be applied to either individual r values or mean values. According to theory, the absolute value of this bias is

|bias| = |ρ|(1 − ρ²)/(2(n − 1)) for the Fisher correction

|bias| = |ρ|(1 − ρ²)/(2(n − 3)) for the Olkin-Pratt correction

Differentiating either expression with respect to ρ and setting the derivative to zero reveals that the bias is a maximum at ρ = 1/√3 ≈ 0.58. A plot of ρ values vs. the bias confirms this, as can be seen from Figure 2.

Figure 2. Profile curves for the Fisher and Olkin-Pratt corrections for n = 30. The bias reaches a maximum value at ρ ≈ 0.58.

In this case we used n = 30. As n increases, n >> 3, and in theory identical curves should be obtained. Since this bias is a property of correlation coefficients, treating a mean correlation coefficient and its standard error as one would treat a sample mean and its standard error is contraindicated.
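Assuming the approximate bias expressions above (the ρ(1 − ρ²) profile scaled by 2(n − 1) for Fisher and 2(n − 3) for Olkin-Pratt), the location of the maximum can be confirmed numerically; the grid search below stands in for the derivative argument, and the location does not depend on n.

```python
# Numerical sketch of the bias profile; the maximum should sit near
# rho = 1/sqrt(3) ~ 0.577 regardless of n.
n = 30
rhos = [i / 1000 for i in range(1001)]
fisher_bias = [rho * (1 - rho ** 2) / (2 * (n - 1)) for rho in rhos]
peak_rho = rhos[fisher_bias.index(max(fisher_bias))]
print(peak_rho)  # close to 1/sqrt(3) = 0.577...
```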

Appendix

It is fairly easy to show that an arithmetic average or a standard deviation is not computable directly from m numbers of r values. For centered data, r is a cosine value:

r = (x · y)/(‖x‖ ‖y‖) = cos θ

Adding, subtracting, or averaging any two cosines does not result in a new cosine value. For instance, for any two values r1 = cos θ1 and r2 = cos θ2,

cos θ1 + cos θ2 = 2 cos[(θ1 + θ2)/2] cos[(θ1 − θ2)/2]

which is a product of cosines scaled by 2, not itself a cosine.

For m > 2, the arithmetic is far more complex. The error introduced can be nontrivial. For example, let r1 = 0.877 and r2 = 0.577. One might think that the average is (0.877 + 0.577)/2 = 0.727. Converting r1 and r2 to Z scores and averaging gives Z̄ = 1.010. Converting this result back gives a mean value of tanh(1.010) ≈ 0.766, confirming that in this case the mean is skewed toward r1.
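The Appendix example can be checked directly:

```python
# Averaging r1 = 0.877 and r2 = 0.577 naively versus through Fisher Z scores.
import math

r1, r2 = 0.877, 0.577
naive = (r1 + r2) / 2                           # 0.727
z_mean = (math.atanh(r1) + math.atanh(r2)) / 2  # about 1.010
fisher = math.tanh(z_mean)                      # about 0.766, toward r1
print(round(naive, 3), round(z_mean, 3), round(fisher, 3))
```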

If no other information is available, Fisher's Z Transformation is one of several weighting strategies recommended in the literature for computing weighted correlations. However, in a recent work (22), we covered the limitations of this transformation and proposed a new weighted average that makes its use unnecessary. We also proved that our approach adds a new discriminatory level to the analysis by weighting individual correlations in terms of the variance of the dependent variable. The proposed average seems to be more discriminatory than Fisher's Z Transformation and other weighting strategies. If variance data is not known, one has no option but to report averages based on the transformation.

References

1. Garcia, E. (2010). A Tutorial on Standard Errors. http://www.miislita.com/information-retrieval-tutorials/a-tutorial-on-standard-errors.pdf
2. Sphinn (2010). SEO is Mostly a Quack Science. http://sphinn.com/story/151848/#77530
3. Garcia, E. (2010). Beware of SEO Statistical Studies. http://irthoughts.wordpress.com/2010/04/23/beware-of-seo-statistical-studies/
4. StatSoft Textbook: Basic Statistics. http://www.statsoft.com/textbook/basic-statistics/#Correlationso
5. Wikipedia. Fisher Transformation. http://en.wikipedia.org/wiki/Fisher_transformation
6. Faller, A. J. (1981). An Average Correlation Coefficient. Journal of Applied Meteorology, Vol. 20, pp. 203-205. http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450(1981)020%3C0203:AACC%3E2.0.CO;2
7. Zsak, M. I. (2006). Decision Support for Energy Technology Investments in Built Environment. Master Thesis, Norwegian University of Science and Technology. http://www.iot.ntnu.no/users/fleten/students/tidligere_veiledning/Zsak_V06.pdf
8. Lyerly, S. (1952). The Average Spearman Rank Correlation Coefficient. Psychometrika, Vol. 17, No. 4. http://www.springerlink.com/content/024876247x874885/fulltext.pdf
9. Reed, D. H., Frankham, R. (2001). How Closely Correlated Are Molecular and Quantitative Measures of Genetic Variation? A Meta-Analysis. Evolution, 55(6), pp. 1095-1103. http://www.life.illinois.edu/ib/451/Frankham%20(2001).pdf
10. Reed, D. H., Frankham, R. (2003). Correlation between Fitness and Genetic Diversity. Conservation Biology, Vol. 17, No. 1, pp. 230-237. http://biology.ucf.edu/~djenkins/courses/biogeography/readings/ReedFrankham2003.pdf
11. Gandolfo, F., Li, C.-S. R., Benda, B. J., Schioppa, C. P., Bizzi, E. (2000). Cortical Correlates of Learning in Monkeys Adapting to a New Dynamical Environment. Proceedings of the National Academy of Sciences, Vol. 97, No. 5. http://www.pnas.org/content/97/5/2259.full.pdf#page=1&view=FitH
12. Shadmehr, R., Mussa-Ivaldi, F. A. (1994). Adaptive Representation of Dynamics during Learning of a Motor Task. The Journal of Neuroscience, 14(5), pp. 3208-3224. http://www.jneurosci.org/cgi/reprint/14/5/3208.pdf
13. Mussa-Ivaldi, F. A., Gandolfo, F. (1993). Networks That Approximate Vector-Valued Mappings. IEEE Int. Conf. Neural Networks, pp. 1973-1978.
14. Bizzi, E. (2010). Current Lab Members. http://web.mit.edu/bcs/bizzilab/members/
15. Altman, D. G., Bland, J. M. (2005). Standard Deviations and Standard Errors. BMJ, 331:903, doi:10.1136/bmj.331.7521.903. http://www.bmj.com/cgi/content/extract/331/7521/903
16. Shen, D., Lu, Z. Computation of Correlation Coefficient and Its Confidence Interval in SAS. http://www2.sas.com/proceedings/sugi31/170-31.pdf
17. Stevens, J. J. (1999). EDPSY 604: Significance Tests of Correlation Coefficients. http://www.uoregon.edu/~stevensj/MRA/correlat.pdf
18. Dallal, G. E. (1999). Correlation Coefficients. http://www.jerrydallal.com/LHSP/corr.htm
19. Zimmerman, D. W., Zumbo, B. D., Williams, R. H. (2003). Bias in Estimation and Hypothesis Testing of Correlation. Psicológica, 24, pp. 133-158. http://www.uv.es/revispsi/articulos1.03/9.ZUMBO.pdf
20. Fisher, R. A. (1915). Frequency Distribution of the Values of the Correlation Coefficient in Samples from an Indefinitely Large Population. Biometrika, 10, pp. 507-521. http://digital.library.adelaide.edu.au/dspace/bitstream/2440/15166/1/4.pdf
21. Olkin, I., Pratt, J. W. (1958). Unbiased Estimation of Certain Correlation Coefficients. Annals of Mathematical Statistics, 29, pp. 201-211.
22. Garcia, E. (2011). On the Non-Additivity of Correlation Coefficients. http://www.miislita.com/statistics/on-the-non-additivity-correlation-coefficients.pdf
