the unfamiliar reader, a concise account of the Ganzfeld procedure can be read here. The main point I want to make about the Ganzfeld experiments is that, since 1985, there have been 8 independent, published meta-analyses of Ganzfeld experiments; and with the exception of the 1999 meta-analysis by Julie Milton and Richard Wiseman, all of them have shown statistically highly significant effects, with a replication rate well above what's expected by chance. (The Milton and Wiseman analysis was shown by statistician Jessica Utts, and acknowledged by Wiseman in personal correspondence, July 2011, to have used a flawed estimate of the overall effect size and p-value of the combined results.) The literature also shows rather convincingly, in my view, that the leading Ganzfeld critic, Ray Hyman, has been unable to account for these highly significant effects by prosaic means like publication bias, optional stopping, inadequate randomization of targets, sensory leakage, cheating, decline effects, etc. On this last point, I recommend reading Bem and Honorton's 1994 paper, Bem's reply to Hyman in 1994, and Storm and colleagues' reply to Hyman in 2010.
As an example of the strength of the statistical evidence, let's look at the most recent Ganzfeld meta-analysis by parapsychologist Patrizio Tressoldi, who applies both a frequentist and a Bayesian statistical analysis to 108 Ganzfeld experiments from 1974–2008. All of these experiments were screened for adequate methodological quality, and they show an overall hit rate of 31.5% in 4,196 trials, instead of the 25% hit rate expected by chance. Moreover, using the conservative file-drawer estimate of Darlington and Hayes, the lower bound on the number of unreported experiments needed to nullify this overall hit rate is 357, which is considered implausible by Darlington and Hayes' criterion.

For the frequentist analysis, Tressoldi applied two standard meta-analytic models, namely, a 'fixed-effects' model (which assumes a constant true effect size across all experiments) and a 'random-effects' model (which assumes a variable true effect size across experiments). Whereas a deviation of only ~1.6 standard deviations from the mean is needed for the results of a meta-analysis to achieve statistical significance, the fixed-effects model yields an overall effect that's significant by more than 19 standard deviations from the mean effect of zero, while the random-effects model yields an overall effect more than 6 standard deviations from the mean. The corresponding odds against chance for the fixed-effects model are off the charts, and for the more conservative random-effects model are greater than a billion to 1.

For the Bayesian analysis (which I know Massimo believes is more reliable and valid than the classical approach), Tressoldi follows Rouder and colleagues in considering two hypotheses. The first is the null hypothesis that the true effect size is zero for all experiments, and the
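To make the size of the aggregate deviation concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not Tressoldi's actual meta-analytic models): it pools all 4,196 trials into a single binomial test of the 31.5% hit rate against the 25% chance expectation. Because this naive pooling ignores between-study weighting, the z-score it gives is smaller than the 19-sigma fixed-effects figure, but it still shows how far the aggregate result sits from chance.

```python
from math import sqrt, erfc

# Illustrative pooled binomial test on the figures quoted above
# (a rough sketch, not the fixed- or random-effects models).
n_trials = 4196      # total trials across the 108 screened experiments
hit_rate = 0.315     # observed overall hit rate
chance = 0.25        # hit rate expected by chance (1 target in 4)

se = sqrt(chance * (1 - chance) / n_trials)  # binomial standard error under the null
z = (hit_rate - chance) / se                 # standard normal test statistic
p_one_tailed = 0.5 * erfc(z / sqrt(2))       # upper-tail p-value; erfc avoids underflow

print(f"z = {z:.2f}, one-tailed p = {p_one_tailed:.1e}")
```

This simple pooling treats every trial as exchangeable, which is exactly the assumption the fixed- and random-effects models relax in different ways; it is meant only to show that the overall 6.5-point excess over chance is many standard errors wide.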