
British Journal of Psychology (2002), 93, 487–499 © 2002 The British Psychological Society www.bps.org.uk

The Mind Machine: A mass participation experiment into the possible existence of extra-sensory perception
Richard Wiseman* and Emma Greening
Perrott-Warrick Research Unit, Psychology Department, University of Hertfordshire, UK
This paper describes a mass participation experiment examining the possible existence of extra-sensory perception (ESP). The Mind Machine consisted of a specially designed steel cabinet containing a multi-media computer and large touch-screen monitor. The computer presented participants with a series of videoclips that led them through the experiment. During the experiment, participants were asked to complete an ESP task that involved them guessing the outcome of four random electronic coin tosses. All of their data were stored by the computer during an 11-month tour of some of Britain’s largest shopping centres, museums, and science festivals. A total of 27,856 participants contributed 110,959 trials, and thus the final database had the statistical power to detect the possible existence of a very small ESP effect. However, the cumulated outcome of the trials was consistent with chance. The experiment also examined the possible relationship between participants’ ESP scores and their gender, belief in psychic ability, and degree of predicted success. The results from all of these analyses were non-significant. Also, scoring on ‘clairvoyance’ trials (where the target was selected prior to the participant’s choice) was not significantly different from ‘precognitive’ trials (where the target was chosen after the participants had made their choice). Competing interpretations of these findings are discussed, along with suggestions for future research.

Parapsychologists have carried out a large number of studies examining the possible existence of extra-sensory perception (ESP). One of the principal types of experimental design uses the ‘forced-choice’ procedure, in which participants are asked to guess the identity of hidden ‘targets’ (e.g. the colour of playing cards) that have been randomly selected from a set of alternatives known to participants prior to making their guess (e.g. they are told that the cards will be either red or black).
* Requests for reprints should be addressed to Richard Wiseman, Psychology Department, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK (e-mail: r.wiseman@herts.ac.uk).

These experiments have investigated a wide range of hypotheses (see Palmer, 1978 for a review). Some studies have examined the possible existence of telepathy by having another person, referred to as a ‘sender’, concentrate on targets prior to the participant’s guess. Other work has investigated clairvoyance by having participants attempt to guess the identity of targets that are not known to anyone else (e.g. the order of a deck of cards that have been shuffled and immediately sealed in an envelope). A third set of studies has examined participants’ precognitive abilities by, for example, having them predict the order of a deck of cards, shuffling the cards, and then comparing the predicted order with the actual order. These studies have also examined how ESP scores are affected by different kinds of target material (e.g. symbols vs. words), experimental procedures (e.g. providing feedback to participants about their scores vs. not providing feedback) and individual differences (e.g. those that believe in ESP vs. disbelievers).

Many of the early forced-choice ESP experiments were conducted by Rhine and his colleagues at Duke University in the early part of the last century (see Pratt, Rhine, Smith, Stuart, & Greenwood, 1940/1966). The majority of this work involved participants attempting to guess the order of shuffled packs of cards carrying the image of a star, circle, cross, square, or wavy lines. These studies were often very labour-intensive, and involved data collection and analysis being carried out by hand. Many researchers have argued that the results of these studies support the existence of ESP. For example, Pratt et al. (1940/1966) reviewed the findings from more than 3.6 million guesses made in over 140 forced-choice ESP studies conducted between 1882 and 1939. Many of the studies were independently significant and, as a group, provided strong evidence of above-chance scoring. Likewise, Honorton and Ferrari (1989) presented a meta-analysis of nearly 2 million guesses from precognitive forced-choice experiments conducted between 1935 and 1987. Although the cumulated effect size was small (0.02), it was highly significant (p < 10⁻²⁵).

Recent research has tended to use more automated procedures. For example, Schmidt (1969a) developed an electronic device that randomly selected one of four lamps, prompted participants to indicate which lamp they thought the device had selected, and provided feedback by lighting the target lamp after they had registered their choice. More recently, Honorton (1987) developed ‘ESPerciser’—a computer-based system that presented participants with four on-screen boxes and asked them to guess which one had been randomly selected by the computer. Both systems automatically recorded information about both the selected targets and participant choices. The automated experiments conducted by both Schmidt (1969b) and Honorton (1987) produced highly significant effects.

The results of many experiments also contain evidence of significant internal effects, with, for example, participants who believe in ESP tending to outperform disbelievers (Lawrence, 1993), and trials employing feedback obtaining better results than those giving no feedback (Honorton & Ferrari, 1989). Also, Steinkamp, Milton, and Morris (1998) carried out a meta-analysis of 22 forced-choice ESP studies that had compared scoring between ‘clairvoyance’ trials (where the target was selected prior to the participant’s choice) and ‘precognitive’ trials (where the target was chosen after the participants had made their choice). The cumulated outcome of both trial types was highly significant (precognition trials, p = 9 × 10⁻⁷, effect size = 0.01; clairvoyant trials, p = .002, effect size = 0.009).

However, many forced-choice ESP studies have been criticized on both methodological and statistical grounds. For example, Hansel (1980) and Gardner (1989) have claimed that some of the early card-guessing experiments employed procedures that would have allowed for participant cheating and ‘sensory cueing’ (i.e. participants inadvertently detecting and utilizing subtle signals about the identity of targets). Others have suggested that the highly significant effects from some of the automated studies are invalid, as the experiments used non-random methods to select and order targets (see Hyman, 1981; Kennedy, 1980).

Critics have also pointed to possible problems with the way in which data have been collected and analysed. For example, both Leuba (1938) and Greenwood (1938) discussed the potential dangers of ‘optional stopping’, wherein researchers are able to conclude an experiment when the study outcome conforms to a desired result. Other studies have suffered from the ‘stacking effect’—a statistical artefact that can occur when guesses from many participants are all matched against the same target material (see Pratt, 1954).

The poor levels of methodological and statistical safeguards present in some past studies have been highlighted in two more recent meta-analyses. Honorton and Ferrari (1989) analysed the quality of each study in their database by assigning a point for each of 8 major methodological criteria, and reported that the studies obtained a mean rating of just 3.3. Likewise, Steinkamp et al. (1998) assigned clairvoyant studies a methodological quality rating of between 1 and 19; the studies received an average rating of just 10.6. In reply, some parapsychologists have questioned the validity of these criticisms by, for example, arguing that many of the alleged flaws are unlikely to account for reported effects (see Palmer, 1986a for a summary) and noting the non-significant correlations between the presence/absence of methodological safeguards and study outcome (see Honorton & Ferrari, 1989; Steinkamp et al., 1998).

Many parapsychologists have also reported studies in which participants appeared to be able to psychically determine the outcome of a computerized RNG, using both clairvoyant (see Haraldsson, Houtkooper, Skulason, Thorisson, & Haraldsson, 1987; Schmidt, 1969a) and precognitive (see Broad & Shafer, 1982; Honorton, 1987; Radin & Bosworth, 1991; Radin & Hoeltje, 1994; Schmidt, 1969b) designs.

Researchers have also carried out large-scale ESP experiments using a wide cross-section of the public making their responses in non-laboratory environments. For example, in 1937, the Zenith Radio Corporation asked their listeners to guess the order of a short series of randomly ordered targets. Over 46,000 people responded, making more than 1 million guesses. Likewise, in 1961, Rhine had over 29,000 members of the public attempt a precognitive task in which they were asked to predict the order of random number sequences. A recent meta-analysis of the large-scale experiments carried out via the mass media indicated a low effect size (−.0046) whose overall cumulative outcome did not differ from chance expectation (Milton, 1994). However, Milton also noted that these findings are difficult to interpret. Six of the eight studies contained in the meta-analysis involved large groups of participants attempting to guess a single target sequence. The results of such studies have the potential to produce spuriously high or low ESP study outcomes via the ‘stacking effect’, and thus a statistical correction (known as the ‘Greville correction’) was applied to the data. However, this correction can, under certain circumstances, severely reduce the effect size of a study and thus mask evidence for psi.

The current authors aimed to help resolve this debate by devising a novel procedure for carrying out a large-scale forced-choice ESP experiment. The Mind Machine consisted of a specially designed steel cabinet containing a multi-media computer and large touch-screen monitor. The computer presented participants with a series of videoclips that led them through the experiment. During the experiment, participants were asked to complete a forced-choice ESP task that involved trying to guess whether a ‘virtual’ coin toss would result in a ‘head’ or a ‘tail’. The outcome of each virtual coin toss was determined by the computer’s random number generator (RNG). All of their data were stored by the computer during an 11-month tour of some of Britain’s largest shopping centres, museums, and science festivals.

The Mind Machine built upon this previous research, and its methodology was developed for several reasons. First, it is widely acknowledged that carrying out large-scale forced-choice ESP experiments is usually problematic, as they tend to be time-consuming and tedious for both experimenters and participants alike (Broughton, 1991). The Mind Machine overcame these problems by creating a totally automated experiment and by having each participant only contribute a very small number of trials.

Second, the study had the potential to collect a huge amount of data from thousands of participants, and thus possessed the statistical power to reliably detect the small effect sizes reported in many previous forced-choice ESP studies.

Third, as noted above, many previous forced-choice ESP studies have been criticized on various methodological grounds, including sensory shielding, opportunities for participant cheating, and poor randomization. The Mind Machine was designed to minimize these potential problems. For example, the computer running the experiment was secured inside a locked cabinet that could not be accessed by participants. Also, possible randomization problems were minimized by having the target selection carried out by a pseudo-random number generator that had been fully tested prior to use.

Fourth, as noted above, critics have correctly noted that some previous forced-choice ESP studies have suffered from potential statistical problems, including optional stopping and stacking effects. The Mind Machine was designed to overcome these artefacts by specifying the size of the final database in advance of the experiment, and by generating a new target sequence for each participant.

Fifth, the Mind Machine methodology could incorporate many of the factors that have positively correlated with study outcome in meta-analyses of previous forced-choice studies. Honorton and Ferrari’s (1989) meta-analysis of precognition forced-choice studies noted that several factors were significantly associated with increased ESP scoring. For example, experiments testing participants individually had significantly higher effect sizes than those employing group testing, and studies providing immediate, trial-by-trial feedback to participants obtained significantly higher effect sizes than those giving delayed or no feedback. Many of these patterns were also found in the meta-analysis carried out by Steinkamp et al. (1998). Incorporating such ‘ESP conducive’ procedures was important, given that a previous meta-analysis of forced-choice ESP studies conducted via the mass media under non-conducive conditions had resulted in a cumulative outcome consistent with chance (Milton, 1994). Interestingly, the two studies in that meta-analysis that did not have to be subjected to the Greville correction (Brier, 1967; Rhine, 1962) both obtained significant psi effects, but none of the studies gave participants instant feedback about their ESP scores, and many supplied no feedback at all. To maximize the potential of obtaining evidence for ESP, the Mind Machine tested participants individually and provided them with immediate, trial-by-trial feedback.

On the basis of the results from previous forced-choice ESP studies, it was predicted that participants’ overall ESP scores would differ significantly from mean chance expectation. The Mind Machine methodology also enabled an attempt to replicate one of the most reliable internal effects in the forced-choice ESP literature. A meta-analysis carried out by Lawrence (1993) revealed that participants who believed in psychic ability tended to obtain significantly higher forced-choice ESP scores than disbelievers.

To examine this hypothesis, participants in the present experiment were asked to indicate whether they believed in the existence of psychic ability prior to completing the ESP task. On the basis of previous work, it was predicted that the ESP scores of believers would be significantly higher than disbelievers’ scores.

As noted above, several studies have examined potential differences between ESP scoring on clairvoyant trials (i.e. where the target is selected prior to the participant’s choice) and precognition trials (i.e. where the target is chosen after the participants have made their choice). These studies have obtained mixed results. Although a few studies have reported precognition trials outscoring clairvoyance trials (see Freeman, 1962; Honorton, 1987), the meta-analytic review conducted by Steinkamp et al. (1998) showed no significant difference between the two types of trials. The Mind Machine followed up on this work by comparing the scoring of clairvoyant trials (where the outcome of half of the trials was determined prior to participants indicating their guess) with precognition trials (where the outcome of the other half was determined after they had indicated their decision).

The study also investigated four other internal effects that have received mixed support in past research. A small number of past forced-choice studies have investigated whether participants’ predictions about their performance on a forthcoming ESP task are related to their actual performance. Some of these experiments have shown a significant relationship between predicted and actual performance (see Musso, 1965; Schmeidler, 1971; Smith, Wiseman, Machin, Harris, & Joiner, 1997), but others have failed to obtain this effect (see Palmer, 1978 for a review). Participants in the Mind Machine study were asked to predict how many coin tosses they believed they would guess correctly, and the relationship between their predicted and actual success was examined.

Previous forced-choice experiments have also examined the potential relationship between ESP scoring and gender. Again, these studies have obtained mixed results, with some reporting females outperforming males, and others showing no significant differences (see Palmer, 1978 for a review). The Mind Machine experiment likewise examined the potential relationship between gender and ESP scoring.

Finally, the small number of forced-choice studies that have examined participants’ ESP performance over several sessions have tended to report a decline effect (see Honorton & Carbone, 1971; Humphrey, 1945). At the start of the Mind Machine experiment, participants were asked to indicate whether this was the first time they had taken part in the experiment, and thus it was possible to compare the ESP scoring of ‘novices’ with that of participants who had taken part in the study before. The experiment therefore explored whether participants taking part in the study for the first time would obtain significantly different ESP scores to those repeating the experiment.

Apparatus

Hardware
The experiment was carried out on an Omega Science PC with a P200MMX INTEL Processor, Elite TX 512K Motherboard AT, and a 2.5-Gb hard drive. This computer was connected to a 17-inch colour touch-sensitive monitor. Both computer and monitor were housed within a Barcrest steel cabinet specially constructed for high-volume public use. This cabinet measured approximately 1.5 m × 0.5 m × 0.5 m and allowed participants to use the touch-screen monitor, but prevented them from having access to

the computer and its peripherals (e.g. mouse and keyboard) by securing them in the base of the cabinet. The base of the cabinet could be accessed via panels at the front and rear of the cabinet. Both panels were always locked whenever the cabinet was left in a public venue, and the keys were retained by the experimenters throughout the project. Each venue provided a continuous power supply. The lead connecting the power supply to the computer was secured in such a way as to prevent it being pulled free from the cabinet. If the supply was turned off, or disconnected at the mains, the computer failed to operate. When the power was reinstated, the computer automatically booted into the Mind Machine program.

Software
The Mind Machine program presented participants with a series of videoclips containing an experimenter (RW), who led them through the experiment. These videoclips were filmed in RW’s laboratory and were accompanied by background music written especially for the program. Some parapsychologists have emphasized the importance of creating forced-choice ESP tasks that participants find both interesting and absorbing. The Mind Machine was able to achieve this by using multi-media videoclips to create an engaging and enjoyable experiment. Taking part in the experiment took approximately two and a half minutes, and the engaging nature of the program is reflected in the fact that only 0.85% of participants did not complete the experiment.

The program initially displayed a videoclip designed to attract participants to the cabinet. This videoclip contained the Mind Machine logo and the phrase ‘Touch the screen to begin’. When participants touched the screen, the computer presented a videoclip of RW welcoming them to the experiment and asking them to first answer four simple questions. The screen was then partitioned into three sections. The top left of the screen displayed short videoclips of RW asking each of the questions. The top right of the screen contained the same question in text form. The bottom half of the screen contained large virtual buttons displaying possible answers to the questions. Participants were first asked whether they were male or female (possible answers: male, female), whether it was the first time that they had taken part in the experiment (yes, no), whether they believed that psychic ability existed (yes, uncertain, no), and how many coin tosses they believed they would guess correctly (0, 1, 2, 3, 4).

The program next randomly determined the order of the two clairvoyant (C) and two precognition (P) trials. This was achieved using an inversive congruential pseudo-random number generator (ICG(2147483647, 0)) obtained from the P-Lab website (http://random.mat.sbg.ac.at/). This PRNG has been fully evaluated for randomness, and the website contains the results of this extensive testing. The PRNG was seeded from the number of milliseconds that had elapsed since the computer system had started, using the Windows ‘GetTickCount’ API function. For this part of the experiment, the PRNG was programmed to return a number between 0 and 1. If this value was less than 0.5, the resulting trial order became P-C-P-C; if the value was greater than, or equal to, 0.5, the order became C-P-P-C. Next, the computer randomly determined if the target for clairvoyant trials (‘TargetC’) would be a ‘Head’ or a ‘Tail’. The PRNG algorithm was programmed to provide an integer between 0 and 99 inclusive, and an even value was coded as a ‘tail’ and an odd value as a ‘head’. The next videoclip showed RW in the centre of the screen, and two large coins in the top left and right of the screen.
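The randomization rules just described can be sketched as follows. Python's standard random module stands in for the fully tested inversive congruential generator, so this is an illustration of the coding scheme only, not the original program (function names are illustrative):

```python
import random

def trial_order(rng: random.Random) -> str:
    # A value below 0.5 gives the order P-C-P-C; otherwise C-P-P-C,
    # mirroring the rule described in the text.
    return "PCPC" if rng.random() < 0.5 else "CPPC"

def coin_target(rng: random.Random) -> str:
    # Draw an integer between 0 and 99 inclusive; an even value codes
    # 'tail' and an odd value codes 'head', as in the original scheme.
    return "tail" if rng.randrange(100) % 2 == 0 else "head"

# The original seeded its generator from milliseconds of system uptime
# (the Windows GetTickCount call); an ordinary seeded generator stands in here.
rng = random.Random()
print(trial_order(rng), coin_target(rng))
```

Note that mapping a uniform 0–99 integer through its parity yields exactly 50 even and 50 odd values, so heads and tails remain equiprobable under the coding.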

The coin on the left of the screen displayed a head, whilst the coin on the right displayed a tail. RW asked participants to touch one of the coins to indicate whether they believed the coin would land heads or tails. The computer recorded the participant’s guess and then randomly determined if the target for precognitive trials (‘TargetP’) was a ‘head’ or a ‘tail’. Again, the PRNG was used to provide an integer between 0 and 99 inclusive, and an even value was coded as a ‘tail’ and an odd value as a ‘head’. For clairvoyant trials, the participant’s choice was compared with TargetC. For precognitive trials, the participant’s choice was compared with TargetP. If the participant’s choice matched the target, the trial was classified as a ‘hit’; otherwise it was classified as a ‘miss’. A videoclip then informed participants whether they had obtained a hit or a miss, and revealed the outcome of the coin toss by presenting them with a large ‘head’ or ‘tail’.

After all four trials had been completed, a videoclip informed participants of their total score. A videoclip then asked participants the percentage of trials they thought that people should get correct by chance alone (0%, 25%, 50%, 75%, 100%). This question was designed to test the notion that participants who disbelieve in the paranormal will outperform believers on tests of probability estimation (see Blackmore & Troscianko, 1985). The results of this question, along with other psychological data, will be reported in a separate paper. Finally, a videoclip thanked participants for taking part and presented them with an opportunity to see a final videoclip that would explain more about the experiment. At the end of each session, the program returned to the initial videoclip containing the Mind Machine logo. The computer also returned to this initial videoclip from any point in the program if the computer screen remained untouched for 25 s.

Participants’ data were stored in two databases that were regularly backed up throughout the duration of the experiment. Before starting the tour, the programme was evaluated on the University campus. Participants reported that they found the programme both compelling and interesting, and that they fully understood both the questions and the ESP task. Clearly, the authors have no way of checking that participants taking part in the actual experiment answered the questions honestly, and it is possible that a minority of individuals may have deliberately entered incorrect data. However, such responses would have simply acted as a source of noise and been unlikely to have obscured general patterns within the data.

Minimizing optional stopping
The Mind Machine experiment could, potentially, suffer from two types of optional stopping. First, participants’ performance on initial ESP trials could influence whether they completed the experiment. For example, participants might obtain two misses on the first two trials, become disappointed, and not complete the remaining two trials. Alternatively, participants might perform well on the first two trials and not complete the remaining trials to avoid the possibility of them obtaining misses. To prevent either scenario from creating a potential artefact, data from every ESP trial were included in the database (i.e. participants did not have to complete all four trials to be included in the database). A second form of optional stopping can occur when experimenters do not set sample sizes in advance of a study, and thus have the potential to terminate the study when the results reach a desired outcome. This potential problem can be avoided by experimenters setting the final sample size in advance of the study, and not examining their data until the experiment has terminated (Milton & Wiseman, 1997).
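The hit/miss bookkeeping described above reduces to a per-trial comparison between guess and target; a minimal sketch (names are illustrative, not taken from the original program):

```python
def score_session(guesses, targets):
    # Count hits: a trial is a 'hit' when the guess matches the target.
    # guesses and targets are equal-length sequences of 'head'/'tail';
    # with four fair trials, chance expectation is two hits.
    if len(guesses) != len(targets):
        raise ValueError("each guess needs a matching target")
    return sum(g == t for g, t in zip(guesses, targets))

# Example: the first and last guesses match their targets, so the score is 2.
print(score_session(["head", "tail", "head", "tail"],
                    ["head", "head", "tail", "tail"]))
```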

Before starting the experiment, the authors decided that each of the participants’ screen presses would constitute a single datapoint (i.e. participants completing the experiment would have provided 9 datapoints—the answers to the initial four questions, the four ESP trials, and the probability question) and that the final database would consist of 250,000 datapoints. In addition, the data from the forced-choice ESP task were not examined until the end of the study.

The tour
The Mind Machine was taken on an 11-month tour of some of Britain’s leading museums, science festivals, and shopping centres (including, for example, the Royal Museum of Scotland (Edinburgh), the Olympia Exhibition Centre (London), the Lakeside Shopping Centre (Essex) and the MetroCentre (Gateshead)). Before embarking on the tour, and at various intervals during the tour, the Mind Machine was placed on campus at the University of Hertfordshire.

Testing randomness
There exists a great deal of debate around the concept of randomness in parapsychology (see Gilmore, 1989). However, for the purposes of ESP testing, most researchers (see Palmer, 1986b) consider it important to demonstrate both equiprobability (i.e. each target should be selected an approximately equal number of times) and intertrial independence (i.e. each target has an equal opportunity of occurring after any other target). Each issue will be discussed in turn.

Results
The final database contained 27,856 participants providing 250,043 datapoints from the five questions and 110,959 ESP trials.

Equiprobability
Chi-squared analyses revealed that the frequency with which the computer selected heads and tails did not differ significantly from chance on any of the four trials, either separately or combined (see Table 1).

Table 1. Number of trials on which the computer selected heads and tails, percentage of heads, chi-square values (with continuity correction) and p-values (two-tailed) for each of the four trials separately and combined

            Heads     Tails     % heads   Chi-square   p (two-tailed)
Trial 1     13,934    14,055    49.78     .51          .47
Trial 2     13,839    13,965    49.77     .56          .45
Trial 3     13,818    13,780    50.07     .05          .82
Trial 4     13,801    13,767    50.06     .04          .84
Combined    55,392    55,567    49.92     .27          .60
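The equiprobability checks in Table 1 are continuity-corrected chi-square tests of a 50/50 split; this sketch (my own helper, not the study's code) reproduces the combined row from the head/tail counts:

```python
import math

def yates_chi_square(heads: int, tails: int):
    # Chi-square test (Yates continuity correction) of the hypothesis
    # that heads and tails are equally likely; returns (chi2, p) where
    # p is the upper tail of the chi-square distribution with 1 df.
    n = heads + tails
    expected = n / 2
    chi2 = 2 * max(0.0, abs(heads - expected) - 0.5) ** 2 / expected
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-square(1 df) survival function
    return chi2, p

# Combined counts from Table 1: 55,392 heads vs. 55,567 tails in 110,959 trials.
chi2, p = yates_chi_square(55_392, 55_567)
print(round(chi2, 2), round(p, 2))  # well below the 3.84 threshold for p = .05
```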

Intertrial dependency
Table 2 contains the results of chi-squares comparing the frequency with which heads and tails were chosen between each of the four trials. The analysis comparing Trial 2 and Trial 3 was significant. Further investigation revealed that this significance was due to the target sequence showing a slight (approximately 1%) tendency to avoid repeating Trial 2’s target on Trial 3. As the vast majority of participants only completed the experiment once, it is highly unlikely that they would be able to detect and utilize this small bias. However, it is theoretically possible that the effect could coincide with a more general response bias and thus lead to artefactual findings. To assess this possibility, it was decided to analyse the data both including and excluding the only trial that could be affected by the bias, namely, Trial 3.

Table 2. Chi-square values (with continuity correction) and p-values (two-tailed, in parentheses) comparing the frequency with which targets were chosen between each of the four trials

            Trial 2      Trial 3       Trial 4
Trial 1     .04 (.83)    .19 (.66)     .009 (.93)
Trial 2                  7.91 (.005)   .06 (.81)
Trial 3                                .16 (.69)

ESP scoring
Table 3 contains the number of participants, number of trials, number of hits, percentage of hits, z-scores, and p-values (two-tailed) for the overall database and each sub-group of data, both including and excluding Trial 3. Table 4 contains the z-scores and p-values (two-tailed) for each of the internal comparisons, both including and excluding Trial 3. None of the analyses was significant.

Discussion
The Mind Machine aimed to obtain evidence for ESP under conditions that minimized the possibility of methodological and statistical flaws. The experiment involved 27,856 participants contributing 110,959 trials, and thus had the statistical power to detect the possible existence of a very small effect. However, the experiment’s overall outcome did not differ from chance, and all of the internal analyses were non-significant. These findings could be interpreted in one of two ways.

First, the experiment may have obtained chance results because the conditions under which it was conducted were not conducive to ESP. Despite designing the experiment to incorporate many of the factors that have positively correlated with study outcome in a meta-analysis of previous forced-choice ESP experiments, there were several differences between the Mind Machine experiment and most laboratory studies. For example, the Mind Machine experiment took place in relatively noisy public spaces compared with the quiet surroundings of a laboratory. Also, unlike the vast majority of laboratory studies, participants taking part in the Mind Machine experiment did not have any contact with a ‘live’ experimenter.

72 49. 1994).75 50.01 (.60) ± .48 .67) ± . the study may have failed to nd evidence of forced-choice ESP because such an effect does not exist.53 (.79 50.55 (.79) .76 50.61) ± .53) .74 (.13 49.26 .92 50. z-scores and p-values (two-tailed) for internal effects both including and excluding Trial 3 All trials p-value (two-tailed) .79 .70 ± .84) Excluding Trial 3 Percentage hitting 49.28 (.20) ± .44 z-score .11 49.65) .45 49.73 49. z-scores and p-values (two-tailed) for the overall database and each sub-group of data.99 50.98 49.24 . precognition Belief in psychic ability: Believers vs.77 50.00 z-score ( p-value) ± .39 (.86 z-score ( p-value) ± . with participants coming across the experiment whilst visiting a public venue.51 (.70) ± 1.06 (.94) ± .61) ± .62 (. Given that the study maximized safeguards against the type .18 ± .27 (.67) ± .98) . disbelievers Predicted success: Below vs. Finally. number of hits.10 49.42 (. Number of participants.07 (.02 (.95) ± . one referee noted that RW is well known to the public for his sceptical attitude towards the paranormal. and suggested that this may have lowered participants’ expectations of demonstrating an ESP effect and affected their subsequent ESP scores.45 (. percentage of hits.08 49.66 Excluding Trial 3 p-value (two-tailed) .26 (. no 1.93 ± . versus being suf ciently motivated to volunteer to take part in a laboratory experiment (see Milton.70 (.88 50.20 (.63 (.39) .46) .75 ± .32) ± . Alternatively.90 49.45 .51 (.79 z-score Trial type: Clairvoyance vs.26 different participant population to laboratory studies. number of trials.86 (.76 ± . both including and excluding Trial 3 All trials Percentage hitting All trials Trial type Clairvoyance Precognition Belief in psychic ability Believers Uncertain Disbelievers Predicted success Below chance Chance Above chance First time? Yes No 49.35 .01 49.86 49.53) ± .43 (.90 50.21 49.79) Table 4. 
above chance First time: Yes vs.496 Richard Wiseman and Emma Greening Table 3.58) ± .48) 1.96 (.34) .45 .
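Each guess in the ESP task is a Bernoulli trial with a hit probability of .5, so the z-scores and two-tailed p-values reported in Tables 3 and 4 follow from the standard normal approximation to the binomial. The sketch below is illustrative only (the hit count shown is hypothetical, not a figure from the study); it also derives the chance distribution for guessing 0–4 of the four coin tosses correctly, the baseline against which participants' predictions of their own success can be judged.

```python
# Illustrative sketch of the conventional statistics used in forced-choice
# ESP research (not the authors' analysis code): a two-tailed binomial
# z-test via the normal approximation, plus the chance distribution for
# guessing k of 4 coin tosses correctly.
from math import comb, erf, sqrt

def z_score(hits: int, trials: int, p: float = 0.5) -> float:
    """z for an observed hit count against binomial chance expectation."""
    return (hits - trials * p) / sqrt(trials * p * (1 - p))

def two_tailed_p(z: float) -> float:
    """Two-tailed p-value under the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical example: 55,400 hits in 110,959 trials (chance = 50%).
z = z_score(55_400, 110_959)
print(f"z = {z:.2f}, p = {two_tailed_p(z):.2f}")

# Chance probability of guessing exactly k of 4 coin tosses correctly.
chance = [comb(4, k) * 0.5 ** 4 for k in range(5)]
print([f"{100 * p:g}%" for p in chance])  # ['6.25%', '25%', '37.5%', '25%', '6.25%']
```

With 110,959 trials the standard error of the hit rate is roughly 0.15%, which is why a database of this size can detect even a very small deviation from the 50% chance baseline.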

Discussion

The Mind Machine experiment was designed to incorporate many of the factors that have positively correlated with study outcome in a meta-analysis of previous forced-choice ESP experiments. A total of 27,856 participants contributed 110,959 trials, and thus the final database had the statistical power to detect the possible existence of a very small effect. However, the cumulated outcome of the trials was consistent with chance, and all of the internal analyses were non-significant.

These findings could be interpreted in one of two ways. First, the study may have failed to find evidence of forced-choice ESP because such an effect does not exist. Given that the study maximized safeguards against the type of methodological and statistical problems associated with many previous forced-choice ESP experiments, the lack of any significant effects would support the notion that the positive findings reported in these previous experiments were spurious.

Alternatively, the null result may be due, in part, to the unusual conditions under which the experiment was conducted. The study drew upon a different participant population to laboratory studies, with participants coming across the experiment whilst visiting a public venue, versus being sufficiently motivated to volunteer to take part in a laboratory experiment (see Milton, 1994). No research has compared the ESP scores of the type of participants taking part in public versus laboratory experimentation. In addition, the Mind Machine experiment took place in relatively noisy public spaces, compared with the quiet surroundings of a laboratory. The notion that ESP performance might be influenced by such environmental factors has been more fully examined, but has obtained mixed results (see Palmer, 1978, for a review). For example, Beven (1947) reported that participants carrying out an ESP card-guessing task in good light outperformed those in a darkened room, whereas Reed (1959) found that different types of background music had no effect on ESP scoring. Finally, one referee noted that RW is well known to the public for his sceptical attitude towards the paranormal, and suggested that this may have lowered participants' expectations of demonstrating an ESP effect and affected their subsequent ESP scores. However, this interpretation of the null effect seems unlikely, given that, as a group, participants indicated that they expected to perform above chance (the approximate percentages of participants indicating that they expected to guess 0, 1, 2, 3 and 4 trials correctly were 4%, 8%, 49%, 25% and 14%, respectively).

Assessing these competing explanations is problematic, because it is difficult to determine whether the differences in conditions between the Mind Machine experiment and laboratory-based research could account for the results obtained in this study. For example, to our knowledge, only one study has examined the relationship between the degree of participant–experimenter contact and forced-choice ESP performance: Steinkamp et al. (1998) rated 22 forced-choice ESP studies on three levels of participant–experimenter contact (direct contact, indirect contact and no contact), and reported that these ratings were not correlated with study outcome. Similarly, it is impossible to assess the possible effect that RW may have had on participants' expectations and ESP scoring. Future work could start to tease apart these competing ideas by designing another Mind Machine experiment to examine the relationship between these variables and ESP performance (e.g. by placing the cabinet in locations that differ in their levels of environmental distraction, or by having the videoclips presented by known sceptics vs. proponents of ESP).

The methodology described in this paper also has several benefits over traditional laboratory-based research:
1. The Mind Machine can gather a huge amount of data in a relatively short period of time, which is especially valuable where small effect sizes are expected.
2. The methodology provides access to a more representative cross-section of the public than the vast majority of laboratory research.
3. The way in which these data are collected allows experimenters to test hypotheses that would be problematic to examine in laboratory-based research (e.g. the almost continuous stream of data would allow investigators to examine time-of-day effects, or performance over a very large number of conditions).
4. This type of mass participation study is likely to prove an effective tool for promoting the public understanding of certain research methods, in line with some of the mass-participation studies conducted in the very early days of academic psychology (see Perloff & Perloff, 1977).

For these reasons, the authors urge researchers to consider adopting this unusual methodology to carry out additional mass-participation studies in other areas of psychology and parapsychology.

Acknowledgements

The authors would like to thank The Committee on The Public Understanding of Science, The Perrott-Warrick Fund, Geomica and Barcrest for supporting the work described in this paper. We also wish to acknowledge the assistance provided by John Bain, Susan Blackmore, Tina Gutbrod, Mike Hutchinson, Adrian Owen, Caroline Watt, and the staff at each of the venues who were kind enough to host the Mind Machine.

References

Beven, J. M. (1947). ESP tests in light and darkness. Journal of Parapsychology, 11, 75–90.
Blackmore, S., & Troscianko, T. (1985). Belief in the paranormal: Probability judgements, illusory control and the 'chance baseline shift'. British Journal of Psychology, 76, 459–468.
Braud, W., & Shafer, D. (1989). Successful performance of a complex psi-mediated timing task by unselected subjects. Journal of the American Society for Psychical Research, 83, 205–216.
Broughton, R. S. (1991). Parapsychology: The controversial science. New York: Ballantine Books.
Freeman, J. A. (1962). An experiment in precognition. Journal of Parapsychology, 26, 123–130.
Gardner, M. (1989). How not to test a psychic. Buffalo, NY: Prometheus Books.
Gilmore, J. B. (1989). Randomness and the search for psi. Journal of Parapsychology, 53, 309–340.
Greenwood, J. A. (1938). An empirical investigation of some sampling problems. Journal of Parapsychology, 2, 222–230.
Hansel, C. E. M. (1980). ESP and parapsychology: A critical re-evaluation. Buffalo, NY: Prometheus Press.
Haraldsson, E., & Houtkooper, J. M. (1991). The Defense Mechanism Test as a predictor of ESP performance: Icelandic Study VII and meta-analysis of 13 experiments. Journal of Parapsychology, 55, 125–135.
Haraldsson, E., Skulason, F., & Thorisson, K. (1993). The precognition of computer numbers in a public test. Journal of the American Society for Psychical Research, 87, 45–58.
Honorton, C. (1987). Precognition and real-time ESP performance in a computer task with an exceptional subject. Journal of Parapsychology, 51, 291–320.
Honorton, C., & Carbone, M. (1971). A preliminary study of feedback-augmented EEG alpha activity and ESP card-guessing performance. Journal of the American Society for Psychical Research, 65, 66–74.
Honorton, C., & Ferrari, D. C. (1989). 'Future telling': A meta-analysis of forced-choice precognition experiments, 1935–1987. Journal of Parapsychology, 53, 281–308.
Humphrey, B. M. (1945). Further position effects in the Earlham College series. Journal of Parapsychology, 9, 26–31.
Hyman, R. (1981). Further comments on Schmidt's PK experiments. Skeptical Inquirer, 5, 34–40.
Kennedy, J. E. (1980). Learning to use ESP: Do the calls match the targets or do the targets match the calls? Journal of the American Society for Psychical Research, 74, 191–209.
Lawrence, T. (1993). Gathering in the sheep and goats: A meta-analysis of forced-choice sheep–goat ESP studies, 1947–1993. In The Parapsychological Association 36th Annual Convention: Proceedings of Presented Papers (pp. 75–86).
Leuba, C. (1938). An experiment to test the role of chance in ESP research. Journal of Parapsychology, 2, 217–221.
Milton, J. (1994). Mass-ESP: A meta-analysis of mass-media recruitment ESP studies. In The Parapsychological Association 37th Annual Convention: Proceedings of Presented Papers (pp. 284–334).
Milton, J., & Wiseman, R. (1997). Guidelines for extrasensory perception research. Hatfield, UK: University of Hertfordshire Press.
Musso, J. R. (1965). ESP experiments with primary school children. Journal of Parapsychology, 29, 115–121.
Palmer, J. (1978). Extrasensory perception: Research findings. In S. Krippner (Ed.), Advances in parapsychological research 2: Extrasensory perception (pp. 59–244). New York: Plenum Press.
Palmer, J. (1986a). Statistical methods in ESP research. In H. Edge, R. Morris, J. Palmer, & J. Rush, Foundations of parapsychology (pp. 138–160). London: Routledge & Kegan Paul.
Palmer, J. (1986b). The ESP controversy. In H. Edge, R. Morris, J. Palmer, & J. Rush, Foundations of parapsychology (pp. 161–183). London: Routledge & Kegan Paul.
Palmer, J. (1989). A reply to Gilmore. Journal of Parapsychology, 53, 341–344.
Perloff, R., & Perloff, L. S. (1977). The Fair—An opportunity for depicting psychology and for conducting behavioural research. American Psychologist, 32, 220–229.
Pratt, J. G. (1954). The variance for multiple-calling ESP data. Journal of Parapsychology, 18, 37–40.
Radin, D. I. (1982). Experimental attempts to influence pseudorandom number sequences. Journal of the American Society for Psychical Research, 76, 359–374.
Radin, D. I. (1997). The conscious universe: The scientific truth of psychic phenomena. San Francisco: HarperCollins.
Radin, D. I., & Bosworth, J. L. (1987). Response distributions in a computer-based perception task: Test of four models. Journal of Parapsychology, 51, 453–483.
Reed, C. (1959). A comparison of clairvoyance test results under three different testing conditions. Journal of Parapsychology, 23, 142 (Abstract).
Rhine, J. B., Pratt, J. G., Stuart, C. E., Smith, B. M., & Greenwood, J. A. (1940/1966). Extra-sensory perception after sixty years. Boston, MA: Bruce Humphries.
Schmeidler, G. R. (1971). Mood and attitude on a pretest as predictors of retest ESP performance. Journal of the American Society for Psychical Research, 65, 324–335.
Schmidt, H. (1969a). Precognition of a quantum process. Journal of Parapsychology, 33, 99–108.
Schmidt, H. (1969b). Clairvoyance tests with a machine. Journal of Parapsychology, 33, 300–306.
Smith, M. D. (1998). Effects of belief in ESP and distorted feedback on a computerized clairvoyance task. Journal of Parapsychology, 62, 244–251.
Smith, M. D., Wiseman, R., Machin, D., Harris, P., & Joiner, R. (1997). Luckiness, competition and performance on a psi task. Journal of Parapsychology, 61, 33–44.
Steinkamp, F., Milton, J., & Morris, R. L. (1998). A meta-analysis of forced-choice experiments comparing clairvoyance and precognition. Journal of Parapsychology, 62, 193–218.

Received 28 March 2001; revised version received 30 August 2001
