

British Journal of Psychology (2002), 93, 487–499 © 2002 The British Psychological Society www.bps.org.uk

The Mind Machine: A mass participation experiment into the possible existence of extra-sensory perception
Richard Wiseman* and Emma Greening
Perrott-Warrick Research Unit, Psychology Department, University of Hertfordshire, UK
This paper describes a mass participation experiment examining the possible existence of extra-sensory perception (ESP). The Mind Machine consisted of a specially designed steel cabinet containing a multi-media computer and large touch-screen monitor. The computer presented participants with a series of videoclips that led them through the experiment. During the experiment, participants were asked to complete an ESP task that involved them guessing the outcome of four random electronic coin tosses. All of their data were stored by the computer during an 11-month tour of some of Britain's largest shopping centres, museums, and science festivals. A total of 27,856 participants contributed 110,959 trials, and thus the final database had the statistical power to detect the possible existence of a very small ESP effect. However, the cumulated outcome of the trials was consistent with chance. The experiment also examined the possible relationship between participants' ESP scores and their gender, belief in psychic ability, and degree of predicted success. The results from all of these analyses were non-significant. Also, scoring on 'clairvoyance' trials (where the target was selected prior to the participant's choice) was not significantly different from 'precognitive' trials (where the target was chosen after the participants had made their choice). Competing interpretations of these findings are discussed, along with suggestions for future research.

Parapsychologists have carried out a large number of studies examining the possible existence of extra-sensory perception (ESP). One of the principal types of experimental design uses the ‘forced-choice’ procedure, in which participants are asked to guess the identity of hidden ‘targets’ (e.g. the colour of playing cards) that have been randomly selected from a set of alternatives known to participants prior to making their guess (e.g. they are told that the cards will be either red or black).
* Requests for reprints should be addressed to Richard Wiseman, Psychology Department, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK (e-mail: r.wiseman@herts.ac.uk).

Many of the early forced-choice ESP experiments were conducted by Rhine and his colleagues at Duke University in the early part of the last century (see Pratt, Rhine, Smith, Stuart, & Greenwood, 1940/1966). The majority of this work involved participants attempting to guess the order of shuffled packs of cards carrying the image of a star, circle, cross, square, or wavy lines. These experiments have investigated a wide range of hypotheses (see Palmer, 1978 for a review). Some studies have examined the possible existence of telepathy by having another person, referred to as a 'sender', concentrate on targets prior to the participant's guess. Other work has investigated clairvoyance by having participants attempt to guess the identity of targets that are not known to anyone else (e.g. the order of a deck of cards that have been shuffled and immediately sealed in an envelope). A third set of studies has examined participants' precognitive abilities by, for example, having them predict the order of a deck of cards, shuffling the cards, and then comparing the predicted order with the actual order. These studies have also examined how ESP scores are affected by different kinds of target material (e.g. symbols vs. words), experimental procedures (e.g. providing feedback to participants about their scores vs. not providing feedback) and individual differences (e.g. those that believe in ESP vs. disbelievers).

These studies were often very labour-intensive and involved data collection and analysis being carried out by hand. Recent research has tended to use more automated procedures. For example, Schmidt (1969a) developed an electronic device that randomly selected one of four lamps, prompted participants to indicate which lamp they thought the device had selected, and provided feedback by lighting the target lamp after they had registered their choice. Likewise, Honorton (1987) developed 'ESPerciser', a computer-based system that presented participants with four on-screen boxes and asked them to guess which one had been randomly selected by the computer. Both systems automatically recorded information about both the selected targets and participant choices, and the automated experiments conducted by both Schmidt (1969b) and Honorton (1987) produced highly significant effects.

Many researchers have argued that the results of these studies support the existence of ESP. For example, Pratt et al. (1940/1966) reviewed the findings from more than 3.6 million guesses made in over 140 forced-choice ESP studies conducted between 1882 and 1939. Many of the studies were independently significant and, as a group, provided strong evidence of above-chance scoring. More recently, Honorton and Ferrari (1989) presented a meta-analysis of nearly 2 million guesses from precognitive forced-choice experiments conducted between 1935 and 1987. Although the cumulated effect size was small (0.02), this was highly significant (p < 10⁻²⁵). Similarly, Steinkamp, Milton, and Morris (1998) carried out a meta-analysis of 22 forced-choice ESP studies that had compared scoring between clairvoyant and precognitive trials. The cumulated outcome of both trial types was highly significant (precognition trials, p = 9 × 10⁻⁷, effect size = 0.01; clairvoyant trials, p = .002, effect size = 0.009). The results of many experiments also contain evidence of significant internal effects, with, for example, participants who believe in ESP tending to outperform disbelievers (Lawrence, 1993), and trials employing feedback obtaining better results than those giving no feedback (Honorton & Ferrari, 1989).

However, many forced-choice ESP studies have been criticized on both methodological and statistical grounds. For example, Hansel (1980) and Gardner (1989) have claimed that some of the early card-guessing experiments employed procedures that would have allowed for participant cheating and 'sensory cueing' (i.e. participants inadvertently detecting and utilizing subtle signals about the identity of targets). Others have suggested that the highly significant effects from some of the automated studies are invalid, as the experiments used non-random methods to select and order targets (see Hyman, 1981; Kennedy, 1980).

Critics have also pointed to possible problems with the way in which data have been collected and analysed. For example, both Leuba (1938) and Greenwood (1938) discussed the potential dangers of 'optional stopping', wherein researchers are able to conclude an experiment when the study outcome conforms to a desired result. Other studies have suffered from the 'stacking effect', a statistical artefact that can occur when guesses from many participants are all matched against the same target material (see Pratt, 1954). The poor levels of methodological and statistical safeguards present in some past studies have been highlighted in two more recent meta-analyses. Honorton and Ferrari (1989) analysed the quality of each study in their database by assigning a point for each of 8 major methodological criteria; the studies received an average rating of just 3.6. Likewise, Steinkamp et al. (1998), in their meta-analysis, assigned clairvoyant studies a methodological quality rating of between 1 and 19, and reported that the studies obtained a mean rating of just 10. In reply, some parapsychologists have questioned the validity of these criticisms by, for example, arguing that many of the alleged flaws are unlikely to account for reported effects (see Palmer, 1986a for a summary), and by noting the non-significant correlations between the presence/absence of methodological safeguards and study outcome (see Honorton & Ferrari, 1989; Steinkamp et al., 1998).

The current authors aimed to help resolve this debate by devising a novel procedure for carrying out a large-scale forced-choice ESP experiment. The Mind Machine consisted of a specially designed steel cabinet containing a multi-media computer and large touch-screen monitor. The computer presented participants with a series of videoclips that led them through the experiment. During the experiment, participants were asked to complete a forced-choice ESP task that involved trying to guess whether a 'virtual' coin toss would result in a 'head' or a 'tail'. The outcome of each virtual coin toss was determined by the computer's random number generator (RNG). All of their data were stored by the computer during an 11-month tour of some of Britain's largest shopping centres, museums, and science festivals.

Many parapsychologists have reported studies in which participants appeared to be able to psychically determine the outcome of a computerized RNG, using both clairvoyant (see, for example, Haraldsson, 1994; Schmidt, 1969b) and precognitive (see, for example, Broad & Shafer, 1982; Schmidt, 1969a) designs (see also Honorton, 1987; Radin & Bosworth, 1991). Researchers have also carried out large-scale ESP experiments using a wide cross-section of the public making their responses in non-laboratory environments. For example, in 1937, the Zenith Radio Corporation asked their listeners to guess the order of a short series of randomly ordered targets. Over 46,000 people responded, making more than 1 million guesses. Likewise, in 1961, Rhine had over 29,000 members of the public attempt a precognitive task in which they were asked to predict the order of random number sequences. The results of such studies have the potential to produce spuriously high or low ESP study outcomes via the 'stacking effect'. A recent meta-analysis of the large-scale experiments carried out via the mass media indicated a low effect size (-.0046) whose overall cumulative outcome did not differ from chance expectation (Milton, 1994). However, Milton also noted that these findings are difficult to interpret. Six of the eight studies contained in the meta-analysis involved large groups of participants attempting to guess a single target sequence, and thus a statistical correction (known as the 'Greville correction') was applied to the data. This correction can, under certain circumstances, severely reduce the effect size of a study and thus mask evidence for psi.

Interestingly, the two studies that did not have to be subjected to this correction (Brier, 1967; Rhine, 1962) both obtained significant psi effects. Also, none of the studies gave participants instant feedback about their ESP scores, and many supplied no feedback at all. Again, many of the studies incorporated procedures that have not been associated with high effect sizes in meta-analyses of past laboratory experimentation.

The Mind Machine built upon this previous research, and its methodology was developed for several reasons. First, as noted above, it is widely acknowledged that carrying out large-scale forced-choice ESP experiments is usually problematic, as they tend to be time-consuming and tedious for both experimenters and participants alike (Broughton, 1991; Milton, 1997). The Mind Machine overcame these problems by creating a totally automated experiment and by having each participant only contribute a very small number of trials. Second, the study had the potential to collect a huge amount of data from thousands of participants, and thus possesses the statistical power to reliably detect the small effect sizes reported in many previous forced-choice ESP studies (see Radin, 1997). Third, as noted above, many previous forced-choice ESP studies have been criticized on various methodological grounds, including sensory shielding, opportunities for participant cheating, and poor randomization. The Mind Machine was designed to minimize these potential problems. For example, the computer running the experiment was secured inside a locked cabinet that could not be accessed by participants, and possible randomization problems were minimized by having the target selection carried out by a pseudo-random number generator that had been fully tested prior to use. Fourth, critics have correctly noted that some previous forced-choice ESP studies have suffered from potential statistical problems, including optional stopping and stacking effects. The Mind Machine was designed to overcome these artefacts by specifying the size of the final database in advance of the experiment, and by generating a new target sequence for each participant. Fifth, the Mind Machine methodology could incorporate many of the factors that have positively correlated with study outcome in meta-analyses of previous forced-choice studies. For example, Honorton and Ferrari's (1989) meta-analysis of precognition forced-choice studies noted that several factors were significantly associated with increased ESP scoring. Studies providing immediate, trial-by-trial feedback to participants obtained significantly higher effect sizes than those giving delayed or no feedback. Also, experiments testing participants individually had significantly higher effect sizes than those employing group testing. Many of these patterns were also found in the meta-analysis carried out by Steinkamp et al. (1998). To maximize the potential of obtaining evidence for ESP, the Mind Machine tested participants individually and provided them with immediate, trial-by-trial feedback. Incorporating such 'ESP conducive' procedures was important, given that a previous meta-analysis of forced-choice ESP studies conducted via the mass media under non-conducive conditions had resulted in a cumulative outcome consistent with chance (Milton, 1994).

On the basis of the results from previous forced-choice ESP studies, it was predicted that participants' overall ESP scores would differ significantly from mean chance expectation. In addition, the Mind Machine methodology enabled an attempt to replicate one of the most reliable internal effects in the forced-choice ESP literature. A meta-analysis carried out by Lawrence (1993) revealed that participants who believed in psychic ability tended to obtain significantly higher forced-choice ESP scores than disbelievers. To examine this hypothesis, participants in the present experiment were asked to indicate whether they believed in the existence of psychic ability prior to completing the ESP task. On the basis of previous work, it was predicted that the ESP scores of believers would be significantly higher than disbelievers' scores.
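The statistical power referred to above can be illustrated with a short calculation. The following Python sketch is not part of the original study: it simply estimates the smallest deviation from a 50% hit rate that a database of roughly 111,000 binary trials could reliably detect, assuming a conventional two-tailed alpha of .05 and power of .80 (both values are our assumptions, chosen for illustration).

```python
from math import sqrt
from statistics import NormalDist

def minimum_detectable_deviation(n_trials: int,
                                 alpha: float = 0.05,
                                 power: float = 0.80) -> float:
    """Smallest shift from a 50% hit rate detectable at the given two-tailed
    alpha and power, using the normal approximation to the binomial.
    This is a back-of-the-envelope figure, not the authors' own analysis."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96
    z_power = NormalDist().inv_cdf(power)           # about 0.84
    sd_per_trial = sqrt(0.5 * 0.5)                  # binomial SD at p = .5
    return (z_alpha + z_power) * sd_per_trial / sqrt(n_trials)

# Roughly the size of the final Mind Machine database (110,959 trials).
print(f"{minimum_detectable_deviation(110_959):.4%}")  # about 0.42%
```

With around 111,000 trials this works out at a shift of roughly 0.4 of a percentage point from chance, which is of the same order as the small effect sizes reported in the meta-analyses discussed above.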

The study also investigated four other internal effects that have received mixed support in past research. As noted above, several studies have examined potential differences between ESP scoring on clairvoyant trials (i.e. where the target is selected prior to the participant's choice) and precognition trials (i.e. where the target is chosen after the participants have made their choice). Although a few studies have reported precognition trials outscoring clairvoyance trials (see Freeman, 1962; Honorton, 1987), the meta-analytic review conducted by Steinkamp et al. (1998) showed no significant difference between the two types of trials. The Mind Machine followed up on this work by comparing the scoring of clairvoyant trials (where the outcome of half of the trials was determined prior to participants indicating their guess) with precognition trials (where the outcome of the other half was determined after they had indicated their decision).

A small number of past forced-choice studies have investigated whether participants' predictions about their performance on a forthcoming ESP task are related to their actual performance. Some of these experiments have shown a significant relationship between predicted and actual performance (see Musso, 1965; Schmeidler, 1971; Smith, Wiseman, Machin, Harris, & Joiner, 1997), but others have failed to obtain this effect (see Palmer, 1978 for a review). Participants in the Mind Machine study were asked to predict how many coin tosses they believed they would guess correctly, and the relationship between their predicted and actual success was examined.

Previous forced-choice experiments have also examined the potential relationship between ESP scoring and gender. Again, these studies have obtained mixed results, with some reporting females outperforming males, and others showing no significant differences (see Palmer, 1978 for a review). The Mind Machine experiment examined the potential relationship between gender and ESP scoring.

Finally, the small number of forced-choice studies that have examined participants' ESP performance over several sessions have tended to report a decline effect (see Honorton & Carbone, 1971; Humphrey, 1945). The experiment also explored whether participants taking part in the study for the first time would obtain significantly different ESP scores to those repeating the experiment. At the start of the Mind Machine experiment, participants were asked to indicate whether this was the first time they had taken part in the experiment, and thus it was possible to compare the ESP scoring of 'novices' with that of participants who had taken part in the study before.

Apparatus

Hardware
The experiment was carried out on an Omega Science PC with a P200MMX Intel processor, Elite TX 512K AT motherboard and 3.5-Gb hard drive. This computer was connected to a 17-inch colour touch-sensitive monitor. Both computer and monitor were housed within a Barcrest steel cabinet specially constructed for high-volume public use. This cabinet measured approximately 1.5 m × 0.5 m × 0.5 m and allowed participants to use the touch-screen monitor, but prevented them from having access to the computer and its peripherals (e.g. mouse and keyboard) by securing them in the base of the cabinet.

The base of the cabinet could be accessed via panels at the front and rear of the cabinet. Both panels were always locked whenever the cabinet was left in a public venue, and the keys were retained by the experimenters throughout the project. Each venue provided a continuous power supply, and the lead connecting the power supply to the computer was secured in such a way as to prevent it being pulled free from the cabinet. If the supply was turned off, or disconnected at the mains, the computer failed to operate. When the power was reinstated, the computer automatically booted into the Mind Machine program.

Software
The Mind Machine program presented participants with a series of videoclips containing an experimenter (RW), who led them through the experiment. Some parapsychologists have emphasized the importance of creating forced-choice ESP tasks that participants find both interesting and absorbing. The Mind Machine was able to achieve this by using multi-media videoclips to create an engaging and enjoyable experiment. These videoclips were filmed in RW's laboratory and were accompanied by background music written especially for the program. Taking part in the experiment took approximately two and a half minutes, and the engaging nature of the program is reflected in the fact that only 0.85% of participants did not complete the experiment.

The program initially displayed a videoclip designed to attract participants to the cabinet. This videoclip contained the Mind Machine logo and the phrase 'Touch the screen to begin'. When participants touched the screen, the computer presented a videoclip of RW welcoming them to the experiment and asking them to first answer four simple questions. The screen was then partitioned into three sections. The top left of the screen displayed short videoclips of RW asking each of the questions. The top right of the screen contained the same question in text form. The bottom half of the screen contained large virtual buttons displaying possible answers to the questions. Participants were first asked whether they were male or female (possible answers: male, female), whether they believed that psychic ability existed (yes, uncertain, no), whether it was the first time that they had taken part in the experiment (yes, no), and how many coin tosses they believed they would guess correctly (0, 1, 2, 3, 4).

Next, the computer randomly determined whether the target for clairvoyant trials ('TargetC') would be a 'head' or a 'tail'. This was achieved using an inversive congruential pseudo-random number generator (ICG) with modulus 2147483647, obtained from the P-Lab website (http://random.mat.sbg.ac.at/). This PRNG has been fully evaluated for randomness, and the website contains the results of this extensive testing. The PRNG was seeded from the number of milliseconds that had elapsed since the computer system had started, using the Windows 'GetTickCount' API function. The PRNG algorithm was programmed to provide an integer between 0 and 99 inclusive, and an even value was coded as a 'tail' and an odd value as a 'head'. The program next randomly determined the order of the two clairvoyant (C) and two precognition (P) trials. For this part of the experiment, the PRNG was programmed to return a number between 0 and 1. If this value was less than 0.5, the resulting trial order became P-C-P-C. If the value was greater than, or equal to, 0.5, the order became C-P-P-C.
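To make the target-generation and trial-ordering steps concrete, here is a minimal Python sketch of the procedure just described. It is a reconstruction rather than the study's code: the ICG multiplier and increment (both set to 1 below) and the millisecond-based seeding call are assumptions on our part, and only the modulus, the even/odd coding and the 0.5 cut-off are taken from the text.

```python
import time

class InversiveCongruentialGenerator:
    """Sketch of an inversive congruential PRNG, ICG(p, a, b, seed):
    x_{n+1} = (a * inverse(x_n) + b) mod p, with inverse(0) defined as 0.
    The multiplier/increment values are assumed, not taken from the paper."""

    def __init__(self, p=2_147_483_647, a=1, b=1, seed=None):
        self.p, self.a, self.b = p, a, b
        # The study seeded from milliseconds since system start (GetTickCount);
        # a millisecond clock reading stands in for that here.
        self.state = (int(time.monotonic() * 1000) if seed is None else seed) % p

    def _next_state(self) -> int:
        inv = pow(self.state, self.p - 2, self.p) if self.state else 0  # modular inverse
        self.state = (self.a * inv + self.b) % self.p
        return self.state

    def integer_0_to_99(self) -> int:
        return self._next_state() % 100

    def fraction(self) -> float:
        return self._next_state() / self.p

def coin_target(rng: InversiveCongruentialGenerator) -> str:
    """Even values code 'tail', odd values code 'head', as described in the text."""
    return "tail" if rng.integer_0_to_99() % 2 == 0 else "head"

def trial_order(rng: InversiveCongruentialGenerator) -> list[str]:
    """Order of the two clairvoyant (C) and two precognition (P) trials."""
    return ["P", "C", "P", "C"] if rng.fraction() < 0.5 else ["C", "P", "P", "C"]

rng = InversiveCongruentialGenerator()
print(coin_target(rng), trial_order(rng))
```

The inversive step replaces the usual linear congruential update with a modular inverse, which is what gives this family of generators the strong equidistribution properties reported on the P-Lab site.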

The next videoclip showed RW in the centre of the screen and two large coins in the top left and right of the screen. The coin on the left of the screen displayed a head, whilst the coin on the right displayed a tail. RW asked participants to touch one of the coins to indicate whether they believed the coin would land heads or tails. The computer recorded the participant's guess and then randomly determined whether the target for precognitive trials ('TargetP') was a 'head' or a 'tail'. Again, the PRNG was used to provide an integer between 0 and 99 inclusive, and an even value was coded as a 'tail' and an odd value as a 'head'. For clairvoyant trials, the participant's choice was compared with TargetC. For precognitive trials, the participant's choice was compared with TargetP. If the participant's choice matched the target, the trial was classified as a 'hit'; otherwise it was classified as a 'miss'. A videoclip then informed participants whether they had obtained a hit or a miss, and revealed the outcome of the coin toss by presenting them with a large 'head' or 'tail'.

After all four trials had been completed, a videoclip informed participants of their total score. A videoclip then asked participants the percentage of trials they thought that people should get correct by chance alone (0%, 25%, 50%, 75%, 100%). This question was designed to test the notion that participants who disbelieve in the paranormal will outperform believers on tests of probability estimation (see Blackmore & Troscianko, 1985). The results of this question, along with other psychological data, will be reported in a separate paper. Finally, a videoclip thanked participants for taking part and presented them with an opportunity to see a final videoclip that would explain more about the experiment. At the end of each session, the program returned to the initial videoclip containing the Mind Machine logo. The computer also returned to this initial videoclip from any point in the program if the computer screen remained untouched for 25 s.

Participants' data were stored in two databases that were regularly backed up throughout the duration of the experiment. Before starting the tour, the programme was evaluated on the University campus. Participants reported that they found the programme both compelling and interesting, and that they fully understood both the questions and the ESP task. Clearly, the authors have no way of checking that participants taking part in the actual experiment answered the questions honestly, and it is possible that a minority of individuals may have deliberately entered incorrect data. However, such responses would have simply acted as a source of noise and been unlikely to have obscured general patterns within the data.

Minimizing optional stopping
The Mind Machine experiment could, potentially, suffer from two types of optional stopping. First, participants' performance on initial ESP trials could influence whether they completed the experiment. For example, participants might obtain two misses on the first two trials, become disappointed, and not complete the remaining two trials. Alternatively, participants might perform well on the first two trials and not complete the remaining trials to avoid the possibility of them obtaining misses. To prevent either scenario from creating a potential artefact, data from every ESP trial were included in the database (i.e. participants did not have to complete all four trials to be included in the database). A second form of optional stopping can occur when experimenters do not set sample sizes in advance of a study, and thus have the potential to terminate the study when the results reach a desired outcome. This potential problem can be avoided by experimenters setting the final sample size in advance of the study, and not examining their data until the experiment has terminated (Milton & Wiseman, 1997). Before starting the experiment, the authors decided that each of the participants' screen presses would constitute a single datapoint (i.e. participants completing the experiment would have provided 9 datapoints: the answers to the initial four questions, the four ESP trials, and the probability question) and that the final database would consist of 250,000 datapoints. In addition, the data from the forced-choice ESP task were not examined until the end of the study.
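The trial-by-trial logic just described (record the guess, fix the target either before or after the guess, then classify a hit or a miss) can be summarized in a few lines. The sketch below is illustrative only: the draw_target stand-in uses Python's random module rather than the study's ICG-based generator, and the function names are ours.

```python
import random

def draw_target() -> str:
    # Stand-in for the study's generator (even = 'tail', odd = 'head').
    return "tail" if random.randrange(100) % 2 == 0 else "head"

def run_trial(trial_type: str, get_guess) -> bool:
    """Order of events for one trial (our reconstruction, not the original code).
    Clairvoyant (C): the target is fixed before the guess is recorded.
    Precognitive (P): the target is generated only after the guess is registered."""
    if trial_type == "C":
        target = draw_target()          # TargetC exists before the guess
        guess = get_guess()
    else:
        guess = get_guess()
        target = draw_target()          # TargetP is created after the guess
    return guess == target              # True = 'hit', False = 'miss'

# Example: a participant who always calls 'head', over the order C-P-P-C.
score = sum(run_trial(t, lambda: "head") for t in ["C", "P", "P", "C"])
print(f"{score} hits out of 4")
```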

The Tour
The Mind Machine was taken on an 11-month tour of some of Britain's leading museums, science festivals, and shopping centres (including, for example, the Royal Museum of Scotland (Edinburgh), the Olympia Exhibition Centre (London), the Lakeside Shopping Centre (Essex) and the MetroCentre (Gateshead)). Before embarking on the tour, and at various intervals during the tour, the Mind Machine was placed on campus at the University of Hertfordshire.

Results
The final database contained 27,856 participants providing 250,002 datapoints (139,043 datapoints from the five questions and 110,959 ESP trials).

Testing randomness
There exists a great deal of debate around the concept of randomness in parapsychology (see Gilmore, 1989; Palmer, 1989). However, for the purposes of ESP testing, most researchers (see Palmer, 1986b) consider it important to demonstrate both equiprobability (i.e. each target should be selected an approximately equal number of times) and intertrial independence (i.e. each target has an equal opportunity of occurring after any other target). Each issue will be discussed in turn.

Equiprobability
Chi-squared analyses revealed that the frequency with which the computer selected heads and tails did not differ significantly from chance on any of the four trials, either separately or combined (see Table 1).

Table 1. Number of trials on which the computer selected heads and tails, percentage of heads, chi-square values (with continuity correction) and p-values (two-tailed) for each of the four trials separately and combined.
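For readers who want to reproduce the form of analysis summarized in Table 1, the sketch below computes a chi-square with Yates' continuity correction for a single head/tail count, together with its p-value on one degree of freedom. The counts used in the example are placeholders chosen for illustration, not values from the study.

```python
from math import erfc, sqrt

def yates_chi_square(heads: int, tails: int) -> tuple[float, float]:
    """Chi-square test (with Yates' continuity correction) that heads and
    tails were selected equally often; returns (chi_square, p_value).
    The p-value comes from the chi-square distribution with 1 df."""
    n = heads + tails
    expected = n / 2
    chi_sq = sum((abs(observed - expected) - 0.5) ** 2 / expected
                 for observed in (heads, tails))
    p_value = erfc(sqrt(chi_sq / 2))   # survival function of chi-square(1)
    return chi_sq, p_value

# Placeholder counts for illustration only (not the study's data).
print(yates_chi_square(13_900, 13_850))
```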

Intertrial dependency
Table 2 contains the results of chi-squares comparing the frequency with which heads and tails were chosen between each of the four trials. The analysis comparing Trial 2 and Trial 3 was significant. Further investigation revealed that this significance was due to the target sequence showing a slight (approximately 1%) tendency to avoid repeating Trial 2's target on Trial 3. As the vast majority of participants only completed the experiment once, it is highly unlikely that they would be able to detect and utilize this small bias. However, it is theoretically possible that the effect could coincide with a more general response bias and thus lead to artefactual findings. To assess this possibility, it was decided to analyse the data both including and excluding the only trial that could be affected by the bias, namely, Trial 3.

Table 2. Chi-square values (with continuity correction) and p-values (two-tailed, in parentheses) comparing the frequency with which targets were chosen between each of the four trials.

ESP scoring
Table 3 contains the number of participants, number of trials, number of hits, percentage of hits, z-scores and p-values (two-tailed) for the overall database and each sub-group of data, both including and excluding Trial 3. None of the analyses was significant. Table 4 contains the z-scores and p-values (two-tailed) for each of the internal comparisons, both including and excluding Trial 3. Again, none of the analyses was significant.

Table 3. Number of participants, number of trials, number of hits, percentage of hits, z-scores and p-values (two-tailed) for the overall database and each sub-group of data (all trials; clairvoyance and precognition trials; believers, uncertain and disbelievers; predicted success below chance, at chance and above chance; first-time and repeat participants), both including and excluding Trial 3.

Table 4. z-scores and p-values (two-tailed) for internal effects (trial type: clairvoyance vs. precognition; belief in psychic ability: believers vs. disbelievers; predicted success: below vs. above chance; first time: yes vs. no), both including and excluding Trial 3.
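The z-scores and two-tailed p-values summarized in Table 3 follow from the hit counts via the normal approximation to the binomial. The sketch below shows that calculation; the hit and trial counts used are placeholders for illustration, not the study's figures.

```python
from math import erfc, sqrt

def hit_rate_z_test(hits: int, trials: int, p_chance: float = 0.5) -> tuple[float, float]:
    """z-score and two-tailed p-value for an observed number of hits against
    mean chance expectation, using the normal approximation to the binomial."""
    expected = trials * p_chance
    sd = sqrt(trials * p_chance * (1 - p_chance))
    z = (hits - expected) / sd
    p_two_tailed = erfc(abs(z) / sqrt(2))
    return z, p_two_tailed

# Placeholder values for illustration only.
z, p = hit_rate_z_test(hits=55_250, trials=110_959)
print(f"z = {z:.2f}, p = {p:.2f}")
```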

Discussion
The Mind Machine aimed to obtain evidence for ESP under conditions that minimized the possibility of methodological and statistical flaws, and incorporated conditions that previous literature had found to be ESP-conducive. The experiment involved 27,856 participants contributing 110,959 trials, and thus had the statistical power to detect the possible existence of a very small effect. However, the experiment's overall outcome did not differ from chance, and all of the internal analyses were non-significant.

These findings could be interpreted in one of two ways. First, the experiment may have obtained chance results because the conditions under which it was conducted were not conducive to ESP. Despite the experiment being designed to incorporate many of the factors that have positively correlated with study outcome in a meta-analysis of previous forced-choice ESP experiments, there were several differences between the Mind Machine experiment and most laboratory studies. For example, unlike the vast majority of laboratory studies, participants taking part in the Mind Machine experiment did not have any contact with a 'live' experimenter. Also, the Mind Machine experiment took place in relatively noisy public spaces compared with the quiet surroundings of a laboratory. The Mind Machine may also have sampled from a different participant population to laboratory studies, with participants coming across the experiment whilst visiting a public venue, versus being sufficiently motivated to volunteer to take part in a laboratory experiment (see Milton, 1994). Finally, one referee noted that RW is well known to the public for his sceptical attitude towards the paranormal, and suggested that this may have lowered participants' expectations of demonstrating an ESP effect and affected their subsequent ESP scores. However, this interpretation of the null effect seems unlikely, given that, as a group, participants indicated that they expected to perform above chance (the approximate percentages of participants indicating that they expected to guess 0, 1, 2, 3 and 4 trials correctly were as follows: 0 correct, 4%; 1 correct, 8%; 2 correct, 49%; 3 correct, 25%; 4 correct, 14%).

Alternatively, the study may have failed to find evidence of forced-choice ESP because such an effect does not exist. Given that the study maximized safeguards against the types of methodological and statistical problems that have been levelled at previous forced-choice ESP research, the lack of any significant effects would support the notion that the positive findings reported in these previous experiments were spurious.

Assessing these competing explanations is problematic, in part, because it is difficult to determine whether the differences in conditions between the Mind Machine experiment and laboratory-based research could account for the results obtained in this study. No research has compared the ESP scores of the type of participants taking part in public versus laboratory experimentation, and it is impossible to assess the possible effect that RW may have had on participants' expectations and ESP scoring. To our knowledge, only one study has examined the relationship between the degree of participant-experimenter contact and forced-choice ESP performance: Steinkamp et al. (1998) rated 22 forced-choice ESP studies on three different levels of participant-experimenter contact (direct contact, indirect contact, and no contact), and reported that these ratings were not correlated with study outcome. The notion that ESP performance might be influenced by environmental factors has been more fully examined, but has obtained mixed results (see Palmer, 1978 for a review). For example, Beven (1947) reported that participants carrying out an ESP card guessing task in good light outperformed those in a darkened room, whereas Reed (1959) found that different types of background music had no effect on ESP scoring. Future work could start to tease apart these competing ideas by designing another Mind Machine experiment to examine the relationship between these variables and ESP performance (e.g. by placing the cabinet in locations that differed in levels of environmental distractions, or by having the videoclips presented by known sceptics vs. proponents of ESP, etc.).

The Mind Machine methodology can gather a huge amount of data in a relatively short period of time, especially where small effect sizes are expected, and it provides access to a more representative cross-section of the public than the vast majority of laboratory research. The way in which these data are collected also allows experimenters to test hypotheses that would be problematic to examine in laboratory-based research (e.g. the almost continuous stream of data would allow investigators to examine time of day effects, or performance over a very large number of conditions). Finally, this type of mass participation study is likely to prove an effective tool for promoting the public understanding of certain research methods, in line with some of the mass-participation studies conducted in the very early days of academic psychology (see Perloff & Perloff, 1977). For these reasons, the authors urge researchers to consider adopting this unusual methodology to carry out additional mass-participation studies in other areas of psychology and parapsychology.

Acknowledgements
The authors would like to thank The Committee on the Public Understanding of Science, The Perrott-Warrick Fund, Geomica and Barcrest for supporting the work described in this paper. We also wish to acknowledge the assistance provided by John Bain, Susan Blackmore, Tina Gutbrod, Mike Hutchinson, Adrian Owen, Caroline Watt, and the staff at each of the venues who were kind enough to host the Mind Machine.

References

Beven, J. M. (1947). ESP tests in light and darkness. Journal of Parapsychology.
Blackmore, S., & Troscianko, T. (1985). Belief in the paranormal: Probability misjudgements, illusory control and the 'chance baseline shift'. British Journal of Psychology.
Brier, R. (1967). A mass school test of precognition. Journal of Parapsychology.
Broad, W. G., & Shafer, D. (1982). Successful performance of a complex psi-mediated timing task by unselected subjects. Journal of Parapsychology.
Broughton, R. (1991). Parapsychology: The controversial science. New York: Ballantine Books.
Freeman, J. A. (1962). An experiment in precognition. Journal of Parapsychology.
Gardner, M. (1989). How not to test a psychic. Buffalo, NY: Prometheus Books.
Gilmore, J. B. (1989). Randomness and the search for psi. Journal of Parapsychology.
Greenwood, J. A. (1938). An empirical investigation of some sampling problems. Journal of Parapsychology.
Hansel, C. E. M. (1980). ESP and parapsychology: A critical re-evaluation. Buffalo, NY: Prometheus Books.
Haraldsson, E. (1994). The Defense Mechanism Test as a predictor of ESP performance: Icelandic Study VII and meta-analysis of 13 experiments. Journal of Parapsychology.
Honorton, C. (1987). Precognition and real-time ESP performance in a computer task with an exceptional subject. Journal of Parapsychology.
Honorton, C., & Carbone, M. (1971). A preliminary study of feedback-augmented EEG alpha activity and ESP card-guessing performance. Journal of the American Society for Psychical Research.
Honorton, C., & Ferrari, D. C. (1989). 'Future telling': A meta-analysis of forced-choice precognition experiments, 1935–1987. Journal of Parapsychology.
Humphrey, B. M. (1945). Further position effects in the Earlham College series. Journal of Parapsychology.
Hyman, R. (1981). Further comments on Schmidt's PK experiments. Skeptical Inquirer.
Kennedy, J. E. (1980). Learning to use ESP: Do the calls match the targets or do the targets match the calls? Journal of the American Society for Psychical Research.
Lawrence, T. R. (1993). Gathering in the sheep and goats: A meta-analysis of forced-choice sheep-goat ESP studies, 1947–1993. In Parapsychological Association 36th Annual Convention: Proceedings of Presented Papers.
Leuba, C. (1938). An experiment to test the role of chance in ESP research. Journal of Parapsychology.
Milton, J. (1994). Mass-ESP: A meta-analysis of mass-media recruitment ESP studies. In The Parapsychological Association 37th Annual Convention: Proceedings of Presented Papers.
Milton, J., & Wiseman, R. (1997). Guidelines for extrasensory perception research. Hatfield, UK: University of Hertfordshire Press.
Musso, J. R. (1965). ESP experiments with primary school children. Journal of Parapsychology.
Palmer, J. (1978). Extrasensory perception: Research findings. In S. Krippner (Ed.), Advances in parapsychological research 2: Extrasensory perception. New York: Plenum Press.
Palmer, J. (1986a). The ESP controversy. In H. Edge, R. Morris, J. Palmer, & J. Rush (Eds.), Foundations of parapsychology. London: Routledge & Kegan Paul.
Palmer, J. (1986b). Statistical methods in ESP research. In H. Edge, R. Morris, J. Palmer, & J. Rush (Eds.), Foundations of parapsychology. London: Routledge & Kegan Paul.

Palmer, J. (1989). A reply to Gilmore. Journal of Parapsychology.
Perloff, R., & Perloff, L. (1977). The Fair: An opportunity for depicting psychology and for conducting behavioural research. American Psychologist.
Pratt, J. G. (1954). The variance for multiple-calling ESP data. Journal of Parapsychology.
Pratt, J. G., Rhine, J. B., Smith, B. M., Stuart, C. E., & Greenwood, J. A. (1940/1966). Extra-sensory perception after sixty years. Boston, MA: Bruce Humphries.
Radin, D. (1997). The conscious universe: The scientific truth of psychic phenomena. San Francisco: Harper Collins.
Radin, D., & Bosworth, J. L. (1991). Response distributions in a computer-based perception task: Test of four models. Journal of the American Society for Psychical Research.
Reed (1959). A comparison of clairvoyance test results under three different testing conditions. Journal of Parapsychology.
Schmeidler, G. R. (1971). Mood and attitude on a pretest as predictors of retest ESP performance. Journal of the American Society for Psychical Research.
Schmidt, H. (1969a). Clairvoyance tests with a machine. Journal of Parapsychology.
Schmidt, H. (1969b). Precognition of a quantum process. Journal of Parapsychology.
Smith, M. D., Wiseman, R., Machin, D., Harris, P., & Joiner, R. (1997). Luckiness, competition, and performance on a psi task. Journal of Parapsychology.
Steinkamp, F., Milton, J., & Morris, R. L. (1998). A meta-analysis of forced-choice experiments comparing clairvoyance and precognition. Journal of Parapsychology.

Received 28 March 2001; revised version received 30 August 2001