
Why CERN's claim of faster-than-light neutrinos is wrong

John P. Costella, Ph.D.

Melbourne, Australia (23 September 2011)
Abstract I explain why today's claim by the OPERA group at CERN to have measured faster-than-light neutrinos is based on an incorrect statistical calculation.

Introduction Late this afternoon (Melbourne, Australia time) my work colleagues emailed me (their resident particle physicist) links to the breaking news of CERN physicists claiming to have measured neutrinos travelling faster than the speed of light. A few text messages to physicist colleagues later, I had the preprint in hand. Five minutes later, it was clear that the OPERA group at CERN had made an embarrassing gaffe.

The paper claims that the neutrinos arrived 60.7 nanoseconds faster than light would have taken to travel the 730 km, with a statistical uncertainty of 6.9 nanoseconds and a systematic uncertainty of 7.4 nanoseconds. Combining these uncertainties in quadrature, the result is around six times as large as the uncertainty, which would (if correct) qualify it as a discovery.

The authors spent much effort justifying their claims for the systematic uncertainty (errors in the time and distance measurements that are crucial to the result). Bloggers and tweeters who have commented so far have speculated on this aspect of the paper, but I will leave it to one side. Instead, I will focus on the claim that the statistical uncertainty is 6.9 nanoseconds.

The neutrinos are produced in bursts, each burst lasting around 10.5 microseconds, or 10,500 nanoseconds. Most of the neutrinos are not detected. The 16,111 neutrinos that were detected allowed the OPERA group to build up a distribution of travel times. Figure 12 of their paper is shown on the next page. The vertical error bars represent the statistical error in each data point; the horizontal bars represent the bins used to count the neutrinos.

The crux of the issue is to determine how far each red line has to be shifted, left or right, to agree with the data points. Any half-competent particle physicist can look at these curves and give you a rough estimate of how uncertain that shift is, given the statistical uncertainties in the data points.
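As a quick sanity check of the headline significance, the two quoted uncertainties can be combined in quadrature; this is a minimal sketch using only the figures quoted in the OPERA paper:

```python
import math

# Figures quoted in the OPERA preprint (all in nanoseconds)
stat = 6.9    # statistical uncertainty
syst = 7.4    # systematic uncertainty
shift = 60.7  # claimed early arrival of the neutrinos

# Standard quadrature combination of independent uncertainties
combined = math.sqrt(stat**2 + syst**2)

# How many combined standard deviations the claimed shift represents
significance = shift / combined

print(f"combined uncertainty: {combined:.1f} ns")   # about 10.1 ns
print(f"significance: {significance:.1f} sigma")    # about 6.0 sigma
```

This reproduces the "around six times as large as the uncertainty" statement above.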
My eyeball estimate, for each graph, would be about 50 nanoseconds at best; combining all four pieces of data together, the error should be no better than 50/√4 = 25 nanoseconds. Other particle physicists may give varying estimates, but I doubt that any would disagree that it is about this size.

Now go back to the OPERA paper: they claim that the statistical uncertainty is less than seven nanoseconds. That would be one-seventh of the smallest horizontal tick marks: it is tiny. Already, any physicist worth even a fraction of their weight in neutrinos will be shaking their head, knowing intuitively that the OPERA result is simply wrong. How OPERA made the mistake is a story that will no doubt emerge in the coming days or weeks; their paper contains almost no details.

So let me now show you how a physicist would perform the next step: a simple statistical calculation to check what the size of the statistical uncertainty should be. (Warning: the following delves into more complicated mathematics and statistics.)
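The combination rule used for the eyeball estimate above is the familiar one for averaging N independent measurements of equal precision: the uncertainty shrinks by a factor of √N. A one-line sketch:

```python
import math

single_graph = 50.0  # eyeball uncertainty per graph, ns (a rough estimate)
n_graphs = 4         # number of independent data sets in Figure 12

combined = single_graph / math.sqrt(n_graphs)
print(f"combined eyeball uncertainty: {combined:.0f} ns")  # 25 ns
```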

A simple calculation The distribution of event times is shown in Figure 9 of the OPERA paper. To a first approximation, it is roughly uniform (rectangular) in time, with a width of about 10,500 nanoseconds; the edges are not absolutely vertical, and the top looks more like Bart Simpson's head than a straight line, but this will be a good enough approximation for the following.

Each of the individual neutrinos that happens to be detected will have a delay (from the start of each burst) that follows this distribution. The shift in the travel time can be estimated by simply computing the mean travel time. The statistical distribution of the sum of the 16,111 arrival times could be obtained by convolving the distribution with itself 16,111 times, but we know, both intuitively and by the Central Limit Theorem, that it will approach a Normal distribution, with a mean and variance that are each 16,111 times the mean and variance of the above distribution; dividing by 16,111 to compute the average, the variance of the result will be 1/16,111 of the variance of the distribution.

Now, the variance of a uniform distribution of width 10,500 nanoseconds is (10,500)²/12 ≈ 9,187,500, so the variance of the calculated mean will be about 9,187,500/16,111 ≈ 570, and the standard deviation of the mean will be about √570 ≈ 24 nanoseconds.

This is only an approximate calculation of the standard error of the estimate of the mean, but any more sophisticated calculation should give a result of at least this size. Since the OPERA group do not give any substantial details of their calculations, I don't know how they managed to get a statistical uncertainty that is almost three and a half times smaller than it should be. The level of statistical confidence they quote would require almost 12 times as many measurements: not the 16,111 events they did measure, but more like 193,000 events.
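The analytic calculation above can be verified by direct simulation: draw 16,111 arrival times from a uniform distribution of width 10,500 ns, compute the mean, repeat the "experiment" many times, and look at the scatter of the means. A minimal sketch (the number of repetitions is an arbitrary choice):

```python
import math
import random

width = 10_500.0  # burst width, ns (approximating Figure 9 as rectangular)
n = 16_111        # number of detected neutrinos

# Analytic standard error of the mean: sqrt(width^2 / 12 / n)
analytic = math.sqrt(width**2 / 12 / n)

# Monte Carlo check: simulate many experiments, each averaging n draws
random.seed(1)
means = [
    sum(random.uniform(0.0, width) for _ in range(n)) / n
    for _ in range(200)
]
mu = sum(means) / len(means)
spread = math.sqrt(sum((m - mu) ** 2 for m in means) / (len(means) - 1))

print(f"analytic standard error:    {analytic:.1f} ns")  # about 24 ns
print(f"Monte Carlo scatter of means: {spread:.1f} ns")  # also about 24 ns
```

Both numbers come out near 24 ns, confirming that the statistical uncertainty on the mean arrival time cannot plausibly be 6.9 ns.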

Conclusions From the above, the OPERA result becomes 61 ns with a statistical uncertainty of 24 ns and a systematic uncertainty of 7 ns. Even if we take the systematic uncertainty to be accurate, the result is now only about two and a half combined standard deviations from zero, which disqualifies it as a discovery, rendering it simply an interesting result. Given the much tighter bounds that we have on the neutrino speed from other sources, such as Supernova 1987A, one must conclude that OPERA has simply made a mistake, albeit a highly embarrassing one that has gathered international media coverage today.
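The corrected significance can be recomputed with the same quadrature rule used earlier, now with the 24 ns statistical uncertainty in place of OPERA's 6.9 ns:

```python
import math

# Corrected figures (all in nanoseconds)
stat = 24.0   # statistical uncertainty from the calculation above
syst = 7.0    # OPERA's quoted systematic uncertainty, rounded
shift = 61.0  # claimed early arrival, rounded

combined = math.sqrt(stat**2 + syst**2)  # 25 ns
significance = shift / combined

print(f"combined uncertainty: {combined:.0f} ns")   # 25 ns
print(f"significance: {significance:.1f} sigma")    # about 2.4 sigma
```

At roughly 2.4 standard deviations, the result falls far short of the five-sigma threshold conventionally required to claim a discovery.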