Since the pre-publication release of its original dataset, CERN’s data has been raked over by physicists, computer scientists and statisticians all over the world. Some have looked for exotic explanations for the results (such as allowing the neutrinos to pop off into another dimension for the trip), while others have turned their attention to the possibility that there are systematic errors in the experiment itself.
Criticisms of the experiment have proposed errors in the time measurement, asked whether CERN properly accounted for Earth’s rotation, or queried the statistical analysis of the results.
It’s this last aspect that CERN now wants to put to bed. According to the BBC, the design of the experiment has been revised to make it easier to correlate the received neutrinos with the proton pulse that creates them.
The old experimental design, which used 10-microsecond bursts of protons at CERN to generate the neutrinos received at Gran Sasso, is being replaced by a new design in which the proton bursts will last just a couple of nanoseconds, with a 500 ns gap between bursts.
This should permit a more precise correlation between events at the two ends of the experiment, and therefore give physicists more confidence in whatever neutrino speed is inferred from the results.
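To see why the shorter bursts matter, a rough sketch helps: a detected neutrino could have originated anywhere within its parent proton burst, so the burst width sets a floor on the per-event timing ambiguity. The snippet below uses the pulse widths quoted above; the ~730 km CERN-to-Gran Sasso baseline is a widely reported figure and not taken from this article.

```python
# Back-of-the-envelope comparison of timing ambiguity under the old
# and new OPERA proton-burst widths. Pulse widths are from the article;
# the ~730 km baseline is an assumption from public reporting.

C = 299_792_458.0        # speed of light, m/s
BASELINE_M = 730_000.0   # approximate CERN -> Gran Sasso distance, m

def light_travel_time_ns(distance_m: float) -> float:
    """Time light takes to cover the given distance, in nanoseconds."""
    return distance_m / C * 1e9

travel_ns = light_travel_time_ns(BASELINE_M)   # roughly 2.4 million ns
old_burst_ns = 10_000.0   # old design: 10-microsecond bursts
new_burst_ns = 2.0        # new design: "a couple of nanoseconds"

print(f"light travel time over baseline: {travel_ns:,.0f} ns")
print(f"old per-event timing ambiguity:  ~{old_burst_ns:,.0f} ns")
print(f"new per-event timing ambiguity:  ~{new_burst_ns:,.0f} ns")
# The burst width bounds how precisely a received neutrino can be
# matched to its parent pulse: ~10,000 ns before the redesign,
# ~2 ns after -- a far tighter correlation at both ends.
```

The point of the comparison is that the old burst width dwarfed any nanosecond-scale timing effect, while the new one does not.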
As theoretical physicist Matt Strassler notes in this blog post, CERN’s original experiment wasn’t designed with highly accurate time correlations in mind. OPERA’s main research program is the observation of neutrino oscillations, which puts a premium on generating large numbers of neutrinos.
Welcoming the new measurements, Strassler writes: “apparently the concerns raised by the community have been strong enough to prompt OPERA to request that the CERN neutrino beam operators … send them short pulses.”
A new dataset is expected by the end of November. ®