After I watched "Particle Fever"--the movie about the Large Hadron Collider (LHC) and the successful identification of the Higgs boson--I became a bit concerned with that team's handling of various negative PR incidents. Further, given the amount of money spent and the pressure to produce results, I also realized that we may not, any time soon, have a way to reproduce any of their results at another facility with other research teams.
With this in mind, how much scientific confidence can we put in something like the mass of the Higgs? Not in the claim that a "Higgs-like" particle was found, but in the actual calculated mass of the particle known as the "Higgs boson"?
This is a question about the scientific method and the reproducibility of experimental results. Are there similar cases where we have accepted the confirmation of a theory without being able to reproduce the experimental results?
UPDATE: What is the confidence level of the LHC-calculated mass of the Higgs? How was this confidence level determined? What are the implications of it being wrong? How long will it take before we know if it is wrong?
Thank you all for your comments. I do believe this is an important, specific and scientific discussion that can have specific, factual answers.
UPDATE 2: I guess I'm not the only one asking these questions; this is very interesting:
http://www.sci-news.com/physics/science-techni-higgs-discovery-higgs-boson-02266.html
“The current data is not precise enough to determine exactly what the particle is. It could be a number of other known particles,” Dr Frandsen said.
A related question, based on the Frandsen et al. paper, might be: what if it is not the Higgs at all?
UPDATE 4/9/15: Came across this re: the reversal of the BICEP2 "discovery" due to having a 2nd team and 2nd set of instruments via the Planck telescope. Without Planck, the BICEP2 team might still be claiming their CMB discovery... perhaps for a long time to come, potentially leading cosmology research (time and dollars) down the wrong cosmic inflation "rabbit hole." This seems to be an example supporting the importance and relevance of the question I raised here regarding the LHC and acceptance of its "discoveries" without a 2nd team using a 2nd set of instruments (i.e., a 2nd beam and collider): http://physicsworld.com/cws/article/news/2015/feb/03/galactic-dust-sounds-death-knell-for-bicep2-gravitational-wave-claim
Answer
Most of the reproduction of results in particle physics comes from two sources:
Competing experiments running nearly simultaneously.
In this case both ATLAS and CMS got comparable results. Now, they both use the beam from the LHC, so how do we know the beam is properly understood? Because while they were commissioning those machines they reproduced dozens (literally multiples of twelve) of results that don't actually require the full capacity of the LHC. Notably, they re-discovered the top quark and re-measured its mass, lifetime and branching ratios. (A rough sketch of how two such independent measurements are compared and combined follows the second point below.)
Moreover, CERN has a long and very successful history in characterizing beams. They know how to do that stuff.
Commissioning of the next-generation machine.
This, of course, doesn't do us any good just now, but it is coming. Indeed, the LHC is the machine answering the same question about the Tevatron (a decade ago you could, after all, have asked something similar about CDF and D0 discovering the top quark).
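To make the first point a bit more concrete, here is a minimal sketch of how two independent measurements of the same quantity can be checked for consistency and combined by inverse-variance weighting. The numbers are illustrative placeholders in roughly the right range, not the official ATLAS/CMS values, and the real combination treats correlated systematic uncertainties far more carefully.

# Illustrative sketch: consistency check and inverse-variance combination
# of two independent measurements of the same quantity. The inputs below
# are placeholders, NOT the official ATLAS/CMS results or procedure.

import math

def combine(m1, s1, m2, s2):
    """Inverse-variance weighted average of two independent measurements."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    sigma = math.sqrt(1.0 / (w1 + w2))
    return mean, sigma

def compatibility(m1, s1, m2, s2):
    """Difference between the two results in units of its combined uncertainty."""
    return abs(m1 - m2) / math.sqrt(s1**2 + s2**2)

# Hypothetical stand-ins for two experiments' Higgs-mass results (GeV)
atlas_m, atlas_s = 125.4, 0.4
cms_m, cms_s = 125.0, 0.3

mean, sigma = combine(atlas_m, atlas_s, cms_m, cms_s)
pull = compatibility(atlas_m, atlas_s, cms_m, cms_s)

print(f"combined mass = {mean:.2f} +/- {sigma:.2f} GeV")
print(f"difference between experiments = {pull:.1f} sigma")

If the difference comes out at a fraction of a sigma, the two independent measurements are telling the same story; a several-sigma difference would instead signal that one experiment, or the shared beam, is not understood.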
As an aside, don't underestimate the extent to which ATLAS and CMS are different machines, built by different groups with different cultures, differing in almost all the particulars. Nor the degree to which each of these organizations is looking over the other's shoulder: yes, they put on a collegial show when announcing their results together, but the competition is fierce and they do take each other's reports apart in search of things to criticize.
Second aside: a lot of biomedical research has had a problem in recent years with results not being reproduced at all. It is an issue when studies are expensive (which is true in particle physics) and money is tight (when isn't it?), so how does particle physics come out ahead in this matter? As far as I can tell it is a matter of building machines that are flexible, and of developing a deep pool of talent when it comes to turning the data set around to examine it from another angle.