Sunday 1 September 2019

quantum field theory - LHC data and mathematics of QFT


I'm reading Frédéric Paugam's Towards the Mathematics of Quantum Field Theory, an advanced theoretical physics book.


I would like to know how I could apply the theories in this book. For example, could I use LHC data to confirm the theories in the book? Could I apply LHC data to test the representation theory of the standard model?


Are there any other sources of data for confirming/rejecting mathematical models of similar physical principles?



Answer



Yes, you could use LHC data for that purpose, and that is exactly what is being done at the LHC. Technically, however, it is a lot harder than you might imagine.



An experiment like ATLAS or CMS (the large LHC detectors) does not give you useful event counts just by looking at the raw data. There are far too many error sources, both in the physics at the interaction point (where the particles collide) and in the physics of the detector systems. Most of the events at the interaction point, for instance, are "soft", low-momentum-transfer scattering events. In the language of field theory this is related to IR divergences, I believe: the scattering problem for a 1/r electrostatic potential is simply not well defined in naive scattering amplitudes, not even in classical physics, and far more events are "grazing" collisions than hard, high-momentum-transfer ones. This means we have to know how to subtract all these soft events from our data without removing the relevant information. At the same time, these soft events give us calibration information about the beam cross sections, which depend very much on the precise tuning of the magnetic fields that guide the beams. We need that information to make the machine perform at its best; otherwise its utilization would be too low to ever make the measurements we are interested in.
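To make the IR remark concrete, here is the standard classical result it alludes to, the Rutherford cross section for scattering off a 1/r Coulomb potential (a textbook formula, nothing LHC-specific):

$$\frac{d\sigma}{d\Omega} = \left(\frac{Z_1 Z_2 e^2}{4E}\right)^2 \frac{1}{\sin^4(\theta/2)}, \qquad \sigma_{\mathrm{tot}} = \int \frac{d\sigma}{d\Omega}\, d\Omega \to \infty,$$

since the integrand blows up as $\theta \to 0$: arbitrarily gentle, grazing deflections dominate the total rate, which is exactly why the soft events vastly outnumber the hard collisions.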


In addition to these soft events, there is a slew of background events that look exactly like the events one is actually interested in (e.g. when trying to identify the Higgs). This is called irreducible background, because it comes from genuine physical processes that produce the same signature as the signal and therefore cannot simply be cut away. One needs very precise standard-model calculations of these irreducible backgrounds, because it is only the deviations from them that contain the "new" physics. Thankfully, many of these backgrounds had been studied at machines built before the LHC; they were measured at LEP and at Fermilab, for instance. Even before the LHC could be built, we needed a very firm understanding of what these background events would look like; otherwise we could not have designed detectors precise enough to measure them well enough to do the necessary subtraction from the actual signal.
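As a cartoon of what "subtracting an irreducible background" means numerically, here is a minimal sketch; every number and name in it is invented for illustration and not taken from any real analysis:

```python
import math

# Toy numbers: observed event count in some signal region, and the
# background expectation predicted by standard-model simulation.
n_observed = 1250          # events counted in data (hypothetical)
n_background = 1180.0      # irreducible background from simulation (hypothetical)
bkg_uncertainty = 25.0     # systematic uncertainty on that prediction

# The "signal" is only what is left after the background is subtracted.
n_signal = n_observed - n_background

# Poisson error on the raw count, combined in quadrature with the
# background systematic.
total_error = math.sqrt(n_observed + bkg_uncertainty**2)

print(f"signal estimate: {n_signal:.1f} +/- {total_error:.1f} events")
```

Only if the excess is large compared to its total error does it mean anything; with the numbers above it does not.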


In practice the procedure is iterative. Before such a machine is even built, theorists and phenomenologists use the standard model to calculate what the expected signal will look like. There is a slew of software tools for this, which produce stochastic (Monte Carlo) simulations of the expected events. At the core of these tools are the quantum field theoretical calculations based on the standard model Lagrangian. From these calculations one determines the number, types and momenta of the particles expected to emerge from the proton collisions.
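Real event generators (tools in the Pythia/MadGraph family) implement the full matrix-element and parton-shower machinery; the sketch below only shows the shape of their output, a list of particles with kinematic variables, for a made-up, drastically simplified two-body process:

```python
import math
import random

def generate_toy_event():
    """Produce one fake event: a back-to-back muon pair with an
    exponentially falling pT spectrum. This stands in for a real
    matrix-element calculation, which it does not resemble physically."""
    pt = random.expovariate(1.0 / 50.0)        # GeV, arbitrary scale
    phi = random.uniform(0.0, 2.0 * math.pi)
    return [
        {"id": "mu-", "pt": pt, "eta": random.gauss(0.0, 2.0), "phi": phi},
        {"id": "mu+", "pt": pt, "eta": random.gauss(0.0, 2.0),
         "phi": (phi + math.pi) % (2.0 * math.pi)},
    ]

for event in (generate_toy_event() for _ in range(3)):
    print(event)
```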


This large set of simulated kinematic data is then fed into other simulation tools, such as GEANT, which calculate the particle-matter interactions within a mechanical model of the detectors. That output drives an electrical model of the detector, which simulates the electronic response of the detector elements (all of which collect ionization and/or optical scintillation/Cherenkov signals). Further simulation code then converts this simulated electronic signal into the ones and zeros that physicists expect to see coming from the data acquisition electronics of the real detectors.
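GEANT (today Geant4) does the particle-matter transport for real; the fragment below only illustrates the very last step mentioned above, turning one simulated energy deposit into the kind of digitized count a physicist sees from the DAQ. All constants are invented:

```python
import random

def digitize(energy_deposit_gev,
             light_yield=500.0,    # photons per GeV (hypothetical)
             gain=4.0,             # ADC counts per photon (hypothetical)
             pedestal=100.0,       # electronics baseline
             noise_sigma=2.0):     # electronics noise, in ADC counts
    """Convert a simulated energy deposit into a fake ADC reading."""
    mean_photons = energy_deposit_gev * light_yield
    n_photons = max(random.gauss(mean_photons, mean_photons ** 0.5), 0.0)
    adc = pedestal + gain * n_photons + random.gauss(0.0, noise_sigma)
    return int(round(adc))

print(digitize(1.5))   # one 1.5 GeV deposit -> one integer ADC count
```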


This simulated DAQ data is then fed into yet another layer of software, which uses physical knowledge of the detectors to estimate the types and momenta of the simulated particles that passed through the simulated detector. From that we derive a set of performance metrics for the simulated detector: how many of the true particles does it identify correctly, and what do the errors on the momentum measurements look like? This is then used to optimize the layout and technology of the real detectors.
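The performance metrics in question boil down to things like reconstruction efficiency and momentum resolution. A minimal sketch of how one tabulates them from matched true/reconstructed pairs (the input list is made up):

```python
import statistics

# Pairs of (true pT, reconstructed pT) in GeV; None means the particle was missed.
matched = [(50.0, 51.2), (80.0, 78.5), (30.0, None), (120.0, 118.9), (45.0, 46.8)]

found = [(t, r) for t, r in matched if r is not None]
efficiency = len(found) / len(matched)

# Relative momentum residuals for the particles that were found.
residuals = [(r - t) / t for t, r in found]
resolution = statistics.pstdev(residuals)

print(f"efficiency: {efficiency:.2f}")
print(f"relative pT resolution: {resolution:.3f}")
```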


Finally, when there is enough evidence from simulations that the real detectors will perform well, mockups (usually small segments) of these detectors are built in the lab and placed in a real particle beam whose properties have been precisely characterized by previous experiments. If the simulations agree with the calibration data from these test-beam runs, they are ready to be used to design the real detector. If they do not, we have to tweak both the detector mockups and the simulation tools until we understand exactly what will happen when the real beam hits the real detector.
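"Agree" has a quantitative meaning here, e.g. a bin-by-bin goodness-of-fit test between the simulated detector response and the test-beam measurement. A toy version, with made-up histograms:

```python
# Hypothetical binned response: test-beam data vs. the detector simulation.
data = [102, 215, 340, 198, 95]            # measured counts per bin
sim = [110.0, 205.0, 350.0, 190.0, 90.0]   # simulated expectation per bin

# Pearson chi-square with Poisson errors taken from the data counts.
chi2 = sum((d - s) ** 2 / d for d, s in zip(data, sim))
ndof = len(data)
print(f"chi2/ndof = {chi2:.1f}/{ndof}")  # a large value flags a mis-modelled detector
```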


Eventually, after about 20 years of this kind of work, the detectors can be built and all the simulation packages are calibrated for the real measurement. Now the real fun starts. We turn on the machine at low energy and low luminosity and take data in an energy range we believe we understand. We run the output of the detectors through the previously calibrated reconstruction algorithms and keep tweaking both the detectors and the reconstruction until the results agree with our knowledge of the standard model. Only then can we ramp up the beam energy and luminosity into a parameter range that has never been explored before. That data is then run through the reconstruction code, which takes out the detector errors, factors in the beam parameters (which change over time), and gives us physically meaningful event rates. It is those event rates that we can now compare against the simulated event rates for our calibrated detector...
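The end product of that chain is usually quoted as a cross section. Schematically, with $\varepsilon$ the calibrated detector efficiency, $N_{\mathrm{bkg}}$ the background estimate and $L_{\mathrm{int}}$ the time-integrated luminosity delivered by the machine,

$$\sigma = \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{\varepsilon\, L_{\mathrm{int}}},$$

and it is this number, with its full error budget, that gets compared to the standard-model prediction from the simulation chain above.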


Finally, after thousands of physicists have been sweating over this for decades, we can say with some certainty that what we see in our detectors is actually comparable to what we expect to see; and if there are discrepancies between the real data and the expected data that go beyond the (also calibrated) error bars, then we can talk about having discovered new physics. Needless to say, all of this has to be done in parallel on multiple detectors and with several different sets of theoretical (and software) tools, to rule out that somebody made a trivial mistake somewhere. Also needless to say, I have simplified the actual process almost to absurdity in this post. In reality it is still much harder than that.
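The "beyond the error bars" criterion is itself quantified as a statistical significance. In the simplest Gaussian approximation (a real analysis uses far more careful statistics),

$$Z = \frac{N_{\mathrm{obs}} - N_{\mathrm{exp}}}{\sigma_{\mathrm{tot}}},$$

and by particle-physics convention nobody talks about a discovery before $Z \gtrsim 5$, the famous "five sigma".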


Would you still like to get into the business of validating the standard model against LHC data?

