In QFT for high energy or condensed matter physics, a tree-level diagram gives the classical or mean-field result, while a loop diagram gives a quantum correction, i.e. a (thermal or quantum) fluctuation above the mean field. In most cases we only need to compute the $1$-loop correction, and in general it seems that higher-loop corrections have no important meaning beyond increasing the accuracy of the numerical value. I have even heard professors say, "If your advisor wants you to compute higher-loop corrections, you need to consider changing advisors." I am curious whether, in high energy or condensed matter physics, there are cases in which higher-loop ($n\ge 2$) corrections have important physical consequences. Or are there effects that can be found only after taking higher-loop corrections into account?
Answer
Sure. Consider the following examples:
The anomalous magnetic moment of the electron is known (and needed) to five loops (plus two loops in the weak bosons). It is used to measure the fine-structure constant to a relative standard uncertainty of less than one part per billion. Similarly, the anomalous magnetic moment of the muon has been proposed as a rather clean and quantitative probe of physics beyond the Standard Model (cf. this PSE post). More importantly (and essentially due to lepton universality), the one-loop computation of the anomaly is mass-independent (i.e., the same for the three generations). You have to calculate the anomaly to at least two loops to be able to observe a difference (historically, this difference, and its agreement with the calculation, was the most convincing evidence that the muon is a lepton, that is, a heavy electron; for some time people thought it might instead be the Yukawa meson).
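To make the mass dependence explicit (a schematic sketch in standard notation, not quoting the precise two-loop coefficients): the one-loop (Schwinger) term is the same for every lepton, while the two-loop contribution contains vacuum-polarization insertions of the other leptons and therefore depends on mass ratios,
$$a_\ell \;=\; \frac{\alpha}{2\pi} \;+\; \left(\frac{\alpha}{\pi}\right)^2\Big[\,c_2 + f\!\big(m_\ell/m_{\ell'}\big)\Big] \;+\; \cdots,$$
where $c_2$ stands for the universal two-loop constant and $f$ for the mass-ratio-dependent piece; it is only at this order that $a_e$ and $a_\mu$ start to differ.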
Similarly, the muon decay width is the best observable from which to measure the weak coupling constant, and the current experimental precision requires a theoretical calculation to several loops. (More generally, several precision tests of the electroweak part of the Standard Model are already sensitive to two-loop effects and beyond.)
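For concreteness (standard textbook formula, with the corrections written only schematically): the tree-level Fermi-theory prediction for the muon width is
$$\Gamma_\mu \;=\; \frac{G_F^2\, m_\mu^5}{192\pi^3}\,\big[\,1 + \Delta(\alpha, m_e/m_\mu)\,\big],$$
so extracting $G_F$ from the measured lifetime at the current experimental precision requires the radiative correction $\Delta$ to a matching (multi-)loop order.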
The fact that massive non-abelian Yang-Mills is non-renormalisable can only be established by computing two loops (cf. this PSE post). In the one-loop approximation, the theory appears to be renormalisable.
The fact that naïve quantum gravity (in vacuum) is non-renormalisable can only be established by computing two loops (cf. this PSE post). In the one-loop approximation, the theory appears to be renormalisable.
Some objects are in fact one-loop exact (the beta function in supersymmetric Yang-Mills, the axial anomaly, etc.). This can be established non-perturbatively, but one is usually skeptical about these results, because of the usual subtleties inherent to QFT. The explicit two-loop computation of these objects helped convince the community that there is an overall coherent picture behind QFT, even if the details are sometimes not as rigorous as one would like.
In many cases, the counter-terms that arise in perturbation theory actually vanish at one loop (e.g., the wave-function renormalisation in $\phi^4$ in $d=4$). When this happens, you need to calculate the two-loop contribution in order to obtain the first non-trivial term in the corresponding anomalous dimension or beta function, so as to be able to tell, for example, whether the theory is IR/UV free.
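As a concrete illustration (quoting the standard textbook values for $\mathcal L = \tfrac12(\partial\phi)^2 - \tfrac12 m^2\phi^2 - \tfrac{\lambda}{4!}\phi^4$ in $d=4$): the one-loop self-energy is momentum-independent, so the field does not get renormalised at that order, and the anomalous dimension only starts at two loops, while the coupling already runs at one loop,
$$\gamma_\phi \;=\; \frac{\lambda^2}{12\,(16\pi^2)^2} + O(\lambda^3), \qquad \beta_\lambda \;=\; \frac{3\lambda^2}{16\pi^2} + O(\lambda^3).$$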
In supersymmetric theories, dimensional regularisation breaks supersymmetry (essentially because the number of degrees of freedom of fermions grows differently with $d$ than that of bosons). At one-loop order this only affects the finite part of the counter-terms (which is not a terrible situation), but from two loops on the violation of SUSY affects the divergent part of the counter-terms as well, which in turn affects the beta functions. (The solution is to use the so-called dimensional reduction scheme.)
In an arbitrary theory, to one-loop order the beta function is independent of the regularisation scheme. From two loops on, the beta function becomes scheme dependent (cf. this PSE post). This has some funny consequences, such as the possibility of introducing the so-called 't Hooft renormalisation scheme (cf. this PSE post), where the beta function is in fact two-loop exact!
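The mechanism behind this, sketched in generic conventions: a change of scheme is equivalent to a redefinition of the coupling, $\tilde g = g + a_1 g^3 + a_2 g^5 + \cdots$. If $\beta(g) = -b_0 g^3 - b_1 g^5 - b_2 g^7 - \cdots$, a short computation with $\tilde\beta(\tilde g) = (d\tilde g/dg)\,\beta(g)$ gives
$$\tilde\beta(\tilde g) \;=\; -b_0\,\tilde g^3 \;-\; b_1\,\tilde g^5 \;-\; \big(b_2 + \text{terms involving } a_1, a_2\big)\,\tilde g^7 \;-\; \cdots,$$
i.e. the first two coefficients are universal, while from three loops on they can be tuned at will; 't Hooft's scheme uses precisely this freedom to set them all to zero.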
It was suggested not so long ago that there might be choices of the gauge parameter $\xi$ that cure all divergences. For example, at one loop the Yennie gauge $\xi=3$ eliminates the IR divergence in QED (associated with the masslessness of the photon), and people pondered the possibility that this might hold to any loop order. Similarly, the Landau gauge $\xi=0$ does the same for the UV divergences. We now know that in both cases this is just a coincidence, and no such cancellation holds at higher orders. But we only know this because the actual computation was performed to higher loops; otherwise, the possibility that such a cancellation works to all orders would still be on the table. And it would definitely be a desirable situation!
The fact that the vacuum of the Standard Model is unstable if the Higgs mass is $m_h>129.4\ \mathrm{GeV}$ requires a two-loop computation (cf. arXiv:1205.6497). This is fun: why is the bound so close to the measured value? Could higher loops bring the number even closer? Surely that would mean something!
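Schematically (one-loop terms only, gauge contributions omitted, with the convention $V \supset \lambda |H|^4$; the two-loop pieces are exactly what sharpen the quoted bound): the top Yukawa drives the quartic coupling negative at high scales,
$$16\pi^2\,\frac{d\lambda}{d\ln\mu} \;\simeq\; 24\lambda^2 + 12\lambda y_t^2 - 6 y_t^4 + \cdots,$$
and the (in)stability of the vacuum hinges on whether $\lambda(\mu)$ stays positive all the way up to the Planck scale, which is why the bound is so sensitive to the loop order of the calculation.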
A meaningful and consistent estimate of the GUT scale has been obtained by taking two loops into account (e.g., arXiv:hep-ph/9209232, arXiv:1011.2927, etc.; see also Grand Unified Theories, from the PDG).
Resummation of divergent series is a very important and active topic, not only as a matter of practice but also as a matter of principle. It is essential to be able to calculate diagrams to very high loop order in order to test these resummation methods.
Historically speaking, the first tests of the renormalisation group equation were performed by comparison to an explicit two-loop computation. Indeed, the RGE allows you to estimate the two-loop large-log contributions given the one-loop calculation. The fact that the explicit two-loop computation agreed with what the RGE predicted helped convince the community that the latter was a correct and useful concept.
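Concretely, in generic conventions with $\beta(g) = b_0 g^3 + \cdots$, integrating the one-loop RGE gives
$$g^2(\mu) \;=\; \frac{g_0^2}{1 - 2 b_0\, g_0^2 L} \;=\; g_0^2\Big[\,1 + 2 b_0 g_0^2 L + 4 b_0^2 g_0^4 L^2 + \cdots\Big], \qquad L \equiv \ln\frac{\mu}{\mu_0},$$
so the coefficient of the two-loop leading log $g_0^6 L^2$ is already fixed by the one-loop $b_0$, and an explicit two-loop diagrammatic computation can be checked against it.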
In the same vein, the initial one-loop calculation of the critical exponents (at the Wilson-Fisher fixed point) of certain systems was viewed with a lot of skepticism (after all, it was an expansion in powers of $\epsilon=1$, with $d=4-\epsilon$). The agreement with the experimental result could very well have been a coincidence. Higher loops consolidated the Wilsonian picture of QFT and the whole idea of integrating out irrelevant operators. Nowadays the critical exponents (in $(\boldsymbol \phi^2)^2$ theory) have been computed up to five loops, and the agreement (after Borel resummation) with experiments/simulations is wonderful. And even if the asymptotic series were not numerically accurate, one could argue that the result is still very informative/useful, at least as far as classifying universality classes is concerned.
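For reference (quoting the standard leading-order Wilson-Fisher results for the $O(n)$ model in $d = 4 - \epsilon$, worth double-checking against a review):
$$\nu \;=\; \frac12 + \frac{n+2}{4(n+8)}\,\epsilon + O(\epsilon^2), \qquad \eta \;=\; \frac{n+2}{2(n+8)^2}\,\epsilon^2 + O(\epsilon^3);$$
note that $\eta$ itself only starts at order $\epsilon^2$, i.e. at two loops, so even its leading estimate requires going beyond one loop.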
Generically speaking, loop calculations become much more interesting (and challenging) from two loops on, because of overlapping divergences, the emergence of transcendental integrals (polylogarithms), etc. To one-loop order, naïve power counting arguments are essentially all one needs in order to establish convergence of Feynman integrals. The non-trivial structure of a local QFT can only be seen from two loops on (e.g., the factorisation of the Symanzik polynomials in the Hepp sectors, which is the key to Weinberg's convergence theorem, etc.).
Some of these examples are admittedly more contrived than others, but I hope they work together to help convince you that higher orders in perturbation theory are not merely a textbook exercise.