This may be more of a philosophical question than a physics question, but here goes. The standard line is that nonrenormalizable QFTs aren't predictive because you need to specify an infinite number of couplings/counterterms. But strictly speaking, this is only true if you want your theory to be predictive at all energy scales. As long as you only consider processes below a certain energy scale, it's fine to truncate your Lagrangian after a finite number of interaction terms (or stop your Feynman expansion at some finite skeleton vertex order) and treat your theory as an effective theory. Indeed, our two most precise theories of physics - general relativity and the Standard Model - are essentially effective theories that only work well in certain regimes (although not quite in the technical sense described above).
As physicists, we're philosophically predisposed to believe that there is a single fundamental theory, requiring only a finite amount of information to specify, which describes processes at all energy scales. But one could imagine the possibility that quantum gravity is simply described by a QFT with an infinite number of counterterms, where the higher-energy the process you want to consider, the more counterterms you need to include. If this were the case, then no one would ever be able to confidently predict the result of an experiment at arbitrarily high energy. But the theory would still be completely predictive below a given energy scale: if you wanted to study the physics at that scale, you'd just need to experimentally measure the values of the relevant counterterms once, and then you'd always be able to predict the physics at that scale and below. So we'd be able to predict the physics at any energy to which we had experimental access, regardless of how technologically advanced our experiments were at the time.
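To make this "measure the counterterms once, then predict at that scale and below" idea concrete, here is a toy numerical sketch. Everything in it is an illustrative assumption of mine, not part of any real theory: the "true" amplitude is taken to be a power series in $E/M$ with random order-one coefficients, and we fit only the first few of them from measurements made below the cutoff.

```python
import numpy as np

# Toy model (purely illustrative): the "true" amplitude is a power
# series A(E) = sum_n c_n (E/M)^n with unknown order-one coefficients.
rng = np.random.default_rng(0)
M = 1.0                                   # hypothetical cutoff scale
c_true = rng.normal(0.0, 1.0, size=10)    # hypothetical true couplings

def amplitude(E, coeffs):
    """Evaluate sum_n coeffs[n] * (E/M)**n."""
    return sum(c * (E / M) ** n for n, c in enumerate(coeffs))

# "Measure" the amplitude once, at a few energies below the cutoff...
E_meas = np.array([0.05, 0.10, 0.15, 0.20])
A_meas = np.array([amplitude(E, c_true) for E in E_meas])

# ...and fit only the first four counterterms from those measurements.
design = np.vander(E_meas / M, N=len(E_meas), increasing=True)
c_fit = np.linalg.solve(design, A_meas)

# The truncated theory then predicts the physics at that scale and below.
for E in (0.05, 0.12, 0.20):
    print(f"E/M = {E:.2f}: true = {amplitude(E, c_true):+.5f}, "
          f"truncated EFT = {amplitude(E, c_fit):+.5f}")
```

In this toy setup the four fitted coefficients reproduce the full amplitude up to the neglected $(E/M)^4$ corrections, which is exactly the sense in which a truncated nonrenormalizable theory stays predictive below the cutoff.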
Such a scenario would admittedly be highly unsatisfying from a philosophical perspective, but is there any physical argument against it?
Answer
You suggest that we can use a nonrenormalizable (NR) theory at energies greater than the cutoff, by measuring sufficiently many coefficients at any given energy.
However, a general expansion of an amplitude in a NR theory that breaks down at a scale $M$ reads $$ A(E) = A^0(E) \sum_n c_n \left(\frac{E}{M}\right)^n, $$ where I have assumed that the amplitude is characterized by a single energy scale $E$. For $E < M$ the terms are suppressed by increasing powers of $E/M$, so a finite truncation achieves any desired accuracy. But at any energy $E \ge M$, successive terms are no smaller than the ones before them, so we cannot calculate amplitudes from any finite subset of the unknown coefficients.
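The qualitative behavior of this expansion is easy to see numerically. Below is a minimal sketch, assuming $A^0(E) = 1$ and generic order-one coefficients $c_n = 1$ (both choices are mine, for illustration only):

```python
# Partial sums of the toy series A(E) = sum_n c_n (E/M)^n,
# assuming A0(E) = 1 and generic order-one coefficients c_n = 1.
M = 1.0  # cutoff scale

def amplitude_truncated(E, n_max):
    """Truncate the EFT expansion at order n_max."""
    return sum((E / M) ** n for n in range(n_max + 1))

for E in (0.1, 0.5, 1.0, 2.0):
    partials = [amplitude_truncated(E, n) for n in (2, 4, 8, 16)]
    print(f"E/M = {E / M:.1f}: orders 2, 4, 8, 16 ->",
          ", ".join(f"{a:10.3f}" for a in partials))

# For E/M < 1 the partial sums converge quickly, so finitely many
# measured coefficients suffice.  For E/M >= 1 each new term is at
# least as large as the last, and no finite truncation converges.
```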
On the other hand, we could have an infinite tower of NR effective field theories (EFTs). The new fields introduced in each EFT could successively raise the cutoff. In practice, however, this is nothing other than discovering new physics at higher energies and describing it with QFT, which is what we've been doing at colliders for decades.