In this blog post, Lubos Motl claims that any
commutator may be shown to reduce to the classical Poisson brackets: $$ \lim \limits_{\hbar \to 0} \frac{1}{i\hbar} \left[ \hat{F}, \hat{G} \right] = \{F, G\},$$
where $\hat{F}$ and $\hat{G}$ are the Hermitian operators corresponding to the classical observables $F$ and $G$. How is this done?
Edit: As ACuriousMind points out, the proof is trivial if you start with a classical Hamiltonian and then quantize it via a reasonable quantization procedure. But what I have in mind is starting with a quantum Hamiltonian (and the canonical commutation relation $[\hat{x}_i, \hat{p}_j] = i\hbar\, \delta_{ij}$), then taking some limit $\hbar \to 0$ and showing that the resulting emergent classical theory has Poisson brackets that agree with the quantum commutators. Under these assumptions, you can't use any facts about your quantization procedure, because you never quantize a classical Hamiltonian at all.
Answer
I cannot speak to the deep questions, and other answers here address them. My contribution is to show
$$ \lim_{\hbar \to 0} \frac{1}{i\hbar}[ F(p,x) , G(p,x)] = \{F(p,x), G(p,x)\}_{P.B.} $$
where $ [ F, G] = F G - G F $ and
$$ \{ F(p,x), G(p,x) \} = \frac{\partial F}{\partial x} \frac{\partial G}{\partial p} - \frac{\partial G}{\partial x} \frac{\partial F}{\partial p}. $$
Preliminaries.
With $[x, p] = i \hbar$, you can show the following two equalities:
$$ [x, f(p) ] = i \hbar \frac{\partial f}{\partial p} $$
and
$$ [p , g(x) ] = - i \hbar \frac{\partial g}{\partial x}. $$
I think this is almost mandatory in every QM course, so I will skip the derivation. In any case, the standard route is to compute the commutator of $x$ with increasing powers of $p$, then develop $f(p)$ as a Taylor series and proceed by induction.
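If you prefer not to redo the algebra by hand, the two identities can be checked symbolically for sample polynomials. The sketch below uses SymPy noncommutative symbols; the `normal_order` helper, the names `X` and `P`, and the test polynomials are all mine, introduced just for this check:

```python
import sympy as sp

# Noncommutative stand-ins for the operators x-hat and p-hat.
X, P = sp.symbols('X P', commutative=False)
hbar = sp.symbols('hbar', positive=True)

def normal_order(expr):
    """Repeatedly apply X*P = P*X + i*hbar until every P stands left of every X."""
    expr = sp.expand(expr)
    while True:
        new = sp.expand(expr.subs(X * P, P * X + sp.I * hbar))
        if new == expr:
            return new
        expr = new

def comm(A, B):
    """Commutator [A, B], returned in normal-ordered form."""
    return normal_order(A * B - B * A)

# [x, p^3] should equal i*hbar * d(p^3)/dp = 3*i*hbar*p^2
check1 = sp.expand(comm(X, P**3) - 3 * sp.I * hbar * P**2)

# [p, x^2] should equal -i*hbar * d(x^2)/dx = -2*i*hbar*x
check2 = sp.expand(comm(P, X**2) + 2 * sp.I * hbar * X)

print(check1, check2)  # both residuals should be 0
```

This only tests polynomial $f$ and $g$, but by linearity that is exactly the content of the Taylor-series argument above.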
A more illustrative example is the following:
$$ [x^{2} , f(p) ] = [x ,f(p) ] x + x [x, f(p)] \\ = i \hbar f'(p)\, x + i \hbar x \, f'(p) = 2 i\hbar x f'(p) - i\hbar [x , f'(p)]\\ = 2 i \hbar x f'(p) - (i \hbar)^{2} f''(p) $$
where I have introduced the rather useful shorthand $ f'(p) = d f /dp $.
By now you can see the fun is in arbitrary powers of $x$. You should more or less be able to guess the result and prove it by induction.
Lemma.
$$ [x^{n} , f(p) ] = \sum_{j=1}^{n} (-)^{j+1} \binom{n}{j} \, (i \hbar)^{j} x^{n-j} \, f^{(j)}(p) $$
Proof: You do it. Use induction on $n$; it should be more or less straightforward. By the way, $\binom{n}{j}$ denotes the binomial coefficient.
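While the induction itself is pen-and-paper work, the lemma can at least be spot-checked for small $n$. Here is a SymPy sketch along the same lines as before; the `normal_order` helper and all names are my own choices, and $f$ is taken to be a sample polynomial so that derivatives can be computed on a commutative placeholder:

```python
import sympy as sp

X, P = sp.symbols('X P', commutative=False)  # operator stand-ins
p = sp.symbols('p')                          # commutative placeholder for derivatives
hbar = sp.symbols('hbar', positive=True)

def normal_order(expr):
    """Repeatedly apply X*P = P*X + i*hbar until every P stands left of every X."""
    expr = sp.expand(expr)
    while True:
        new = sp.expand(expr.subs(X * P, P * X + sp.I * hbar))
        if new == expr:
            return new
        expr = new

def lemma_residual(n, f):
    """[x^n, f(p)] minus the lemma's right-hand side; zero iff the lemma holds."""
    fP = f.subs(p, P)
    lhs = X**n * fP - fP * X**n
    rhs = sum((-1)**(j + 1) * sp.binomial(n, j) * (sp.I * hbar)**j
              * X**(n - j) * sp.diff(f, p, j).subs(p, P)
              for j in range(1, n + 1))
    return sp.expand(normal_order(lhs - rhs))

res = lemma_residual(3, p**4)
print(res)  # expect 0
```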
Moment of truth.
The previous argument extends to an analytic function of $x$. Consider
$$ [ g(x) , f(p)] = \Biggl[ \, \sum_{k=0}^{\infty} \frac{1}{k!} g^{(k)} (0)\, x^{k}, \, f(p) \Biggr] = \sum_{k=0}^{\infty} \frac{1}{k!} g^{(k)} (0) \Bigl[ x^{k}, f(p) \Bigr] \\ = \sum_{k=0}^{\infty} \frac{1}{k!} g^{(k)} (0) \sum_{j=1}^{k} (-)^{j+1} \binom{k}{j} \, (i \hbar)^{j} x^{k-j} \, f^{(j)}(p) \\ = \sum_{j=1}^{\infty} \frac{(-)^{j+1}}{j!} \, (i \hbar)^{j} g^{(j)}(x) \, f^{(j)}(p). $$
The trick in the last equality is to switch the two sums and then expand the binomial coefficient: the $1/j!$ factors out and everything fits nicely.
It is interesting to notice that the double sum collapses into a single one. This makes sense by dimensional analysis: powers of $x$ and $p$ are removed in pairs, each pair compensated by a factor of $\hbar$.
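The collapsed single-sum formula can also be checked symbolically for polynomial $g$ and $f$ (for polynomials the series terminates, so the check is exact). As before, the `normal_order` helper and all names are my own scaffolding:

```python
import sympy as sp

X, P = sp.symbols('X P', commutative=False)
x, p = sp.symbols('x p')            # commutative placeholders for derivatives
hbar = sp.symbols('hbar', positive=True)

def normal_order(expr):
    """Repeatedly apply X*P = P*X + i*hbar until every P stands left of every X."""
    expr = sp.expand(expr)
    while True:
        new = sp.expand(expr.subs(X * P, P * X + sp.I * hbar))
        if new == expr:
            return new
        expr = new

g, f = x**3, p**3                   # sample polynomial functions
gX, fP = g.subs(x, X), f.subs(p, P)

lhs = gX * fP - fP * gX             # [g(x), f(p)]
# Right-hand side: sum over j of (-1)^(j+1)/j! (i hbar)^j g^(j)(x) f^(j)(p),
# with the x-dependent factor kept to the left, as in the derivation.
rhs = sum((-1)**(j + 1) / sp.factorial(j) * (sp.I * hbar)**j
          * sp.diff(g, x, j).subs(x, X) * sp.diff(f, p, j).subs(p, P)
          for j in range(1, 4))     # series terminates at j = 3 here

residual = sp.expand(normal_order(lhs - rhs))
print(residual)  # expect 0
```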
The final part is the most subtle point. A general $F(p,x)$ is tricky because $x$ and $p$ do not commute, so you run into issues of hermiticity and operator ordering. I will choose every $p$ to stand to the left of every $x$. Once this convention is agreed upon, a general $F(p,x)$ can be written as
$$ F(p, x) = \sum_{n=0}^{\infty} \alpha_{n} (p) \, f_{n} (x), \qquad G(p, x) = \sum_{m=0}^{\infty} \beta_{m} (p) \, g_{m} (x). $$
Now we can compute
$$ \Bigl[ F(p,x) , G(p,x) \Bigr] = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \Bigl[ \alpha_{n} (p) \, f_{n} (x), \, \beta_{m} (p) \, g_{m} (x) \Bigr] \\ = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \Bigl( \alpha_{n} (p) \bigl[ f_{n} (x) , \beta_{m} (p) \bigr] g_{m} (x) + \beta_{m} (p) \bigl[ \alpha_{n} (p) , g_{m} (x) \bigr] f_{n} (x) \Bigr) \\ = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \Biggl( \alpha_{n} (p) \biggl( \sum_{j=1}^{\infty} \frac{(-)^{j+1}}{j!} \, (i \hbar)^{j} f^{(j)}_{n} (x) \, \beta^{(j)}_{m} (p) \biggr) g_{m} (x) + \beta_{m} (p) \biggl( \sum_{j=1}^{\infty} \frac{(-)^{j}}{j!} \, (i \hbar)^{j} g^{(j)}_{m}(x) \, \alpha^{(j)}_{n}(p) \biggr) f_{n} (x) \Biggr) $$
Then, using
$$ \sum_{n=0}^{\infty} \alpha_{n} (p) \, f_{n}^{(j)} (x) = \frac{\partial^{j}}{\partial x^{j}} \biggl( \sum_{n=0}^{\infty} \alpha_{n} (p) \, f_{n} (x) \biggr) = \frac{\partial^{j}}{\partial x^{j}} F(p,x) $$
you see that you get the desired result (after swapping the order of summation):
$$ \Biggl[ F(p,x), G(p,x) \Biggr] = \sum_{j=1}^{\infty} (-)^{j} \frac{(i \hbar)^{j}}{j!} \Biggl( \frac{\partial^{j} G}{\partial x^{j}} \frac{\partial^{j} F}{\partial p^{j}} - \frac{\partial^{j} F}{\partial x^{j}} \frac{\partial^{j} G}{\partial p^{j}} \Biggr) $$
because, after dividing by $i\hbar$ and taking $\hbar \to 0$, the only term that survives is the $j = 1$ term, and it is exactly the Poisson bracket. I have not written out the longer intermediate computations, but the argument should be reasonably convincing.
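The whole chain, commutator divided by $i\hbar$, then $\hbar \to 0$, can be run end to end on a concrete pair of observables. Below, `Fop` and `Gop` are sample operators written in the $p$-left ordering convention of the answer, and `normal_order` is my own helper, not part of the original derivation:

```python
import sympy as sp

X, P = sp.symbols('X P', commutative=False)
x, p = sp.symbols('x p')            # classical phase-space variables
hbar = sp.symbols('hbar', positive=True)

def normal_order(expr):
    """Repeatedly apply X*P = P*X + i*hbar until every P stands left of every X."""
    expr = sp.expand(expr)
    while True:
        new = sp.expand(expr.subs(X * P, P * X + sp.I * hbar))
        if new == expr:
            return new
        expr = new

# Sample observables, written with every p to the left of every x,
# matching the ordering convention chosen in the answer.
Fop, Gop = P**2 * X, P * X**2
Fcl, Gcl = p**2 * x, p * x**2

commutator = normal_order(Fop * Gop - Gop * Fop)
# Once normal-ordered, replacing operators by classical variables is unambiguous.
classical = commutator.subs({P: p, X: x}) / (sp.I * hbar)
classical_limit = sp.limit(sp.expand(classical), hbar, 0)

# Poisson bracket with the same sign convention as the answer:
# {F, G} = dF/dx dG/dp - dG/dx dF/dp
poisson = sp.diff(Fcl, x) * sp.diff(Gcl, p) - sp.diff(Gcl, x) * sp.diff(Fcl, p)
print(sp.simplify(classical_limit - poisson))  # expect 0
```

For this pair the commutator divided by $i\hbar$ is the Poisson bracket plus a single $O(\hbar)$ correction, which the limit kills, exactly the structure of the general formula above.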