From this mathstack page and in particular Qmechanic's answer:
- There exists an $n$-dimensional generalization
$$\delta^n({\bf f}({\bf x})) ~=~ \sum_{{\bf x}_{(0)}:~{\bf f}({\bf x}_{(0)})=0} \frac{1}{\left|\det\frac{\partial {\bf f}({\bf x})}{\partial {\bf x}}\right|}\,\delta^n({\bf x}-{\bf x}_{(0)}) \tag{1}$$
of the substitution formula for the Dirac delta distribution under pertinent assumptions, such as e.g. that the function ${\bf f}:\Omega \subseteq \mathbb{R}^n \to \mathbb{R}^n$ has isolated zeros. Here the sum on the rhs of eq. (1) extends over all the zeros ${\bf x}_{(0)}$ of the function ${\bf f}$.
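As a quick one-dimensional check of eq. (1) (a standard textbook example, not part of the quoted answer): $f(x) = x^2 - a^2$ with $a > 0$ has the two isolated zeros $x_{(0)} = \pm a$, with $|f'(\pm a)| = 2|a|$, so the formula gives
$$\delta(x^2 - a^2) = \frac{1}{2|a|}\Big[\delta(x - a) + \delta(x + a)\Big].$$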
Also from this page on the Faddeev-Popov procedure they say:
For ordinary functions, a property of the Dirac delta function gives
$$\delta(x - x_0) = \left|\frac{df(x)}{dx}\right|_{x=x_0}\delta(f(x)),$$
assuming $f(x)$ has only one zero at $x = x_0$ and is differentiable there. Integrating both sides gives
$$1 = \left|\frac{df(x)}{dx}\right|_{x=x_0}\int\!dx\,\delta(f(x)).$$
Extending over $n$ variables, suppose $f(x^i) = 0$ for some $x^i_0$. Then, replacing $\delta(x - x_0)$ with $\prod_i^n \delta^i(x^i - x^i_0)$:
$$1 = \left(\prod_i \left|\frac{\partial f(x^i)}{\partial x^i}\right|\right) \int\!\left(\prod_i dx^i\right)\,\delta(f(x^i)).$$
Recognizing the first factor as the determinant of the diagonal matrix $\frac{\partial f(x^i)}{\partial x^i}\delta^{ij}$ (no summation implied), we can generalize to the functional version of the identity:
$$1 = \det\left|\frac{\delta G}{\delta \Omega}\right|_{G=0} \int\!\mathcal{D}\Omega\,\delta[G_a(\phi^\Omega)],$$
where $\Delta_F[\phi] \equiv \det\left|\frac{\delta F}{\delta g}\right|_{F=0}$ is the Faddeev-Popov determinant.
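The one-variable identity is easy to check numerically by smearing the delta function into a narrow Gaussian $\delta_\epsilon$. Below is a minimal sketch (my own illustration, not part of the quoted text; the function $f(x) = x^3 - 1$ and the width $\epsilon$ are arbitrary choices):

```python
import numpy as np

# Gaussian regularization of the Dirac delta: delta_eps -> delta as eps -> 0.
def delta_eps(y, eps=1e-3):
    return np.exp(-y**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# Example choice: f(x) = x^3 - 1 has a single real zero at x0 = 1,
# where |f'(x0)| = 3.
x = np.linspace(0.5, 1.5, 2_000_001)
dx = x[1] - x[0]

integral = np.sum(delta_eps(x**3 - 1.0)) * dx   # ~ 1/|f'(x0)| = 1/3
print(3.0 * integral)                           # ~ 1.0
```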
What I don't understand is that their function seems to be $f:\Omega \subseteq \mathbb{R}^n \to \mathbb{R}$. How does the generalized Dirac formula (1) work in this case? I don't really understand their notation in
$$1 = \left(\prod_i \left|\frac{\partial f(x^i)}{\partial x^i}\right|\right) \int\!\left(\prod_i dx^i\right)\,\delta(f(x^i)).$$
What does $\frac{\partial f(x^i)}{\partial x^i}$ mean here?
Answer
The notation
\begin{equation} \frac{\partial f_i}{\partial x^i} \end{equation}
means the diagonal elements of the Jacobian matrix
\begin{equation} J_{ij} = \frac{\partial f_i}{\partial x^j}, \end{equation}
where $f_i$ is the $i$-th component of the vector $\vec{f}(x)$.
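As a concrete illustration (my own example, not from the original answer): for $\vec{f}(x^1, x^2) = \big(x^1 x^2,\; (x^1)^2 + x^2\big)$ the Jacobian matrix is
\begin{equation} J = \begin{pmatrix} x^2 & x^1 \\ 2x^1 & 1 \end{pmatrix}, \end{equation}
and the factors $\frac{\partial f_i}{\partial x^i}$ appearing in the quoted product are its diagonal entries, $x^2$ and $1$.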
I found this very confusing a few weeks ago as well, so here is the proof I wrote up for the identity, based on the response I received to an earlier question of mine here:
Recall that if $f(x)$ has one zero at $x_0$, then
\begin{equation} \int dx \left| \frac{df(x)}{dx} \right|_{x=x_0} \delta\left(f(x)\right) = 1. \end{equation}
We want to generalize this from $f(x)$ to a function $\mathbf{g}(\mathbf{a})$ of vectors of arbitrary size. To do this, consider the Taylor expansion of $\mathbf{g}$ around its root (we assume it has only one root, $\mathbf{a}_0$):
\begin{equation} g_i(\mathbf{a}) = \overbrace{g_i(\mathbf{a}_0)}^{0} + \sum_j \frac{\partial g_i}{\partial a_j}\bigg|_{\mathbf{a}_0} (a_j - a_{0,j}) + \ldots \end{equation}
We want to insert this into a delta function, $\delta^{(n)}(\mathbf{g}(\mathbf{a}))$, which is only nonzero near $\mathbf{a} = \mathbf{a}_0$. Thus we have
\begin{align} \delta\left(\mathbf{g}(\mathbf{a})\right) &= \prod_i \delta\left(g_i(\mathbf{a})\right) \\ &= \prod_i \delta\Big(\sum_j J_{ij}(a_j - a_{0,j})\Big), \end{align}
where $J_{ij}$ is the Jacobian matrix defined by $J_{ij} \equiv \frac{\partial g_i}{\partial a_j}\big|_{\mathbf{a}_0}$. Written out,
\begin{align} \delta\left(\mathbf{g}(\mathbf{a})\right) &= \delta\Big(\sum_j J_{1j}(a_j - a_{0,j})\Big)\, \delta\Big(\sum_j J_{2j}(a_j - a_{0,j})\Big) \cdots \end{align}
We now use the identity
\begin{equation} \delta\left(\alpha(a - a_0)\right) = \frac{\delta(a - a_0)}{\left|\alpha\right|}. \end{equation}
Since $|\det J|$ is unchanged by a unitary (orthogonal) change of basis, we may rotate to a basis in which $J$ is diagonal, so that the $i$-th delta function constrains only $a_i$:
\begin{align} \delta\big(\mathbf{g}(\mathbf{a})\big) &= \frac{\delta(a_1 - a_{0,1})}{\left|J_{11}\right|}\, \frac{\delta(a_2 - a_{0,2})}{\left|J_{22}\right|} \cdots \end{align}
Taking the Jacobian matrix to be positive (otherwise replace $\det J$ below by $|\det J|$), the product of the diagonal entries gives
\begin{equation} (J_{11} J_{22} \cdots)^{-1} = \frac{1}{\det J}, \end{equation}
where we have used the fact that the determinant of $J$ is independent of the unitary transformation. So we finally have
\begin{align} \left(\int \prod_i da_i\right) \delta^{(n)}\big(\mathbf{g}(\mathbf{a})\big) \det\Big(\frac{\partial g_i}{\partial a_j}\Big) &= 1, \end{align}
where it is understood that the Jacobian matrix is evaluated at the root of $\mathbf{g}$.
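The same smeared-delta trick checks the $n$-dimensional identity numerically. A minimal sketch for $n = 2$ (my own illustration; the function $\mathbf{g}$ below is an arbitrary choice with a single root in the integration box):

```python
import numpy as np

# Gaussian regularization of the Dirac delta.
def delta_eps(y, eps=0.02):
    return np.exp(-y**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# Example choice: g(a) = (a1^2 + a2 - 2, a1 - a2) has a single root
# a0 = (1, 1) inside the box [0.5, 1.5]^2. Its Jacobian there is
# [[2, 1], [1, -1]], so |det J| = 3.
n = 1501
grid = np.linspace(0.5, 1.5, n)
a1, a2 = np.meshgrid(grid, grid, indexing="ij")
da = grid[1] - grid[0]

integrand = delta_eps(a1**2 + a2 - 2.0) * delta_eps(a1 - a2)  # delta^(2)(g(a))
integral = integrand.sum() * da * da                          # ~ 1/|det J| = 1/3
print(3.0 * integral)                                         # ~ 1.0
```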
We write the continuum generalization of this equation as, \begin{equation} \int {\cal D} \alpha (x) \delta \left( G ( A ^\alpha ) \right) \det \left( \frac{ \delta G ( A ^\alpha ) }{ \delta \alpha } \right) = 1 \end{equation}
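The dictionary behind this continuum step is the usual heuristic one (standard, though not spelled out above): the discrete index $i$ becomes the continuous label $x$, so that
\begin{align} a_i &\longrightarrow \alpha(x), \\ g_i(\mathbf{a}) &\longrightarrow G(A^\alpha)(x), \\ J_{ij} &\longrightarrow \frac{\delta G(A^\alpha)(x)}{\delta \alpha(y)}, \\ \prod_i da_i &\longrightarrow \mathcal{D}\alpha(x), \end{align}
and the ordinary determinant becomes a functional determinant.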