For a system of electric charges $q_i$, at positions $\mathbf{r}_i$, with a nonzero net charge $Q=\sum_i q_i$, one can define a "centre of charge" in the obvious way as $$ \mathbf{r}_c=\frac{1}{Q}\sum_i q_i\mathbf{r}_i. $$ This concept is definitely not as useful as one might naively hope, but it does have a physical significance as the position of the origin that sets the system's dipole moment (and therefore the dipolar term in a multipolar expansion) to zero. That means, then, that the monopole approximation is far better in the far field than normally: the electrostatic potential goes as $$ \Phi(\mathbf{r})=\frac{Q}{|\mathbf{r}-\mathbf{r}_c|}+O\left(\frac{1}{|\mathbf{r}-\mathbf{r}_c|^{3}}\right) $$ where the subleading term is of order $1/r^3$, instead of $1/r^2$ as usual.
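(A quick numerical sanity check of this statement, as a minimal Python sketch; the point charges and their values below are purely illustrative.)

```python
import numpy as np

# Hypothetical point charges with nonzero net charge (illustrative values only).
q = np.array([2.0, -0.5, 1.5, -1.0])            # charges q_i, net charge Q = 2
r = np.array([[0.3, 0.1, -0.2],
              [1.0, -0.4, 0.5],
              [-0.7, 0.2, 0.9],
              [0.1, 0.8, -0.6]])                # positions r_i

Q = q.sum()
r_c = (q[:, None] * r).sum(axis=0) / Q          # centre of charge

# The dipole moment computed about the centre of charge vanishes identically,
# which is what makes the monopole approximation one order better than usual.
d = (q[:, None] * (r - r_c)).sum(axis=0)
print(d)                                        # ~ [0, 0, 0] up to rounding
```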
For a neutral system with $Q=0$, however, the concept of centre of charge is meaningless and there is no monopole term in the expansion. The relevant concept is then the dipole moment, $$ \mathbf{d}=\sum_i q_i(\mathbf{r}_i-\mathbf{r}_0), $$ which is independent of the position $\mathbf{r}_0$ of the origin. On the other hand, this means that the relative importance of the subleading term in the multipole expansion is higher than above: $$ \Phi(\mathbf{r})=\frac{\mathbf{d}\cdot(\mathbf{r}-\mathbf{r}_0)}{|\mathbf{r}-\mathbf{r}_0|^{3}}+O\left(\frac{1}{|\mathbf{r}-\mathbf{r}_0|^{3}}\right). $$ My question is: for a neutral system, is it possible to find a suitable position $\mathbf{r}_0$ for the origin that will set the subleading, quadrupole term to zero? Since this entails a system of five equations (linear in $\mathbf{r}_0$ when $Q=0$), I suspect this is impossible in general geometries. If this is the case, which geometries allow for vanishing quadrupole moments and which ones don't? In the cases where one can do this, does this position have a special name? More generally, if all multipole moments up to some $l\geq0$ are zero, (when) can the subleading term be made to vanish?
Answer
The obvious example is a purely dipolar charge distribution that has been displaced from the origin, such as a dipolar gaussian $$ \rho(\mathbf r) =p_z(z-z_0)\frac{e^{-(\mathbf{r}-\mathbf{r}_0)^2/2\sigma^2}}{\sigma^5(2\pi)^{3/2}} , $$ where $z_0$ is the $z$ component of $\mathbf{r}_0$. This system is neutral, and it has a nonzero dipole moment $p_z=\int z\rho(\mathbf r)\mathrm d\mathbf r$ along the $z$ axis. It also has a nonzero quadrupole moment; for example, $$ Q_{zz} =\int \left(z^2-\frac{r^2}{3}\right)\rho(\mathbf r)\mathrm d\mathbf r =\frac{4}{3}p_z z_0. $$ However, this is trivially fixable by putting the origin at $\mathbf r_0$, in which case the charge density becomes purely dipolar and all higher multipole moments vanish. This is obviously a slightly contrived example, but it shows that there do exist charge densities whose quadrupole moments are nonzero about the original origin but can nevertheless be transformed away by a suitable translation.
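(These two integrals can be checked symbolically; here is a minimal sympy sketch, taking $\mathbf r_0=(0,0,z_0)$ for simplicity. The symbolic Gaussian integrals may take a few seconds.)

```python
import sympy as sp

x, y, z, pz, z0 = sp.symbols('x y z p_z z_0', real=True)
sigma = sp.symbols('sigma', positive=True)

# Displaced dipolar Gaussian, with its centre placed at (0, 0, z0) for simplicity.
gauss = sp.exp(-(x**2 + y**2 + (z - z0)**2) / (2 * sigma**2)) / (sigma**5 * (2 * sp.pi)**sp.Rational(3, 2))
rho = pz * (z - z0) * gauss

def moment(f):
    return sp.integrate(f * rho, (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo), (z, -sp.oo, sp.oo))

r2 = x**2 + y**2 + z**2
print(sp.simplify(moment(1)))               # net charge: 0
print(sp.simplify(moment(z)))               # dipole moment: p_z
print(sp.simplify(moment(z**2 - r2 / 3)))   # Q_zz: 4*p_z*z_0/3
```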
Unfortunately, this is not always the case. As a counterexample, consider the charge density $$ \rho(\mathbf r) =\left( \frac{p_x x}{\sigma^2} + \frac{Q_{xx}(x^2-y^2)}{2\sigma^4} \right) \frac{e^{-\mathbf{r}^2/2\sigma^2}}{\sigma^3(2\pi)^{3/2}}. $$ Here $\rho(\mathbf r)$ is neutral, it has a nonvanishing dipole moment $\mathbf p=\int \mathbf r\rho(\mathbf r)\mathrm d\mathbf r=p_x\hat{\mathbf x}$, and it has nonzero quadrupole moments $$ Q_{xx} =-Q_{yy} =\int \left(x^2-\frac{r^2}{3}\right)\rho(\mathbf r)\mathrm d\mathbf r $$ along the $x$ and $y$ axes. (Of course, $Q_{zz}$ and all off-diagonal elements are zero.)
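(The same kind of symbolic check works for this density; a sketch:)

```python
import sympy as sp

x, y, z, px, Qxx = sp.symbols('x y z p_x Q_xx', real=True)
sigma = sp.symbols('sigma', positive=True)

gauss = sp.exp(-(x**2 + y**2 + z**2) / (2 * sigma**2)) / (sigma**3 * (2 * sp.pi)**sp.Rational(3, 2))
rho = (px * x / sigma**2 + Qxx * (x**2 - y**2) / (2 * sigma**4)) * gauss

def moment(f):
    return sp.integrate(f * rho, (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo), (z, -sp.oo, sp.oo))

r2 = x**2 + y**2 + z**2
print(sp.simplify(moment(1)))               # net charge: 0
print(sp.simplify(moment(x)))               # dipole moment: p_x
print(sp.simplify(moment(x**2 - r2 / 3)))   # Q_xx
print(sp.simplify(moment(y**2 - r2 / 3)))   # -Q_xx
print(sp.simplify(moment(z**2 - r2 / 3)))   # 0
```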
Here you have essentially one quadrupole moment to cancel out, and one nonzero dipole moment component with which to do so, so if all you're doing is counting linearly independent components then it looks like you have a good chance. Unfortunately, this doesn't work.
To be more specific, take a passive approach where you keep $\rho(\mathbf r)$ as it is and you calculate the new multipole moments about a different origin, $$ Q'_{ij} =\int \left((x_i-x_i^{(0)})(x_j-x_j^{(0)})-\delta_{ij}\frac{(\mathbf r-\mathbf r_0)^2}{3}\right)\rho(\mathbf r)\mathrm d\mathbf r. $$ Under these conditions the moments transform as follows: \begin{align} Q'_{xx} & = Q_{xx} -\frac{4}{3} p_x x_0, \\ Q'_{yy} & = -Q_{xx} +\frac{2}{3} p_x x_0,\\ Q'_{zz} & =\frac{2}{3} p_x x_0. \end{align} It is therefore possible to choose the new origin's $x$ coordinate $x_0$ such that $Q'_{xx}$ vanishes, but it is patently impossible to make all the components vanish. This completes the proof: some systems are not susceptible to this sort of simplification, and for them the subleading term in the multipole expansion always lags the leading term by a single power of $1/r$, instead of the two powers that can be achieved in the charged case.
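(A quick symbolic check that these three conditions are mutually incompatible unless $Q_{xx}=0$ to begin with; a minimal sympy sketch using the expressions above.)

```python
import sympy as sp

x0, px, Qxx = sp.symbols('x_0 p_x Q_xx', real=True)

# Shifted diagonal quadrupole moments of the counterexample, as functions of x_0.
Qp_xx = Qxx - sp.Rational(4, 3) * px * x0
Qp_yy = -Qxx + sp.Rational(2, 3) * px * x0
Qp_zz = sp.Rational(2, 3) * px * x0

# Asking for all three to vanish at once only has the trivial solution.
print(sp.solve([Qp_xx, Qp_yy, Qp_zz], [x0, Qxx], dict=True))
# expected: [{x_0: 0, Q_xx: 0}], i.e. it only works if the quadrupole was zero anyway
```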
As to how this looks, here are some plots of our counterexample $\rho(\mathbf r)$ as the parameters go from purely dipolar to purely quadrupolar and back:
None of the displayed distributions (with the exception of the extremes) can be brought to dipole + hexadecapole form via a translation.
OK, so the answer is sometimes yes and sometimes no, which by itself does little to explain what's going on. So, let's have a closer look and characterize the cases where it is and isn't possible to transform away the quadrupole components; that will tell us why some systems work and some don't.
The trick here (and here I thank my undergraduate supervisor, Eduardo Nahmad-Achar, for the crucial suggestion) is to boil down the quadrupole moment tensor to its essential components. $Q_{ij}$ looks like an intimidating quantity, but it is simpler than it appears. In particular, we're defining it as $$ Q_{ij}=\int \left(x_ix_j -\delta_{ij}\frac{r^2}{3}\right) \rho(\mathbf r)\mathrm d \mathbf r, $$ so it is easy to see that it is a second-rank tensor with respect to rotations, and that it is symmetric and traceless (in the sense that $\sum_iQ_{ii}=0$). The symmetry takes the number of independent components down from nine to six, and the tracelessness from there to five, but in fact you can go lower: because the tensor is symmetric, it can always be diagonalized, so the off-diagonal components can be set to zero with a suitably aligned set of axes. The tracelessness then reduces $Q_{ij}$ to only two independent components in this frame.
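(In code, this reduction is just an eigendecomposition; a small numpy sketch with a made-up tensor:)

```python
import numpy as np

# A hypothetical symmetric, traceless quadrupole tensor (illustrative numbers).
Q = np.array([[ 1.0,  0.3, -0.2],
              [ 0.3, -0.4,  0.5],
              [-0.2,  0.5, -0.6]])
assert np.allclose(Q, Q.T) and abs(np.trace(Q)) < 1e-12

# Diagonalize: the eigenvector matrix R rotates us into the quadrupole eigenframe.
eigvals, R = np.linalg.eigh(Q)
Q_diag = R.T @ Q @ R
print(np.round(Q_diag, 12))   # diagonal in the new axes
print(eigvals.sum())          # ~ 0: only two of the three diagonal entries are independent
```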
So, let's start by setting up a general framework for our transformed quadrupole moments. Using the passive transformation from before, and expanding the square, we get \begin{align} Q'_{ij} & = \int \left( \left(x_i-x_i^{(0)}\right)\left(x_j-x_j^{(0)}\right)-\delta_{ij}\frac{(\mathbf r-\mathbf r_0)^2}{3} \right)\rho(\mathbf r)\mathrm d\mathbf r \\ & = \int \left( x_ix_j-x_ix_j^{(0)}-x_jx_i^{(0)}+x_i^{(0)}x_j^{(0)}-\delta_{ij}\frac{r^2 - 2\mathbf r\cdot\mathbf r_0+r_0^2}{3} \right)\rho(\mathbf r)\mathrm d\mathbf r \\ & = Q_{ij} -x_j^{(0)}p_i-x_i^{(0)}p_j + \frac23 \delta_{ij}\mathbf{r}_0\cdot\mathbf p , \end{align} where the constant terms in $x_i^{(0)}x_j^{(0)}$ and in $\frac13\delta_{ij} r_0^2$ cancel out because the system is neutral. This system splits into two types of equations: three with $i\neq j$, and three with $i=j$, which look relatively different: \begin{align} Q'_{ij} &= Q_{ij}- p_ix_j^{(0)}-p_jx_i^{(0)}\text { for }i\neq j,\text{ and}\\ Q'_{ii} &= Q_{ii}-2\left( p_ix_i^{(0)}-\frac13 \mathbf p\cdot\mathbf r_0 \right). \end{align} Here the goal is to have the primed moments vanish, and without loss of generality we've put ourselves in a frame where the off-diagonal moments, $Q_{ij}$ for $i\neq j$, vanish to begin with. That means, then, that our equations really read \begin{align} p_ix_j^{(0)}+p_jx_i^{(0)} & = 0\text { for }i\neq j,\text{ and}\\ \left( p_ix_i^{(0)}-\frac13 \mathbf p\cdot\mathbf r_0 \right) & = \frac12 Q_{ii}. \end{align} These two sets are rather different, and it's better to tackle them separately.
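(The transformation law $Q'_{ij}=Q_{ij}-x_j^{(0)}p_i-x_i^{(0)}p_j+\frac23\delta_{ij}\,\mathbf r_0\cdot\mathbf p$ derived here is easy to verify numerically for a random neutral set of point charges; a short numpy sketch:)

```python
import numpy as np

rng = np.random.default_rng(0)

# A random neutral collection of point charges (a continuous density works the same way).
q = rng.normal(size=6)
q -= q.mean()                       # enforce zero net charge
r = rng.normal(size=(6, 3))
r0 = rng.normal(size=3)             # candidate new origin

def quadrupole(q, r, origin):
    d = r - origin
    return np.einsum('n,ni,nj->ij', q, d, d) - np.eye(3) / 3 * np.einsum('n,ni,ni->', q, d, d)

p = np.einsum('n,ni->i', q, r)      # dipole moment (origin independent, since the system is neutral)
Q = quadrupole(q, r, np.zeros(3))

# Q'_ij = Q_ij - x_j0 p_i - x_i0 p_j + (2/3) delta_ij r0.p, as derived above.
Q_shifted = Q - np.outer(p, r0) - np.outer(r0, p) + 2 / 3 * np.eye(3) * (r0 @ p)
print(np.allclose(quadrupole(q, r, r0), Q_shifted))   # True
```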
Let's start, then, with the diagonal set of equations, \begin{align} \left(p_i\hat{\mathbf e}_i -\frac13 \mathbf p\right)\cdot \mathbf r_0 = \frac12 Q_{ii}, \end{align} which looks nice enough. However, there's a problem with this set, in that it has a built-in linear dependence that comes from the innate tracelessness of our original quadrupole moments. This becomes somewhat clearer when expressed in coordinate form as \begin{align} \frac23 p_x x^{(0)} - \frac13 p_y y^{(0)} - \frac13 p_z z^{(0)} & = \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} + \frac23 p_y y^{(0)} - \frac13 p_z z^{(0)} & = \frac12 Q_{yy} \\ -\frac13 p_x x^{(0)} - \frac13 p_y y^{(0)} + \frac23 p_z z^{(0)} & = \frac12 Q_{zz}, \end{align} in that if you add all three equations, both sides vanish identically; that is, any equation is (the negative of) the sum of the other two. The solution set of each of these equations is a plane in $\mathbf r_0$ space, and we are guaranteed that if two of the planes intersect then the other plane must also contain that line of intersection.
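(The linear dependence is easy to exhibit explicitly; a sympy sketch of the coefficient matrix of this set:)

```python
import sympy as sp

px, py, pz = sp.symbols('p_x p_y p_z', real=True)

# Coefficient matrix of the three diagonal equations in the unknowns (x0, y0, z0).
M = sp.Matrix([[2*px, -py,  -pz],
               [-px,  2*py, -pz],
               [-px,  -py,  2*pz]]) / 3

print(sp.ones(1, 3) * M)       # [0, 0, 0]: the three equations sum to zero identically
print(sp.simplify(M.det()))    # 0: the system is always rank-deficient
```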
In addition, we also know that sometimes these equations can fail to have solutions and that therefore the planes can be parallel. Indeed, we see this in our 'sometimes you can't' example, where the equations reduce to \begin{align} \frac23 p_x x^{(0)} & = \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} & = - \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} & =0, \end{align} which is not a consistent set of equations, and describes three parallel planes normal to the $x$ axis.
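(In linear-algebra terms, the inconsistency shows up as the augmented matrix having a larger rank than the coefficient matrix; a numeric sketch with illustrative values:)

```python
import numpy as np

px, Qxx = 1.0, 1.0    # illustrative nonzero values

# Diagonal equations of the 'sometimes you can't' example, unknowns (x0, y0, z0).
A = np.array([[ 2/3 * px, 0.0, 0.0],
              [-1/3 * px, 0.0, 0.0],
              [-1/3 * px, 0.0, 0.0]])
b = np.array([Qxx / 2, -Qxx / 2, 0.0])

print(np.linalg.matrix_rank(A))                        # 1
print(np.linalg.matrix_rank(np.column_stack([A, b])))  # 2: larger, so there is no solution
```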
On the other hand, since the system is linearly dependent, we also know that if a solution does exist, then it will have a degeneracy of at least dimension $1$ along the common line of intersection of the planes. Finding the direction of this line tells us a lot about the planes, and we can get it directly as the cross product of any two normal vectors, so, for definiteness, \begin{align} \mathbf n & = \left(p_1\hat{\mathbf e}_1 -\frac13 \mathbf p\right) \times \left(p_2\hat{\mathbf e}_2 -\frac13 \mathbf p\right) \\ & = p_1p_2\hat{\mathbf e}_3 -\frac13p_1\hat{\mathbf e}_1\times(p_2\hat{\mathbf e}_2+p_3\hat{\mathbf e}_3) -\frac13p_2(p_1\hat{\mathbf e}_1+p_3\hat{\mathbf e}_3)\times\hat{\mathbf e}_2 \\ & = p_1p_2\hat{\mathbf e}_3 -\frac13(p_1p_2\hat{\mathbf e}_3-p_1p_3\hat{\mathbf e}_2) -\frac13(p_1p_2\hat{\mathbf e}_3-p_2p_3\hat{\mathbf e}_1) \\ & = \frac13\left[\vphantom{\sum} p_1p_2\hat{\mathbf e}_3+p_2p_3\hat{\mathbf e}_1+p_3p_1\hat{\mathbf e}_2 \right]. \end{align} This is a really cool result, and as a bonus we know that we get the same vector for the pairs $(2,3)$ and $(3,1)$ of planes (and $-\mathbf n$ for the reversed pairs).
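(A quick symbolic check of this cross product; a sympy sketch:)

```python
import sympy as sp

p1, p2, p3 = sp.symbols('p_1 p_2 p_3', real=True)
p = sp.Matrix([p1, p2, p3])
e = [sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])]

def normal(i):
    # Normal vector of the plane coming from the i-th diagonal equation (0-indexed here).
    return p[i] * e[i] - p / 3

n = normal(0).cross(normal(1))
print(sp.simplify(n.T))                                 # [p_2*p_3/3, p_1*p_3/3, p_1*p_2/3]
print(sp.simplify((normal(1).cross(normal(2)) - n).T))  # [0, 0, 0]: the next pair gives the same vector
```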
This then gives us a first criterion: if, in an eigenframe of the original quadrupole moment, any two components of the dipole moment are nonzero, then the planes describing the equations for the vanishing of the diagonal components will intersect in a line. If only one of these components is nonzero, then the planes are parallel, and there will only be solutions if all the planes coincide.
Having said all this, the analysis above is not enough to solve the problem, because we still need to follow up on what happens with the off-diagonal moments, which were originally zero because of the frame we chose but need to remain at zero. Those equations read \begin{align} \phantom{p_i x^{(0)}} + p_z y^{(0)} + p_y z^{(0)} & = 0 \\ p_z x^{(0)} \phantom{+p_i x^{(0)}} + p_x z^{(0)} & = 0 \\ p_y x^{(0)} + p_x y^{(0)} \phantom{+p_i x^{(0)}} & = 0, \end{align} and being a homogeneous system they're somewhat easier to handle: zero is always a solution, and they only have nonzero solutions (which we do want, or our translation by $\mathbf r_0$ doesn't do anything) if their determinant $$ \det\begin{pmatrix} 0 & p_z & p_y \\ p_z & 0 & p_x \\ p_y & p_x & 0 \end{pmatrix} =2p_xp_yp_z $$ vanishes. That puts us in a bit of a conundrum, though, because we wanted to have nonvanishing dipole components in this frame, which means that (as we really should have expected) the system is riding the edge of solvability.
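(The determinant is quick to confirm symbolically; a sympy sketch:)

```python
import sympy as sp

px, py, pz = sp.symbols('p_x p_y p_z', real=True)

# Coefficient matrix of the three off-diagonal (homogeneous) equations in (x0, y0, z0).
A = sp.Matrix([[0,  pz, py],
               [pz, 0,  px],
               [py, px, 0 ]])
print(sp.expand(A.det()))   # 2*p_x*p_y*p_z
```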
We have, therefore, two conflicting demands on how many components of the dipole are allowed to vanish in this frame. They can't all be nonzero (or the determinant above is nonzero and the off-diagonal equations only admit $\mathbf r_0=0$), and they can't all be zero (by the initial hypothesis of nonzero $\mathbf p$), so that leaves two distinct cases:
Exactly one component of the dipole vanishes. Choosing that component as $p_z$, our equations read in component form \begin{align} \frac23 p_x x^{(0)} - \frac13 p_y y^{(0)} & = \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} + \frac23 p_y y^{(0)} & = \frac12 Q_{yy}, \end{align} and \begin{align} \phantom{p_i x^{(0)} + 0 y^{(0)} + } p_y z^{(0)} & = 0 \\ \phantom{ 0 x^{(0)} +p_i x^{(0)} + } p_x z^{(0)} & = 0 \\ p_y x^{(0)} + p_x y^{(0)} \phantom{+p_i x^{(0)}} & = 0; \end{align} and from the latter we get that $z^{(0)}=0$. To get the other two components, we first solve the first set of equations, which are now guaranteed to have a unique solution, \begin{align} \begin{pmatrix} x^{(0)} \\ y^{(0)} \end{pmatrix} & = \frac32 \begin{pmatrix} 2 p_x & - p_y \\ - p_x & 2 p_y \end{pmatrix}^{-1} \begin{pmatrix} Q_{xx} \\ Q_{yy}\end{pmatrix} \\ & = \frac{3/2}{3p_xp_y} \begin{pmatrix} 2 p_y & p_y \\ p_x & 2 p_x \end{pmatrix} \begin{pmatrix} Q_{xx} \\ Q_{yy}\end{pmatrix} \\ & = \frac{1}{2p_xp_y} \begin{pmatrix} p_y(2Q_{xx} + Q_{yy}) \\ p_x(Q_{xx} + 2 Q_{yy}) \end{pmatrix} , \end{align} and then comes the final test - whether this solution satisfies the final equation, \begin{align} 0 &= p_y x^{(0)}+ p_x y^{(0)} = \frac{1}{2p_xp_y} \begin{pmatrix} p_y & p_x\end{pmatrix} \begin{pmatrix} p_y(2Q_{xx} + Q_{yy}) \\ p_x(Q_{xx} + 2 Q_{yy}) \end{pmatrix} \\ & = \frac{1}{2p_xp_y} \bigg[ p_y^2(2Q_{xx} + Q_{yy}) + p_x^2(Q_{xx} + 2 Q_{yy}) \bigg]. \end{align} Since both $p_x$ and $p_y$ need to be nonzero here, both of the coefficients are forced to vanish, and since that reads in matrix form as $$ \begin{pmatrix} 2 & 1 \\ 1 & 2\end{pmatrix} \begin{pmatrix} Q_{xx} \\ Q_{yy} \end{pmatrix}=0, $$ with a nonsingular constant matrix, we're forced to conclude that $Q_{xx}=0=Q_{yy}$ and that therefore $Q_{zz}=-Q_{xx}-Q_{yy}$ also vanishes.
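(The $2\times2$ solve and the resulting constraint can be checked symbolically; a minimal sympy sketch:)

```python
import sympy as sp

px, py, Qxx, Qyy, x0, y0 = sp.symbols('p_x p_y Q_xx Q_yy x_0 y_0', real=True)

# The two independent diagonal equations for this case.
eqs = [sp.Eq( sp.Rational(2, 3)*px*x0 - sp.Rational(1, 3)*py*y0, Qxx/2),
       sp.Eq(-sp.Rational(1, 3)*px*x0 + sp.Rational(2, 3)*py*y0, Qyy/2)]
sol = sp.solve(eqs, [x0, y0])
print(sp.simplify(sol[x0]))   # expected (2*Q_xx + Q_yy)/(2*p_x)
print(sp.simplify(sol[y0]))   # expected (Q_xx + 2*Q_yy)/(2*p_y)

# The remaining off-diagonal condition p_y*x0 + p_x*y0 = 0 then reads:
constraint = sp.simplify(py*sol[x0] + px*sol[y0])
print(sp.factor(constraint))  # matches the bracketed expression above, up to the 1/(2*p_x*p_y) prefactor
```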
That is, this option doesn't work: it requires that our original quadrupoles be zero to begin with, and the whole thing collapses to zero.
The other option, then, is that exactly two components of the dipole vanish. Putting our only remaining component along $z$, our equations read \begin{align} - \frac13 p_z z^{(0)} & = \frac12 Q_{xx} \\ - \frac13 p_z z^{(0)} & = \frac12 Q_{yy} \\ + \frac23 p_z z^{(0)} & = \frac12 Q_{zz}, \end{align} and \begin{align} \phantom{p_i x^{(0)}} + p_z y^{(0)} \phantom{+ p_y z^{(0)}}& = 0 \\ p_z x^{(0)} \phantom{ + p_i x^{(0)} + p_x z^{(0)} } & = 0. \end{align} Here the second set requires that we remain on the symmetry axis of the dipole, i.e. $x^{(0)}=y^{(0)}=0$, and the first set gives us a nonzero displacement $z^{(0)}$ along that axis - together with a strong (but not fully stifling) constraint on the quadrupole moments, namely, that $$ Q_{xx} = Q_{yy} = -\frac12 Q_{zz}. $$ In other words, the quadrupole moment needs to be zonal (as opposed to tesseral or sectoral) and it needs to have full axial symmetry.
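(Explicitly, the displacement that does the job is $z^{(0)}=\frac34 Q_{zz}/p_z$; here is a minimal sympy sketch that plugs it into the transformation law derived earlier and confirms that every quadrupole component vanishes.)

```python
import sympy as sp

pz, Qzz = sp.symbols('p_z Q_zz', real=True)

p  = sp.Matrix([0, 0, pz])
Q  = sp.diag(-Qzz/2, -Qzz/2, Qzz)                  # zonal, axially symmetric quadrupole
r0 = sp.Matrix([0, 0, sp.Rational(3, 4)*Qzz/pz])   # displacement read off from the diagonal equations

# Q'_ij = Q_ij - x_i0 p_j - x_j0 p_i + (2/3) delta_ij r0.p, as derived earlier.
Q_shifted = Q - r0*p.T - p*r0.T + sp.Rational(2, 3)*sp.eye(3)*r0.dot(p)
print(sp.simplify(Q_shifted))   # zero matrix: all quadrupole moments vanish about this origin
```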
This means that our initial 'sometimes you can' example was completely representative and it exhausted, in its essence, all the possible scenarios. To phrase the theorem in full, then, we have:
Summary
Given a neutral charge distribution with nonzero dipole and quadrupole moments, it is possible to choose an origin about which all quadrupole moments $Q'_{ij}$ vanish if and only if the quadrupole moment tensor is cylindrically symmetric - i.e., if and only if two of its eigenvalues are equal - and the dipole moment lies along its axis of symmetry.
This is really restrictive (particularly compared to the absolutely-no-qualms case one order above), but hey, that's life.
This answer supersedes two previous, incorrect versions, which are accessible via the revision history.