Tuesday 28 February 2017

acoustics - If two sound waves that are different frequencies create beats that occur several hundred times per second, can you hear this effect as its own tone?


If you have multiple waves of different frequencies, the interference from the different waves causes "beats".



(Animation from https://en.wikipedia.org/wiki/Group_velocity#/media/File:Wave_group.gif)


Let's say that a green dot in the above animation reaches your ear a few hundred times per second.


Is it possible to hear this phenomenon (wave groups occurring at frequencies in the audible range) as its own tone?



Answer



No, one cannot hear the actual beat frequency. For example, if both waves are ultrasonic and the difference in frequency is 440 Hz, you won't hear the A (unless severe nonlinearities come into play; edit: such nonlinear effects are at least 60 dB lower in sound pressure level).


When two ultrasonic waves are close in frequency, the amplitude goes up and down with the beat frequency. A microphone can show this on an oscilloscope. But the human ear does not hear the ultrasonic frequency. It is just silence varying in amplitude :)
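To see this concretely, here is a minimal numerical sketch (assuming NumPy; the sample rate and frequencies are illustrative choices, not from the question). A linear sum of two ultrasonic sines at 40 kHz and 40.44 kHz has an envelope that beats at 440 Hz, yet its spectrum contains no component at 440 Hz - which is exactly why a linear ear response hears nothing:

import numpy as np

fs = 200_000                           # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)          # one second of signal
f1, f2 = 40_000.0, 40_440.0            # two ultrasonic tones, 440 Hz apart
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

for f in (440.0, f1, f2):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:8.0f} Hz : {spectrum[idx]:.3e}")

The 440 Hz bin comes out at essentially zero; only a nonlinearity (e.g. squaring the signal before the FFT) creates a genuine 440 Hz component.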


(I know a physics textbook where this is wrong.)


Edit: in some cases the mind can perceive the pitch of a "missing fundamental". For example, when sine waves of 880 and 1320 Hz are played, the mind may perceive a tone of pitch A. This is a psychoacoustic phenomenon, exploited for example in the auditory illusion of an Escher's staircase.


quantum mechanics - Energy quantization in the path integral and the Fourier spectrum of the action


I offered a bounty on this question for a simple way to see that the Feynman path integral yields discrete energy levels for bound states, in one-dimensional quantum mechanics. As shown there, there is in principle a simple explanation. The path integral computes the propagator by $$K(x_i, x_f, t) \equiv \langle x_f | e^{- i H t/\hbar} | x_i \rangle = \int_{x(0) = x_i, x(t) = x_f} \mathcal{D}x(t)\, e^{iS[x(t)]/\hbar}.$$ On the other hand, applying a semiclassical approximation, the path integral is just $$K(x_i, x_f, t) \sim \bigg| \frac{\partial^2 S_0(x_i, x_f, t)}{\partial x_i \partial x_f}\bigg|\, e^{i S_0(x_i, x_f, t)/\hbar}$$ where $S_0$ is the on-shell action, i.e. the action for the classical path that goes from $x_i$ to $x_f$ in time $t$, and where I'm ignoring issues about the existence and uniqueness of such a path. This makes perfect sense because in the semiclassical approximation we just expand about that classical path, and all the path integral does is provide the extra factor out in front, reflecting how much the nearby paths in the path integral amplify or suppress the classical one.


On the other hand, working in the energy eigenbasis, we have $$K(x_i, x_f, t) = \sum_{n,m} \langle x_f | n \rangle \langle n| e^{-iHt/\hbar} |m \rangle \langle m | x_i \rangle = \sum_n \langle x_f | n \rangle \langle n | x_i \rangle e^{-i E_n t/\hbar}$$ so we get discrete energies if the Fourier transform of $K(x_i, x_f, t)$ in time has discrete support, which is equivalent to the same being true for $S_0(x_i, x_f, t)$. That means we can read off the energy discretization directly from the classical action. Indeed, for the case of the harmonic oscillator, $S_0(t)$ is a periodic function, reflecting the fact that the quantum energy levels are evenly spaced.
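For the harmonic oscillator this periodicity is easy to check explicitly, since the on-shell action has the standard closed form $S_0 = \frac{m\omega}{2\sin\omega t}\left[(x_i^2+x_f^2)\cos\omega t - 2x_ix_f\right]$. A minimal numerical sketch (assuming NumPy; the parameter values are arbitrary):

import numpy as np

m, w = 1.0, 1.0
xi, xf = 0.3, 0.7

def S0(t):
    # On-shell action of the 1D harmonic oscillator from (xi, 0) to (xf, t)
    return m * w / (2 * np.sin(w * t)) * ((xi**2 + xf**2) * np.cos(w * t) - 2 * xi * xf)

T = 2 * np.pi / w                      # classical period
t = np.linspace(0.1, 3.0, 7)           # sample times, avoiding the poles at n*pi/w
print(np.allclose(S0(t), S0(t + T)))   # True: S0 is periodic with period 2*pi/w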


My issue is that I can't see how this works for a general potential well. I've tried calculating $S_0(t)$ for situations besides the harmonic oscillator, and its Fourier transform doesn't seem to have discrete support at all. Is there a direct way to see this result, if it's true?




quantum field theory - The meaning of Goldstone boson equivalence theorem


The Goldstone boson equivalence theorem tells us that the amplitude for emission/absorption of a longitudinally polarized gauge boson is equal to the amplitude for emission/absorption of the corresponding Goldstone boson at high energy. I'm wondering what the physical meaning of this theorem is. Is there any relation between the equivalence theorem and the Higgs mechanism?




homework and exercises - Help with Conservation of Angular Momentum Question


An ice skater executes a spin about a vertical axis with her feet on a frictionless ice surface. In each hand she holds a small 5 kg mass; both masses are 1 m from the rotation axis, and the angular velocity of the skater is 10 rad/s. The skater then moves her arms so that both masses are 0.5 m from the rotation axis. The skater's own moment of inertia can be taken as 50 kg m², independent of her arm position.


a) Find the total angular momentum of the skater and the masses both before and after the arm movement. Explain any difference.


b) Find the total kinetic energy of the skater and the masses both before and after the arm movement. Explain any difference.



My attempt at part a) was to simply plug the numbers into the equation L = Iω and sum over the three objects. However, I assumed the arms of the skater were two rods with masses at the end and the axis of rotation at the other end, meaning I should use I = (1/3)MR². That is not the case: the answer simply uses I = MR², which confuses me.


My attempt at part b) used K = (1/2)Iω², but I am unable to compute the kinetic energy before and after.


Any help on this would be greatly appreciated. Also any specific topics I could read up on to understand these concepts would be much appreciated.



Answer



The angular momentum of the two masses is computed independently of the skater - you were given the total angular momentum of the skater (including arms and hands, which are normally considered part of the person) and ONLY have to compute the moment of inertia / angular momentum of the masses. A point mass at the end of a string has $$I=mr^2$$ as you know. The arms of the skater were already accounted for, and the mass of the weights is not distributed along the arms; it is all at the end.


Angular momentum is $I\omega$. You should now be able to compute it from $I_{total}=I_{skater}+I_{masses}$, and $\omega$ is given. It will, of course, not change when the skater pulls in her arms - conservation of angular momentum, and there is no external torque on the skater-plus-masses system.


The moment of inertia of the masses does change when the skater pulls in her arms - you can compute it for the masses, but not for the arms (which are also coming closer). That is a problem with the question - you must assume a massless arm if you want to compute the moment of inertia when the arms are pulled in.


And you need the moment of inertia for the last part, since you can write the angular kinetic energy as


$$KE = \frac12 I \omega^2$$


So it is not enough to know $L$, you actually need to be able to compute the new angular velocity. And for that you must make a simplifying assumption (massless arms).



On that assumption, you can compute the increased kinetic energy from the above (because you know the new angular velocity from the new moment of inertia).


Before:


$$\omega = 10\ \mathrm{rad/s}\\ I_{skater}=50\ \mathrm{kg\,m^2}\\ I_{weights} = 10\ \mathrm{kg\,m^2}\\ KE = \frac12 I_{total} \omega^2$$


and you should be able to figure the rest from here...
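If you want to check your numbers at the end, here is a minimal sketch of those steps (assuming the massless-arm simplification discussed above; variable names are mine):

I_skater = 50.0                  # kg m^2, independent of arm position
m, w1 = 5.0, 10.0                # each point mass in kg; initial omega in rad/s

I1 = I_skater + 2 * m * 1.0**2   # masses at r = 1 m   -> 60 kg m^2
I2 = I_skater + 2 * m * 0.5**2   # masses at r = 0.5 m -> 52.5 kg m^2

L = I1 * w1                      # 600 kg m^2/s, conserved (no external torque)
w2 = L / I2                      # new angular velocity, ~11.43 rad/s

KE1 = 0.5 * I1 * w1**2           # 3000 J
KE2 = 0.5 * I2 * w2**2           # ~3429 J
print(L, w2, KE1, KE2)

The kinetic energy goes up; the difference is the work the skater does pulling the masses inward.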


Monday 27 February 2017

quantum mechanics - How does one account for the momentum of an absorbed photon?


Suppose I have an atom in its ground state $|g⟩$, and it has an excited state $|e⟩$ sitting at an energy $E_a=\hbar\omega_0$ above it. To excite the atom, one generally uses a photon of frequency $\omega$ equal (or sufficiently close to) the transition frequency $\omega_0$, and this will stimulate a transition.


One thing that is often left by the roadside,* though, is the fact that the incoming photon has momentum as well as energy, and that if the atom wants to swallow the energy it also needs to swallow the momentum. So, in the nuts and bolts of states and operators,




how does one describe the transfer of momentum during an atomic transition?



In addition to this, the fact that this recoil momentum is rarely mentioned is a good indication that it is also rarely an issue. Why is it that in most circumstances we can safely ignore the photon's momentum when describing electronic transitions?


*Apart from treatments of Doppler cooling, which simply take the momentum transfers for granted and do not explain how and why they happen.



Answer



Introduction


The transfer of momentum gets included properly when one incorporates the motion of the centre of mass $\mathbf R$ of the atom as a dynamical variable. Performing the dipole approximation allows one to treat all the electrons as interacting with some field at the centre of the atom, $\mathbf F(\mathbf R,t)$, but now $\mathbf R$ is an operator on the centre-of-mass degrees of freedom, which means that transition probabilities need to take this into account.


In hand-waving terms, the interaction hamiltonian can be rephrased as $$ \hat H_\mathrm{int}=\mathbf d\cdot\mathbf F(\mathbf R,t), $$ where $\mathbf d$ is some dipole operator which acts on the internal, electronic degrees of freedom, and $\mathbf F(\mathbf R,t)$ is a field operator which depends on $\mathbf R$. Transition probabilities must be taken between an initial state $|\Psi_i⟩=|\chi_i⟩|\psi_i⟩$ which is a joint state of the internal degrees of freedom in state $|\psi_i⟩$ and the centre of mass motion in state $|\chi_i⟩$, and an analogous final state. The total transition probability then includes a spatial-matching factor $$\left\langle\chi_f|\mathbf F(\mathbf R,t)|\chi_i\right\rangle$$ which controls the momentum transfer. Thus, if both $|\chi_i⟩$ and $|\chi_f⟩$ have definite linear momentum and the field is monochromatic, then the field momentum $\hbar\mathbf k$ needs to match, exactly, the momentum difference between the two, or the transition amplitude will vanish.


I provide, below, a more detailed account of this calculation. References are relatively hard to find because they are drowned in a sea of Doppler-cooling papers and textbooks, but SJ van Enk's Selection rules and centre-of-mass motion of ultracold atoms (Quantum Opt. 6, 445 (1994), eprint) gives a good introduction, which I follow below.


Relevance



Before I get down to some nitty-gritty maths, I want to address why it's generally OK to not do any of what follows. Very few introductory textbooks include any of this, and it is rarely a consideration in day-to-day physics, but it is definitely required by energy and momentum conservation. So what gives?


There are two reasons for this.




  • The first is that the energy changes involved are really not that big to begin with. Consider, for instance, the Lyman-$\alpha$ line of hydrogen, which has a relatively high frequency (and hence photon momentum) and happens on a light atom, so the effect should be relatively strong. The photon momentum feels like it's significant, at $p=m_\mathrm{H}\times 3.3\:\mathrm{m/s}$, but the velocity change it imparts is tiny with respect to the atomic unit of velocity, $\alpha c=2.18\times 10^{6}\:\mathrm{m/s}$.


    More importantly, the kinetic energy for the change is small, at $\tfrac1{2m_\mathrm{H}}p^2=55\:\mathrm{neV}$, so it accounts for a fractional detuning of the order of $5\times 10^{-9}$ with respect to the frequency the transition would have if the atom were fixed. This is doable with precision spectroscopy, but you need all of those nine significant figures in your detection apparatus to be able to detect it.




  • To add insult to injury, the tiny photon pushes are generally drowned out by the comparatively huge fluctuations in the atom's position from its thermal motion. At room temperature, $k_B T\approx 26\:\mathrm{meV}$, which means that the atom's motion, and its accompanying (uncontrolled) Doppler shift, will cause a large Doppler broadening that will completely mask the photon recoil. (For hydrogen at room temperature, the effect is a fractional broadening of the order of $10^{-5}$, so the line still looks narrow, but it's on the order of $30\:\mathrm{GHz}$, compared to the $530\:\mathrm{MHz}$ shift from the photon recoil.)


    This is not a problem, though, if you can cool your atoms to a proper temperature. If you can get down to temperatures of the order of $p^2/2mk_B\approx0.64\:\mathrm{mK}$, then the effects will be clearly measurable (the figures in these two bullets are reproduced in the short sketch after this list). Indeed, typically you use the photon recoil to help you cool using Doppler cooling to get there (though that's typically not enough, and you need additional steps of sub-Doppler cooling such as Sisyphus or sideband cooling to finish the job).
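A back-of-the-envelope sketch reproducing the figures quoted above for hydrogen Lyman-$\alpha$ (a minimal script assuming rounded constants):

import math

h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K
eV = 1.602e-19   # J
m_H = 1.674e-27  # hydrogen mass, kg
lam = 121.6e-9   # Lyman-alpha wavelength, m
T = 300.0        # room temperature, K

p = h / lam                    # photon momentum
E_rec = p**2 / (2 * m_H)       # recoil kinetic energy
E_photon = h * c / lam         # ~10.2 eV

print(p / m_H)                 # recoil velocity, ~3.3 m/s
print(E_rec / eV * 1e9)        # recoil energy, ~55 neV
print(E_rec / E_photon)        # fractional detuning, ~5e-9
print(E_rec / kB * 1e3)        # recoil "temperature", ~0.64 mK

# Thermal Doppler FWHM of the line at room temperature, ~30 GHz:
f0 = c / lam
print(f0 * math.sqrt(8 * math.log(2) * kB * T / m_H) / c)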





On the other hand, all of these challenges have been overcome and observing photon recoil has been more or less routinely possible for forty years or so. Modern high-precision spectroscopy techniques can reach well past 15 or 16 significant figures, and photon recoil is an integral part of the theory and the experimental toolkit.


Nuts and bolts


Consider a bunch of particles of charge $q_i$ and mass $m_i$ at positions $\mathbf r_i$, which are exposed to a radiation field described by the vector potential $\mathbf A(\mathbf r,t)$ in the radiation gauge (so $\nabla\cdot\mathbf A(\mathbf r,t)=0$), and subject to a (translation-invariant) potential $\hat V=V(\mathbf r_0,\ldots,\mathbf r_N)$. The full hamiltonian for the system is given by \begin{align} \hat H &= \sum_i \frac1{2m_i}\left(\mathbf p_i-q_i\mathbf A(\mathbf r_i,t)\right)^2+\hat V \\&= \sum_i\left[\frac{\mathbf p_i^2}{2m_i}-\frac{q_i}{m_i}\mathbf p_i\cdot\mathbf A(\mathbf r_i,t)+\frac{\mathbf A(\mathbf r_i,t)^2}{2m_i}\right]+\hat V \\&= \sum_i\frac{\mathbf p_i^2}{2m_i}+\hat V-\sum_i\frac{q_i}{m_i}\mathbf p_i\cdot\mathbf A(\mathbf r_i,t) +\sum_i\frac{\mathbf A(\mathbf r_i,t)^2}{2m_i}. \end{align} The quadratic term $\sum_i\frac{\mathbf A(\mathbf r_i,t)^2}{2m_i}$ is known as the diamagnetic term and it is generally safe to ignore because it can be eliminated with a trivial gauge transformation within the dipole approximation. (Outside it, you do need to worry about it.)


The main interaction hamiltonian is then $$ \hat H_\mathrm{int}=-\sum_i\frac{q_i}{m_i}\mathbf p_i\cdot\mathbf A(\mathbf r_i,t). $$ (In most cases, this 'velocity gauge' interaction hamiltonian of the form $\mathbf p\cdot\mathbf A$ can be rephrased, via a gauge transformation, to a more familiar $\mathbf r\cdot\mathbf E$-style interaction in the length gauge. However, this isn't really necessary here so I'll stick with the velocity gauge.)


Coordinate transformations


To expose the role of the centre of mass, we transform to the variables $$ \mathbf R=\sum_{i=0}^N\frac{m_i}{M}\mathbf r_i \quad\text{and}\quad \newcommand{\rro}{\boldsymbol{\rho}} \rro_i=\mathbf r_i-\mathbf r_0 \quad\text{for }i=1,\ldots, N $$ with $M=\sum_im_i$, and where the position of the zeroth particle (i.e. the nucleus) drops out as a dynamical variable. The momenta transform as $$ \mathbf P=\sum_{i=0}^N\mathbf p_i \quad\text{and}\quad \newcommand{\ppi}{\boldsymbol{\pi}} \ppi_i=\mathbf p_i-\frac{m_i}{M}\sum_{j=0}^N\mathbf p_j $$ and the inverse relations read \begin{align} \mathbf r_0&=\mathbf R-\sum_{j=1}^N\frac{m_j\rro_j}{M} & & \mathbf r_i=\mathbf R+\rro_i-\sum_{j=1}^N\frac{m_j\rro_j}{M} \\ \mathbf p_0&=\frac{m_0}{M}\mathbf P-\sum_{j=1}^N\ppi_j & & \mathbf p_i=\frac{m_i}{M}\mathbf P+\ppi_i .\end{align}


The vector potential, finally, can simply be approximated at the centre of mass, so $$\mathbf A(\mathbf r_0,t)\approx\mathbf A(\mathbf r_i,t)\approx\mathbf A(\mathbf R,t).$$ The interaction hamiltonian, then, reads \begin{align} \hat H_\mathrm{int} &= -\frac{q_0}{m_0}\mathbf p_0\cdot\mathbf A(\mathbf r_0,t) -\sum_{i>0}\frac{q_i}{m_i}\mathbf p_i\cdot\mathbf A(\mathbf r_i,t) \\&= -\frac{q_0}{m_0}\left( \frac{m_0}{M}\mathbf P-\sum_{i>0}\ppi_i \right)\cdot\mathbf A(\mathbf R,t) -\sum_{i>0}\frac{q_i}{m_i}\left( \frac{m_i}{M}\mathbf P+\ppi_i \right)\cdot\mathbf A(\mathbf R,t) \\&= \sum_{i>0} \left(\frac{q_0}{m_0}-\frac{q_i}{m_i}\right)\ppi_i\cdot\mathbf A(\mathbf R,t) \end{align} for a neutral system.


Transition amplitudes



This is really all that one needs. The transition probability from an initial state $|\Psi_i⟩$ to a possible final state $|\Psi_f⟩$ can be simply read as $$ ⟨\Psi_f|\hat H_\mathrm{int}|\Psi_i⟩, $$ with some more subtleties if one wants to be rigorous with the time evolution, and derive e.g. Fermi's golden rule.


If the centre of mass is being held fixed in space, then all that matters is the atomic dipole moment, which for this interaction hamiltonian reads $$ \sum_{i>0}\left(\frac{q_0}{m_0}-\frac{q_i}{m_i}\right)⟨\psi_f|\ppi_i|\psi_i⟩, $$ taken between internal states $|\psi_i⟩$ and $|\psi_f⟩$; this is then dotted with the fixed vector potential $\mathbf A(\mathbf R,t)$ to give the transition rate.


For a dynamical centre of mass, though, which starts off in the state $|\chi_i⟩$ and which we're probing for at the state $|\chi_f⟩$, the full transition probability reads $$ \sum_{i>0}\left(\frac{q_0}{m_0}-\frac{q_i}{m_i}\right)⟨\psi_f|\ppi_i|\psi_i⟩ \cdot ⟨\chi_f|\mathbf A(\mathbf R,t)|\chi_i⟩. $$


Here the matrix element $⟨\chi_f|\mathbf A(\mathbf R,t)|\chi_i⟩$ directly controls the absorption of one quantum of momentum into the centre-of-mass state. To get full momentum conservation, you should really consider an example with a monochromatic field, $$\mathbf A(\mathbf R,t)=\mathbf A_0\cos(\mathbf k\cdot\mathbf R-\omega t),$$ so the field gives a well-defined momentum contribution, and with initial and final states that have definite momenta $\hbar\mathbf k_i$ and $\hbar\mathbf k_f$ respectively - i.e. plane waves with those wavevectors. The matrix element then reads \begin{align} ⟨\chi_f|\mathbf A(\mathbf R,t)|\chi_i⟩ &= \mathbf A_0 \int\frac{\mathrm d\mathbf R}{(2\pi)^3} e^{i(\mathbf k_i-\mathbf k_f)\cdot\mathbf R}\cos(\mathbf k\cdot\mathbf R-\omega t) \\&= \frac12\mathbf A_0\left( \delta(\mathbf k_i-\mathbf k_f+\mathbf k)e^{-i\omega t} + \delta(\mathbf k_i-\mathbf k_f-\mathbf k)e^{+i\omega t} \right). \end{align} In a quantized-field picture, the first, positive-frequency term becomes an annihilation operator which subtracts one photon from the field and adds $\hbar\mathbf k$ momentum to the centre-of-mass motion, and the second term becomes a creation operator which emits one photon while eliminating $\hbar\mathbf k$ momentum from the atom's motion. If you're using a classical field with quantized matter, the rotating-wave approximation will typically require you to keep only the first term for absorption and only the second term for emission, with the corresponding effects on the centre-of-mass momentum.


Energy


Finally, what about the kinetic energy? Naively, the photon energy should be slightly higher than the transition energy, to account for the increase in the centre-of-mass kinetic energy (this forgets that the laser can also slow the atom down if it's flying into the laser and the laser is redshifted, but it's all the same, really). How does one account for this?


In fact, you'll notice that I haven't spoken at all about energy considerations, and I certainly haven't imposed any relation between the initial and final internal states and the atomic hamiltonian. As it turns out, the external motion gets treated in exactly the same way.


At the start, I split up the hamiltonian into an atomic and an interaction part: $$ \hat H = \sum_i\frac{\mathbf p_i^2}{2m_i}+V(\mathbf r_0,\ldots,\mathbf r_N)-\sum_i\frac{q_i}{m_i}\mathbf p_i\cdot\mathbf A(\mathbf r_i,t) =\hat H_\mathrm{at}+\hat H_\mathrm{int} $$ (For a quantized field, you'd also need to include a field hamiltonian, of course.) Now the atomic hamiltonian as stated is a function of the individual coordinates, but ideally we want to rephrase it in terms of the internal plus centre-of-mass coordinates. This then gives $$ \hat H_\mathrm{at} =\frac{\mathbf P^2}{2M} +\left[ \sum_{i>0}\frac{\ppi_i^2}{2\mu_i}+\sum_{i\neq j>0}\frac{\ppi_i\cdot\ppi_j}{2m_0} + V(\mathbf 0,\rro_1,\ldots ,\rro_N) \right] =\hat H_\mathrm{COM}+\hat H_\mathrm{el}. $$ The kinetic energy of the centre of mass is directly accounted for, and the internal hamiltonian $\hat H_\mathrm{el}$ is what we actually diagonalize when we find the electronic eigenstates. (Here $\mu_i=(m_i^{-1}+m_0^{-1})^{-1}$ is the $i$th reduced mass, and the cross kinetic terms are generally suppressed by the large nuclear mass $m_0$.)


More importantly, though, if we want to say that the system went from a state of definite energy to another state of definite energy by absorbing a photon, then it needs to go from one eigenstate to another of the full atomic hamiltonian $\hat H_\mathrm{at}$, and this includes the centre-of-mass degree of freedom. The photon energy then needs to account for the change in energy in the whole thing, not just the electronic transition.


nuclear physics - Isoscalar and isovector terms in optical model potential


How does one obtain the isoscalar and isovector terms of the nucleus-nucleus interaction potential and what do they signify?




Topological order vs. Symmetry breaking: what does (non-)local order parameter mean?


Topological order is sometimes defined in opposition to the order parameter originating from a symmetry breaking. The latter can be described by a Landau theory, with an order parameter.


Then, one of the distinctions would be to say that topological order cannot be described by a local order parameter, as e.g. in this answer by Prof. Wen. I thus suppose that a Landau theory describes a local order parameter.


I have deep difficulties understanding what local means. Does it mean that the order parameter can be inhomogeneous (explicit position dependency), as $\Delta\left(x\right)$? Does it mean that one can measure $\Delta$ at any point of the system (say, using an STM tip)? (The two previous proposals are intimately related to each other.) Something else?



A subsidiary question (going along with the previous one, of course): what is the opposite of local - is it non-local or global? (Something else?) What would non-local mean?


Any suggestion to improve the question is warmly welcome.



Answer



In theories with spontaneous symmetry breaking, the phase transition can usually be characterized by a local order parameter $\Delta(x)$, which is not invariant under the relevant symmetry group $G$ of the Hamiltonian. The expectation value of this field has to be zero outside the ordered phase $\langle\Delta(x)\rangle = 0$, but non-zero in the phase $\langle\Delta(x)\rangle \neq 0$. This shows that there has been a spontaneous breaking of $G$ to a subgroup $H\subset G$ (where $H$ is the subgroup that leaves $\Delta(x)$ invariant).


What local means in this context, is usually that $\Delta(x)$ at point $x$, can be constructed by looking at a small neighborhood around the point $x$. Here $\Delta(x)$ can be dependent on $x$ and need not be homogeneous. This happens for example when you have topological defects, such as vortices or hedgehogs. One powerful feature of these Landau-type phases, is that there will generically be gapless excitations in the system corresponding to fluctuations of $\Delta(x)$ around its expectation value $\langle\Delta(x)\rangle$ in the direction where the symmetry is not broken (unless there is a Higgs mechanism). These are called Goldstone modes and their dynamics are described by a non-linear $\sigma$-model with target manifold $G/H$.


An example is the order parameter for s-wave superconductors $\langle\Delta(x)\rangle = \langle c_{\uparrow}(x)c_{\downarrow}(x)\rangle$, which breaks a $U(1)$ symmetry down to $\mathbb Z_2$. But there are no Goldstone modes due to the Higgs mechanism, the massive amplitude fluctuations are however there (the "Higgs boson"). [Edit: see EDIT2 for correction.]


A non-local order parameter does not depend on $x$ (which is local), but on something non-local. For example, a non-local (gauge-invariant) object in gauge theories are the Wilson loops $W_R[\mathcal C] = \text{Tr}_R{\left(\mathcal Pe^{i\oint_{\mathcal C}A_\mu\text dx^\mu}\right)},$ where $\mathcal C$ is some closed curve. The Wilson loop thus depends on the whole loop $\mathcal C$ (and a representation $R$ of the gauge group) and cannot be constructed locally. It can also contain global information if $\mathcal C$ is a non-trivial cycle (non-contractible).


It is true that topological order cannot be described by a local order parameter, as in superconductors or magnets, but conversely a system described by a non-local order parameter does not necessarily have topological order (I think). The above-mentioned Wilson loops (and similar order parameters, such as the Polyakov and 't Hooft loops) are actually order parameters in gauge theories which probe the spontaneous breaking of a certain center symmetry. This characterizes the deconfinement/confinement transition of quarks in QCD: in the deconfined phase $W_R[\mathcal C]$ satisfies a perimeter law and quarks interact with a massive/Yukawa-type potential $V(R)\sim \frac{e^{-mR}}R$, while in the confined phase it satisfies an area law and the potential is linear $V(R)\sim \sigma R$ ($\sigma$ is some string tension). There might be other examples of spontaneous-symmetry-breaking phases with non-local order parameters. [Edit: see EDIT2.]


Let me just make a few comments about topological order. In theories with spontaneous symmetry breaking, long-range correlations are very important. In topological order the systems are gapped by definition, and there is only short-range correlation. The main point is that in topological order, entanglement plays the important role, not correlations. One can define the notion of long-range entanglement (LRE) and short-range entanglement (SRE). Given a state $\psi$ in the Hilbert space, loosely speaking $\psi$ is SRE if it can be deformed to a product state (zero entanglement entropy) by LOCALLY removing entanglement; if this is not possible then $\psi$ is LRE. A system whose ground state is LRE is said to have topological order; otherwise it is in the trivial phase. These phases have many characteristic features which are generally non-local/global in nature, such as anyonic excitations, non-zero topological entanglement entropy, low-energy TQFTs, and characterization by so-called modular $S$ and $T$ matrices (projective representations of the modular group $SL(2,\mathbb Z)$).


Note that, contrary to popular belief, topological insulators and superconductors are SRE and are NOT examples of topological order!



If one requires that the system must preserve some symmetry $G$, then not all SRE states can be deformed to the product state while respecting $G$. This means that SRE states can have non-trivial topological phases which are protected by the symmetry $G$. These are called symmetry protected topological states (SPT). Topological insulators/superconductors are a very small subset of SPT states, corresponding to restricting to free fermionic systems. Unlike systems with LRE and thus intrinsic topological order, SPT states are only protected as long as the symmetry is not broken. These systems typically have interesting boundary physics, such as gapless modes or gapped topological order on the boundary. Characterizing them usually requires global quantities too and cannot be done by local order parameters.




EDIT: This is a response to the question in the comment section.


I am not sure whether there are any reference which discuss this point explicitly. But the point is that you can continuously deform/perturb the Hamiltonian of a topological insulator (while preserving the gap) into the trivial insulator by breaking the symmetry along the way (they are only protected if the symmetry is respected). This is equivalent to locally deforming the ground state into the product state, which is the definition of short range entanglement. You can find the statement in many papers and talks. See for example the first few slides here. Or even better, see this (slide with title "Compare topological order and topological insulator" + the final slide).


Let me make another comment regarding the distinction between intrinsic topological order and topological superconductors, which at first seems puzzling and contrary to what I just said. As was shown by Levin-Wen and Kitaev-Preskill, the entanglement entropy of the ground state of a gapped system in 2+1D has the form $S = \alpha A - \gamma + \mathcal O(\tfrac 1A)$, where $A$ is the boundary area (this is called the area law, not the same area law I mentioned in the case of confinement), $\alpha$ is a non-universal number and $\gamma$ is universal and called the topological entanglement entropy (TEE). What was shown in the above papers is that the TEE is equal to $\gamma = \log\mathcal D$, where $\mathcal D\geq 1$ is the total quantum dimension; $\mathcal D>1$ ($\gamma\neq 0$) holds only if the system supports anyonic excitations.


Modulo some subtleties, LRE states always have $\gamma\neq 0$, which in turn means that they have anyonic excitations. Conversely for SRE states $\gamma = 0$ and there are no anyons present.


This seems to be at odds with the existence of 'Majorana fermions' (non-abelian anyons) in topological superconductors. The difference is that, in the case of topological order, you have intrinsic finite-energy excitations which are anyonic, and the anyons correspond to linear representations of the braid group. In the case of topological superconductors, you only have non-abelian anyons if there is an extrinsic defect (vortex, domain wall etc.) to which the zero modes can bind, and they correspond to projective representations of the braid group. The latter type of anyons, from extrinsic defects, can also exist in topological order, but intrinsic finite-energy ones exist only in topological order. For more details, see the recent set of papers from Barkeshli, Jian and Qi.




EDIT2: Please see my comments below for some corrections and subtleties. Such as, it is in a sense not correct that superconductors are described by a local order parameter. It only appears local in a particular gauge. Superconductors are actually examples of topological order, which is rather surprising.


quantum mechanics - A problem with the Gamow state


Consider a form of potential $U(r)$ as follows: $$ U(r)=\begin{cases}0 & 0<r<a\\ U_0 & a\le r\le b\\ 0 & r>b\end{cases} $$


In this problem $r$ is the distance from the origin, $r \geq 0$.


It is known that the Schrödinger equation $$ \left[\frac{\hat P^2}{2m} + U(r) - E\right] \psi (r) = 0 $$ admits two types of solutions: real-energy solutions, with $E \geq 0$ (also named "scattering states"), and complex-energy solutions, with $E = E_0 - i\Gamma$ ("resonant states").


From physical reasons, both these solutions have to satisfy $\psi (r=0) = 0$.


I ask for opinions about two statements.




  1. The scattering states ($E$ real and positive) form a complete basis of functions, in terms of which we can expand, as a quantum superposition, any function that vanishes at $r = 0$.





  2. Therefore, the Gamow state (complex $E$) can be expanded in this basis, i.e. the Gamow state is equal to its expansion in this basis.




Note: keep in mind that the scattering states are normalized to the Dirac $\delta$,


$$ \int dr \ \psi^*(r;k) \psi (r; k') = \delta (k - k'), $$


and the Gamow state diverges for $r \to \infty $.


I know that the 1st statement is correct. My question about these statements arises because of things that I see quite often in the literature: the energy spectrum of a well-isolated resonance, narrow and far from threshold, is said to follow a Breit-Wigner distribution. But for finding the spectrum of real energies one needs the above expansion of the resonance state, i.e. as a quantum superposition of real-energy states. Yet those who claim the Breit-Wigner distribution don't do the expansion in a quantum superposition. This is what leads me to my questions.




visible light - What is the color of a mirror?


A mirror couldn't be white, as then you wouldn't be able to see your reflection so clearly. It couldn't be transparent, as then it wouldn't reflect.


So what color is it?



Answer



A mirror, or a perfect mirror at least, is the same colour as a perfectly white sheet of paper.


Both a perfect mirror and a perfectly white sheet of paper reflect all the light that hits them. The difference is that the paper scatters the light so what reaches your eye is a mixture of all the light hitting the paper, while the mirror reflects the light without scattering. If you're interested in details the reflection from a mirror is specular while the reflection from the piece of paper is diffuse.


Response to comment


The eye contains three types of cone cells that detect red, green and blue light. Your brain works out the colour based on how much the three types of cells are activated e.g. if only the red cones report a signal the brain interprets this as red light. If all three types of cone are equally activated the brain interprets this as white light.


A sheet of paper scrambles the light hitting it so if you try to use the paper as a mirror all the light hitting it gets mixed up and all types of cone cell receive light from all bits of the paper. Assuming you're not standing in a red painted room or somewhere there's an obvious colour cast the paper will look white.



If you swap the sheet of paper for a mirror the light isn't scrambled. If you're looking at e.g. a beachball some of the cone cells in your eye will receive light from the red bits of the beachball, some from the yellow sand, some from the blue sky and so on. So the individual cone cells are seeing the colour of the scene you're reflecting in the mirror not the colour of the mirror itself. In this sense the mirror doesn't have a colour. However if you average over all the cone cells, i.e. mix all the light up, you'll get the same colour as when you had the sheet of paper there. That's why I claim the mirror has the same colour as the piece of paper.


Of course you could argue the paper doesn't really have a colour either. After all, the paper may look white in daylight, but it would look green in a jungle or red in a tomato sauce factory.


electromagnetic radiation - Which cyan colored line is produced in the Thomson e/m apparatus?


Related: Which green spectral line(s) are emitted in a Thomson tube?


After reading Lisa Lee's OP on an electron deflection tube, although she had some misunderstandings about its operation, I believe her question is still relevant. If one looks at the Thomson e/m tube filled with helium gas (made by Pasco), the tracer light is created by de-excitation of the helium atoms, not by an electron beam interacting with a phosphor as in Lisa Lee's OP. That is, the accelerated electrons scatter off a low-pressure helium gas, which emits a tracer line that follows the electrons' path, as shown below.


[Image: Pasco Thomson e/m tube showing the cyan tracer beam]


Now if you look at the bright line spectrum of helium,



[Image: bright-line emission spectrum of helium]


One can clearly see that there are two “cyan” color lines around 500 nm. Following in Lisa Lee’s footsteps, there is a series of questions one can ask: (1) Which line is emitted by the helium gas atoms? Or are both lines emitted? (2) How can a variable accelerating voltage for the electrons always produce the same tracer line color? In theory, I assume that if I decreased the energy of the electron beam, I could produce red (around 670 nm) or yellow (around 590 nm) tracer lines, but that doesn’t happen. Somehow, it appears to me that the e/m tube is “tuned” just right so that the emitted light is always this “cyan” color. Why?




Sunday 26 February 2017

thermodynamics - Can a glass window protect from heat radiation?


I have been reading this and found a statement saying: "Glass will not transmit heat radiation." So now I am confused. If glass won't transmit heat radiation, then why do we feel hot when we sit in front of a glass window on a sunny day? Also, why are the car seats facing the windshield so hot on a sunny day?


One other thing: let's say a nuclear detonation happened somewhere nearby and I was standing behind a glass window. Would this window protect me from the thermal or heat radiation effects of the bomb?




Saturday 25 February 2017

sun - Does visible light heat things up?


During a sunny day the walls of my house warm up (no surprise). My question: how much of this warming up (if any) comes from visible light? I associate infrared with thermal energy. If my house was floating in space (to prevent any thermal exchange with its surroundings) and I installed a giant infrared (and UV) filter between it and the sun, would it still warm up (compared to its rest temperature in full darkness)? Thanks.



Answer



Yes, it would, though not as quickly as if you were getting the full spectrum of sunlight. All frequencies of the light spectrum carry energy, so it becomes a question of how much of that energy is absorbed by the house.


For example, if your house was completely black, all that visible light energy would be absorbed by the house and converted into heat. If it was completely white (or massively reflective), there would be almost no heat transfer whatsoever.


electromagnetism - What is the relationship between the magnetic units oersted and tesla?


How are the units oersted and tesla related? For example, how would you express $20\:\mathrm{Oe}$ in tesla?



Answer



They are technically units for incommensurate quantities, but in practice this is often just a technicality. The magnetic field that makes sense ($B$) is measured in teslas (SI) or gauss (CGS), and the magnetic field that people spoke about 100 years ago ($H$) is measured in amps per meter (SI, also equivalent to a number of other things) or oersteds (CGS).


To go between the two unit systems, we have \begin{align} 1\ \mathrm{G} & = 10^{-4}\ \mathrm{T}, \\ 1\ \mathrm{Oe} & = \frac{1000}{4\pi} \mathrm{A/m}. \end{align} To go between the two magnetic fields, we have \begin{align} \frac{B}{1\ \mathrm{G}} & = \mu_r \frac{H}{1\ \mathrm{Oe}} & \text{(CGS)}, \\ B & = \mu_r \mu_0 H & \text{(SI)}, \end{align} where $\mu_r$ is the dimensionless relative permeability of the medium ($1$ for vacuum and pretty much any material other than strong magnets) and $\mu_0 = 4\pi \times 10^{-7}\ \mathrm{H/m}$ (henries per meter) is the vacuum permeability.


Therefore $1\ \mathrm{Oe}$ corresponds to $10^{-4}\ \mathrm{T}$ in non-magnetic materials.
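As a quick sanity check, here is a minimal conversion helper for the $20\:\mathrm{Oe}$ in the question (a sketch assuming a linear medium; the function name is mine, not a standard API):

import math

MU0 = 4 * math.pi * 1e-7        # vacuum permeability, H/m

def oersted_to_tesla(H_oe, mu_r=1.0):
    # Oe -> A/m via 1 Oe = 1000/(4*pi) A/m, then B = mu_r * mu0 * H
    H_si = H_oe * 1000 / (4 * math.pi)
    return mu_r * MU0 * H_si

print(oersted_to_tesla(20))     # 0.002 T: 20 Oe corresponds to 2 mT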




One caveat is that there are cases where $B$ and $H$ are not so simply related. If you are interested in their directions and not just magnitudes, then in some materials $\mu_r$ is actually a tensor and can rotate one field relative to the other. In this case the relation is still linear. In worse cases (e.g. ferromagnets) the relationship is not linear and cannot be expressed in the forms presented above. At least the $\mathrm{G} \leftrightarrow \mathrm{T}$ and $\mathrm{Oe} \leftrightarrow \mathrm{A/m}$ relations always hold.



orbital motion - Could this planetary superalignment happen?


Here's the 'superalignment' I'm referring to:


Planetary Superalignment



We've all heard the stories about 'mystical planetary alignments' that will increase/decrease the effective surface gravity experienced on Earth (one debunked here on snopes), sometimes referred to as 'Zero G Day'.


What I'm wondering is: what would be the maximum possible effect on a given weight (ratio of 'normal' weight to 'alignment' weight)?



  1. Noon at a new moon, Venus and Mercury between the Earth and the Sun, Mars, Jupiter, Saturn, Uranus and Neptune across the sun in roughly a straight line (maximum lightness).

  2. Midnight during the same alignment (maximum heaviness - almost the same ratio, but 2 Earth radii further away from the planets and sun).


Also, how often (if ever) could this happen?


EDIT


I have calculated the resulting effects of this 'superalignment':


Planetary Superalignment Calculator



The result is that with the planets and our moon aligned as much as they can be, so that their forces are additive, their gravity combines to a $\pm0.06\%$ difference. Since I weigh 90 kg, I would weigh 89.94 kg at noon and 90.05 kg at midnight.
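For what it's worth, the headline number is easy to reproduce with a naive sum of the direct pulls (a minimal sketch assuming rounded GM values and roughly closest-approach distances; it ignores the tidal/free-fall subtlety that the snopes debunking relies on):

g = 9.81  # m/s^2

bodies = {   # GM (m^3/s^2), distance from Earth at alignment (m), rounded
    "Sun":     (1.327e20, 1.496e11),
    "Moon":    (4.903e12, 3.844e8),
    "Mercury": (2.203e13, 0.92e11),
    "Venus":   (3.249e14, 0.41e11),
    "Mars":    (4.283e13, 0.78e11),
    "Jupiter": (1.267e17, 5.9e11),
    "Saturn":  (3.793e16, 12.0e11),
    "Uranus":  (5.794e15, 25.9e11),
    "Neptune": (6.837e15, 43.5e11),
}

a_total = sum(gm / r**2 for gm, r in bodies.values())
print(f"{a_total:.2e} m/s^2 = {100 * a_total / g:.3f}% of g")
# ~6e-3 m/s^2, i.e. ~0.06% of g, dominated almost entirely by the Sun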


Now, the last part of this question remains - would this superalignment, or something approximating it, ever occur, and if so, would it repeat and how often?




Friday 24 February 2017

quantum mechanics - Ground state of Beryllium (${\rm Be}$)


Why is the ground state of Beryllium (${\rm Be}$), with electronic configuration $[{\rm He}]2s^2$, $^1S_0$ and not $^3S_1$? The state $^3S_1$ has higher spin multiplicity.



Answer



There is no such state. A triplet $^3S$ spin state would be exchange-symmetric on the spin sector, which would require an antisymmetric state on the orbital sector, and this is impossible on a $2s^2$ orbital configuration, since both electrons are in the same state and that is intrinsically symmetric.


(On the other hand, a $^3S_1$ state is perfectly possible, say, in a $1s\: 2s$ configuration. But it will be extremely hard to find a real atomic system with a ground state of that type, as the higher energy of the $2s$ (or other such orbital) means that flipping to a $^1S$ configuration by dropping both electrons to the lower orbital will be energetically favourable. And, indeed, a short scan using the Mathematica curated ElementData shows that no real atoms have a $^3S_1$ ground state.)


The Hund maximum-multiplicity rule does provide the ground state (except in the cases where it breaks), but you need to maximize over the set of states that do exist ;-).


special relativity - Quaternions and 4-vectors


I recently realised that quaternions could be used to write intervals or norms of vectors in special relativity:


$$(t,ix,jy,kz)^2 = t^2 + (ix)^2 + (jy)^2 + (kz)^2 = t^2 - x^2 - y^2 - z^2$$



Is it useful? Is it used? Does it bring anything? Or is it just funny?



Answer



The object you're talking about is called, in mathematics, a Clifford algebra. The case when the algebra is over the complex field in general has a significantly different structure from the case when the algebra is over the real field, which is important in Physics. In Physics, in the specific case of 4 dimensions, using the Minkowski metric as you have in your Question, and over the complex field, the algebra is called the Dirac algebra. Once you have the name Clifford algebra, you can look them up in Google, where the first entry is, unsurprisingly, Wikipedia, http://en.wikipedia.org/wiki/Clifford_algebra, which gives you a reasonable flavor of the abstract construction methods that mathematicians prefer. The John Baez page that is linked to from the Wikipedia page is well worth reading (if you spent a year learning everything that John Baez has posted over the years, almost always with unusual clarity and engagingly, you would know most of the mathematics that might be useful for Physics).


It's not so much that the Clifford algebras are funny. Their quadratic construction is interrelated, often closely, with many other constructions in mathematics.


There are people who are enthusiastic about Clifford algebras, sometimes very or too much so, and a lot of ink has been spilled (Joel Rice's and Luboš Motl's Answers are rather inadequate to the literature, except that I think they chose to interpret your Question narrowly where I've addressed what your construction has led to in Mathematics more widely), but there are many other fish in the sea to admire.


EDIT: Particularly in light of Marek's comments below, it should be said that I interpreted Isaac's Question generously. There is a somewhat glaring mistake in the OP that is pointed out by Luboš (which I hope you see, Isaac). Nonetheless there is a type of construction that is closely related to what I chose to take to be the idea of the OP, Clifford algebras.


Isaac, this is how I think your derivation ought to go, if we just use quaternions, taking $q=t+ix+jy+kz$, $$q^2=(t+ix+jy+kz)(t+ix+jy+kz)=t^2-x^2-y^2-z^2+2t(ix+jy+kz).$$ The $xy,yz,zx$ terms cancel nicely, but the $tx,ty,tz$ terms don't, unless we do as Luboš did and introduce the conjugate $\overline{q}=t-ix-jy-kz$. This, however, doesn't do what I take you to be trying to do. So, instead, we introduce a fourth object, $\gamma^0$, for which $(\gamma^0)^2=+1$, and which anti-commutes with $i$,$j$, and $k$. Then the square of $\gamma^0t+ix+jy+kz$ is $t^2-x^2-y^2-z^2$. The algebra this generates, however, is more than just the quaternions, it's the Clifford algebra $C(1,3)$.
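If you want to check that expansion without pushing symbols by hand, here is a minimal sketch using SymPy's built-in quaternion algebra (sympy.algebras.quaternion):

from sympy import symbols, expand
from sympy.algebras.quaternion import Quaternion

t, x, y, z = symbols("t x y z", real=True)
q = Quaternion(t, x, y, z)      # q = t + i x + j y + k z
q2 = q * q

print(expand(q2.a))             # scalar part: t**2 - x**2 - y**2 - z**2
print(expand(q2.b), expand(q2.c), expand(q2.d))   # 2*t*x, 2*t*y, 2*t*z: the tx, ty, tz terms survive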


EDIT(2): Hi, Isaac. I've thought about this way too much overnight. I think now that I was mistaken, you didn't make a mistake. I think you intended your expression $(a,b,c,d)^2$ to mean the positive-definite inner product $a^2+b^2+c^2+d^2$. With this reading, however, we see three distinct structures, the positive-definite inner product, the quaternions, and the Minkowski space inner product that emerges from using the first two together. Part of what made me want to introduce a different construction is that in yours the use of the quaternions is redundant, because you'd get the same result that you found remarkable if you just used $(a,ib,ic,id)^2$ (as Luboš also mentioned). Even the positive-definite inner product is redundant, insofar as what we're really interested in is just the Minkowski space inner product. Also, of course, I know something that looks similar and that has been mathematically productive for over a century, and that can be constructed using just the idea of a non-commutative algebra and the Minkowski space inner product.


To continue the above, we can write $\gamma^1=i$, $\gamma^2=j$, $\gamma^3=k$ for the quaternionic basis elements, together with the basis element $\gamma^0$, then we can define the algebra by the products of basis elements of the algebra, $\gamma^\mu\gamma^\nu+\gamma^\nu\gamma^\mu=2g^{\mu\nu}$. Alternatively, for any vector $u=(t,x,y,z)$ we can write $\gamma(u)=\gamma^0u_0+\gamma^1u_1+\gamma^2u_2+\gamma^3u_3$, then we can define the algebra by the product for arbitrary 4-vectors, $\gamma(u)\gamma(v)+\gamma(v)\gamma(u)=2(u,v)$, where $(u,v)$ is the Minkowski space inner product. Hence, we have $[\gamma(u)]^2=(u,u)$. Now everything is getting, to my eye, and hopefully to yours, rather neat and tidy, and nicely in line with the conventional formalism.


general relativity - How does gravity escape a black hole?


My understanding is that light can not escape from within a black hole (within the event horizon). I've also heard that information cannot propagate faster than the speed of light. It would seem to me that the gravitational attraction caused by a black hole carries information about the amount of mass within the black hole. So, how does this information escape? Looking at it from a particle point of view: do the gravitons (should they exist) travel faster than the photons?



Answer




Well, the information doesn't have to escape from inside the horizon, because it is not inside. The information is on the horizon.


One way to see that, is from the fact that nothing ever crosses the horizon from the perspective of an observer outside the horizon of a black hole. It asymptotically gets to the horizon in infinite time (as it is measured from the perspective of an observer at infinity).


Another way to see that, is the fact that you can get all the information you need from the boundary conditions on the horizon to describe the space-time outside, but that is something more technical.


Finally, since classical GR is a geometrical theory and not a quantum field theory*, gravitons are not the appropriate way to describe it.


*To clarify this point, GR can admit a description in the framework of gauge theories like the theory of electromagnetism. But even though electromagnetism can admit a second quantization (and be described as a QFT), GR can't.


newtonian mechanics - Calculating mass of an orbiting body with force and acceleration


I'm new to physics, and it's a lot to take in- but there is a problem that I really can't seem to wrap my head around- finding the mass of an orbiting body, like an asteroid. I've looked around a lot and it seems to be impossible to find the mass of an object without going there and orbiting it, but why? If you can know the mass of the object that you're orbiting (for example the sun), can't you use that to discern the mass of the orbiting object (an asteroid)?



Then I thought of the formula F = ma, whereby you can find force from mass and acceleration. This is confusing to me: couldn't you rearrange it as m = F/a, and then measure force and acceleration?


So really this is a two-pronged question: Can you find the mass of an object based on that of the one it is orbiting, or could you find it with force and acceleration?


I'm probably missing something big aren't I...


Edit: Thank you for the responses so far! I found a formula recently that purports to be able to find mass with only radius and velocity:


$M = L/rv$


where $r$ = radius, $v$ = velocity, $M$ = mass, and $L$ = angular momentum.


To find angular momentum, $L = I\omega$,


where $I$ = moment of inertia ($v/r$) and $\omega$ = angular velocity ($rv/r^2$).


So it's a little complicated, but does it even work? I tried it on the mass of Venus, and I got it very, very wrong.



Answer




Consider some small object orbiting the Earth. By small I mean that the mass of the object is so much smaller than the mass of the Earth that we can take the Earth to be fixed i.e. the object can't move the Earth by any measurable amount.


If the mass of the object is $m$, the mass of the Earth is $M$ and the distance to the object is $r$ then the gravitational force on the object is:


$$ F = \frac{GMm}{r^2} \tag{1} $$


where $G$ is a constant called the gravitational constant. Now, you mention the equation for Newton's second law $F = ma$, and we can rearrange this to calculate the acceleration of our object:


$$ a = \frac{F}{m} \tag{2} $$


If we take the force we calculated in equation (1) and substitute it into equation (2) we get:


$$ a = \frac{\frac{GMm}{r^2}}{m} = \frac{GM}{r^2} \tag{3} $$


The mass of our object $m$ has factored out of the equation for the acceleration $a$, and that means the acceleration does not depend on the mass of the object. This is just Galileo's observation that objects with different masses fall at the same rate.


Anyhow, when we measure an orbit we are measuring the acceleration of the object. Since the acceleration doesn't depend on the mass that means the orbit doesn't depend on the mass. Therefore we can't determine the mass of the object from its orbit.


Both dmckee and CuriousOne have mentioned in comments that you can determine the mass from the orbit if the mass of the object is large enough to be comparable to the Earth's. That's because our equation (3) is actually only an approximation. It should be:



$$ a = \frac{GM}{r^2}\frac{m}{\mu} \tag{4} $$


where $\mu$ is the reduced mass:


$$ \mu = \frac{Mm}{M + m} $$


When $m \ll M$ the reduced mass is equal to $m$ within experimental error, and this gives us equation (3) so we can't measure $m$. If $m$ is comparable to $M$ the reduced mass is measurably different from $m$ and we can solve the resulting (rather complicated) equation to determine $m$.
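To get a feel for the sizes involved, here is a minimal numerical sketch (assuming rounded values; the comparison bodies are my own illustrative choices):

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
r = 3.844e8          # m, Earth-Moon distance, just for scale

def relative_accel(m):
    # a = (GM/r^2) * (m/mu) = G*(M + m)/r^2, i.e. equation (4)
    return G * (M_earth + m) / r**2

a0 = G * M_earth / r**2                      # test-particle limit, equation (3)
print((relative_accel(1e12) - a0) / a0)      # billion-tonne asteroid: ~2e-13, unmeasurable
print((relative_accel(7.342e22) - a0) / a0)  # the Moon: ~1.2e-2, easily measurable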


quantum field theory - Can we obtain non-Lorentzian metric from Lorentzian metric, through renormalization methods?


Since low-energy, non-relativistic thermal field theories are defined in Euclidean spacetime, while high-energy relativistic theories are defined in Minkowski spacetime, I was wondering if there are renormalization methods that can show such a change in metric signature.



Answer



The time contour really has nothing to do with renormalization. Rather, it is something you choose at the outset for the purpose of the calculation you want to do. With any choice of time contour the renormalization theory is pretty much the same. What renormalization does (understood in terms of the Kadanoff/Wilsonian renormalization group) is generate higher-dimension effective operators in the Lagrangian. The addition of operators to the Lagrangian has no effect on your choice of time contour to integrate them on!



The reason for the choice of time contour is a little more subtle, and you've probably only seen the two most common special cases. Exposure to the general case may clarify what's going on with the imaginary time thing, even if you never use the most general case. The general correlation function (simplifying to a single scalar field) can be written


$$ \langle \phi(x_1,t_1)\cdots\phi(x_n,t_n) \rangle = \mathrm{Tr}\left\{ \rho(t_0) U(t_0,t_1) \phi(x_1,t_1) U(t_1,t_2)\cdots U(t_{n-1},t_n)\phi(x_n,t_n)U(t_n,t_0) \right\}$$


where the time evolution operators $U(t_i,t_j)$ come from working in the Heisenberg (or interaction) picture and $\rho$ is an arbitrary initial density matrix describing the system at the initial time $t_0$. This is all standard stuff similar to what you'll see in any QFT course.


Here comes a trick (part 1): you can write any density matrix you like as $\mathrm{e}^{-\beta H^M}$. Completely general. $H^M$ is not necessarily the Hamiltonian of your system, though if it is you have a thermal equilibrium state at temperature $\beta^{-1}$. Now the trick (part 2): notice that $\mathrm{e}^{-\beta H^M} = \mathrm{e}^{-i (-i\beta) H^M} = U(t_0 - i\beta, t_0; H^M)$. This is just a trick: imaginary time evolution with "Hamiltonian" $H^M$ gives you a density matrix. If $H^M = H$ this is just a thermal state. If not it's not. The general formalism can cope with the real time dynamics of an arbitrary non-equilibrium state.


Now have a look at page 107 of Stefanucci & van Leeuwen. I reproduce the relevant figure below (I believe it's fair use, but I heartily recommend you read the whole book if you get the chance):


[Image: time contours from Fig. 4.5 of Stefanucci and van Leeuwen]


The first figure shows the general situation I've described: the time evolution starts at $t_0$, runs up the real axis to catch any $\phi(x,t)$ operators that are there, then back down to $t_0$ to "meet" the initial density matrix, which we make by evolving down the imaginary axis with $H^M$ which may or may not be $H$.


Now we can make approximations. If all you care about are thermal equilibrium properties and not non-equilibrium time evolution, you can measure all thermal correlations by taking all times at the initial time and $H^M=H$. The real time part of the contour collapses and you are just left with the imaginary time contour you know. It's not so much that thermal field theory is defined on an imaginary time contour. It's just that that is what's left when you don't care about anything else.


On the other hand you can start with some non-interacting state at $t_0\to -\infty$ and slowly (adiabatically) turn on an interaction and watch what happens. This gives the second set of contours (Fig. b), known as the Schwinger-Keldysh contours and often used for studying nonequilibrium situations like electric currents in nanostructures etc.


Finally if you take the density matrix to be an equilibrium density matrix at zero temperature then you can use the Gell-Mann-Low theorem to remove the backwards time contour completely. This gives you the usual one way real time contour that you probably know from ordinary QFT (Fig. c). This works because a vacuum state at $t\to -\infty$ adiabatically turns into a vacuum state at $t\to +\infty$. In a non-equilibrium situation you can't rely on this and you need the full contour.



thermodynamics - How fast do molecules move in objects?


I guess it depends on the temperature or the type of material, but can you give some examples or formulas to calculate it?


The best example would be the average speed of the air molecules (all types in the air) at room temperature or water molecules at human body temperature.




Answer



It depends on the mass of the molecule in question. Here's a quick, back-of-the-envelope answer. In a body at thermal equilibrium, every energy mode has the same average amount of energy, $\frac12kT$, where $T$ is temperature and $k$ is Boltzmann's constant. One of the energy modes is the translational kinetic energy of a molecule in some direction $x$, $\frac12mv_x^2$. We can solve


$$\frac12kT=\frac12mv_x^2$$


to find


$$v_x=\sqrt{\frac{kT}m}$$


and then plug in $k=1.38×10^{-23}\rm{m^2 kg s^{-2} K^{-1}}$, $T=300\rm{K}$, and for $m_{\rm{N}_2}=2×14\rm{u}=2×14×1.66×10^{−27} \rm{kg}=4.65×10^{−26} \rm{kg}$ to get


$$v_x=298\ \mathrm{m/s}=667\ \mathrm{mph}.$$


The molecule is also moving along the $y$ and $z$ axes, so the answer depends on what exactly you mean by average speed: mean speed vs. root-mean-square speed.


This ignores rotational and vibrational degrees of freedom. Similar calculations may be performed for other substances.
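
A short sketch of the same estimate in Python (using SciPy's physical constants; the molar masses are standard values), which also computes the mean and rms speeds mentioned above:

```python
import numpy as np
from scipy.constants import k, u  # Boltzmann constant, atomic mass unit

def speeds(mass_u, T):
    """One-axis thermal speed, mean speed, and rms speed for a molecule
    of the given mass (in atomic mass units) at temperature T (kelvin)."""
    m = mass_u * u
    v_x   = np.sqrt(k * T / m)                # sqrt(kT/m), one translational axis
    v_avg = np.sqrt(8 * k * T / (np.pi * m))  # Maxwell-Boltzmann mean speed
    v_rms = np.sqrt(3 * k * T / m)            # root-mean-square speed
    return v_x, v_avg, v_rms

print(speeds(28.0, 300.0))   # N2 at room temperature -> v_x ~ 298 m/s
print(speeds(18.0, 310.0))   # H2O at body temperature -> v_x ~ 378 m/s
```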


Some links: http://en.wikipedia.org/wiki/Root-mean-square_speed



Thursday 23 February 2017

terminology - Kinematics: Circular Motion


What is the difference between angular velocity and angular speed? Is angular velocity after one complete rotation zero? Is the magnitude of angular velocity always equal to angular speed?




newtonian mechanics - Magnus effect: Where is the energy coming from?


I am not an expert in physics, so please go easy on me. The Magnus effect creates a force perpendicular to the axis an object spins around and to its velocity vector. But where does that energy come from? Does the object lose energy by slowing down its spin or its velocity? Every object that experiences the Magnus effect loses velocity through friction, but does it lose considerably more energy than it would without spin (assuming the object is a sphere or a cylinder)?




Answer



The Magnus effect, which applies to objects with circular symmetry about the spin axis, stems from the fact that the side of the object spinning towards the air flow (from forward motion) sees a higher air speed than the side spinning away from it. The thin layer of well-behaved (jargon: laminar) air becomes turbulent earlier on the faster side (jargon: it has a higher Reynolds number), so the turbulent wake behind the object is lopsided. This asymmetry produces the sideways force.

This phenomenon is not easily thought of in terms of energy. Yes, the object slows down and spins more slowly because of drag, but these changes affect the forward speed and the amount of sideways force respectively. I would guess that, yes, comparing two objects that start with the same total kinetic energy, one with only translational kinetic energy and the other with translational AND rotational kinetic energy, the spinning one will transfer more energy to the surrounding fluid.


resource recommendations - Is there an online course in Mathematical Methods for Physics, Covering Matrices and Vector Analysis?




I am taking a course in Mathematical Methods for Physics (junior level). We are working from Mathematical Methods for Physicists by George B. Arfken. I just need online resources covering matrices, determinants, vector analysis, tensors and differential forms, vector spaces, etc., to study and understand from.




optics - What are these rays that appear in photograph of sun?


In many images of light-emitting objects we see such rays. Why do they appear? What is the math behind their number and direction?


(photograph of the Sun showing ray-like spikes)



Answer



Those are artifacts of having obstructions in the optics. Ideally, we think of the intensity being recorded as the (squared magnitude of the) Fourier transform of the wavefront passing through the aperture. That is, whenever a wavefront is brought into focus, it undergoes a Fourier transform (in the Fraunhofer limit).


This transform is affected by the aperture, which excludes any part of the wavefront outside a certain range. Indeed for a circular aperture, the Fourier transform of a uniform light source is the familiar Airy pattern. Convolving an ideal image (made with an infinite aperture) with the Airy function results in the familiar bloom seen in photographs of bright light sources.


Many camera apertures, however, use polygons for ease of manufacturing, and so bright sources are convolved with a more complicated function, which can result in your rays. You should take a look at this paper, which discusses simulating these effects and more, since the prevalence of film and photographs in the modern world makes these artifacts almost necessary to lend realism to an image, despite the fact that they are rarely seen with our own eyes. Figure 3 in particular shows a camera aperture, and Figure 5 shows the relation to the Fourier transform.


In summary: each straight edge of the aperture (iris) diffracts light into a streak perpendicular to itself, running through the centre in both directions. For an even number of edges the streaks from opposite, parallel edges overlap, so the number of spikes equals the number of edges; for an odd number of edges you get twice as many spikes. Their orientation is just the orientation of the aperture edges, i.e. of the camera.
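
A quick way to convince yourself of this (a minimal illustrative sketch, not taken from the paper cited above): build a polygonal aperture mask and look at its Fourier transform.

```python
import numpy as np
import matplotlib.pyplot as plt

N_PIX, N_EDGES = 512, 6          # grid size; hexagonal iris
y, x = np.mgrid[-1:1:N_PIX*1j, -1:1:N_PIX*1j]

# Regular polygon as an intersection of half-planes:
# a point is inside if x.n_k <= r_in for every edge normal n_k
angles = 2 * np.pi * np.arange(N_EDGES) / N_EDGES
mask = np.ones((N_PIX, N_PIX), bool)
for a in angles:
    mask &= (x * np.cos(a) + y * np.sin(a)) <= 0.4

# Far-field (Fraunhofer) diffraction pattern ~ |FFT of the aperture|^2
psf = np.abs(np.fft.fftshift(np.fft.fft2(mask))) ** 2

plt.imshow(np.log10(psf + 1e-12), cmap='gray')
plt.title('6 edges -> 6 spikes, each perpendicular to an edge')
plt.show()
```

Changing `N_EDGES` to 5 produces ten spikes, since no two edges of a pentagon are parallel.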


Other obstructions within the aperture can cause similar effects. For telescopes with a secondary mirror or other elements in the optical path (a very common design), the obstruction and its support struts have their Fourier transforms imprinted on the image. The thin struts in particular cause noticeable artifacts, even in Hubble images, as can be seen in the wiki on diffraction spikes.


Wednesday 22 February 2017

electromagnetism - Two electron beams exert different forces on each other depending on frame of reference?


I am sure there is a simple explanation for my confusion, but I am a little puzzled:


We are dealing with two parallel electron cannons, each producing a straight beam of electrons. They are placed a distance $d$ from each other, which means there will be a repulsive electrostatic force between the electrons of the two beams. Also, according to the Maxwell equations, every moving electron produces a magnetic field, which then exerts another force on the other beam's moving electrons (similar to the effect that two parallel currents in wires cause an attractive force between the wires).


However, if we change the reference frame such that we are moving at the same speed alongside the electrons, there will be no magnetic field and no Lorentz Force. We are only left with the electrostatic force.


Do you guys know what might be the fallacy?



Answer



There is no fallacy, you're just not being particularly careful. You need to include both the electric and magnetic forces of the right magnitude and a covariant result drops out. (Of course historically it went the other way around: people noticed that frame changes were messed up unless the transformation laws were different and this led to the development of special relativity.)


For simplicity let both beams be very thin and have equal uniform charge density $\rho$ in the rest frame and suppose they run exactly parallel separated by a distance $l$. Let the velocity of the beams be $v$ in the lab frame.


Rest frame:


Taking the usual gaussian pillbox gives the electric field of one beam at the location of the other as



$$ \vec{E}_\text{rest} = \frac{\rho}{2\pi\epsilon_0 l} \hat{r}, $$


where $\hat{r}$ is the unit vector directed away from the source beam. Thus the force on a single particle in the second beam (charge $q$) is:


$$ \vec{F}_\text{rest} = \frac{\mathrm{d}\vec{p}}{\mathrm{d}t_\text{rest}} = \frac{\rho q}{2\pi\epsilon_0 l} \hat{r}. $$


Lab frame:


The charge density of the beam is enhanced by the relativistic $\gamma=1/\sqrt{1-v^2/c^2}$ factor. Thus the electric field is:


$$ \vec{E}_\text{lab} = \frac{\gamma \rho}{2\pi\epsilon_0 l} \hat{r}. $$


There is also a magnetic field of magnitude


$$ B = \frac{\mu_0 \gamma\rho v}{2\pi l} $$


and directed so as to produce an attractive force. Plugging these in the Lorentz force formula


$$ \vec{F}_\text{lab} = q (\vec{E}_\text{lab} + \vec{v}\times\vec{B}) = q\left( \frac{\gamma \rho}{2\pi\epsilon_0 l} - \frac{\mu_0 \gamma\rho v^2}{2\pi l}\right) \hat{r} = \frac{\rho q}{2\pi\epsilon_0 l}\gamma\left(1 - \epsilon_0\mu_0 v^2\right) \hat{r}. $$



Using $\epsilon_0 \mu_0 = c^{-2}$ this reduces to $\vec{F}_\text{lab} = \gamma^{-1} \vec{F}_\text{rest}$ which, on noting the relativistic time dilation $\mathrm{d}t_\text{lab} = \gamma \mathrm{d}t_\text{rest}$, is exactly right! Note that I've used the fact that the force is orthogonal to the velocity implicitly when writing the Lorentz transformation law for the force. You can prove the covariance for general motions using the covariant formulation of EM.
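
For the skeptical, here is a small numeric check of the algebra (a sketch; the beam parameters are arbitrary illustrative values):

```python
import numpy as np
from scipy.constants import epsilon_0, mu_0, c, e

rho = 1e-9        # line charge density in the rest frame (C/m), illustrative
l   = 0.01        # beam separation (m), illustrative
v   = 0.8 * c     # beam speed in the lab frame
q   = -e          # electron charge
gamma = 1 / np.sqrt(1 - v**2 / c**2)

F_rest = q * rho / (2 * np.pi * epsilon_0 * l)

E_lab = gamma * rho / (2 * np.pi * epsilon_0 * l)   # density enhanced by gamma
B_lab = mu_0 * gamma * rho * v / (2 * np.pi * l)    # field of the moving line charge
F_lab = q * (E_lab - v * B_lab)                     # electric repulsion minus magnetic attraction

print(np.isclose(F_lab, F_rest / gamma))   # True: F_lab = F_rest / gamma
```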


Lesson:


Relativity and electromagnetism go together like hand and glove!


statistical mechanics - A thermodynamic transformation that can be represented by a continuous quasistatic path in its state space may still be irreversible. Why?


A thermodynamic transformation that has a path (in its state space) that lies on the surface of its equation of state (e.g., $PV=NkT$) is always reversible (right?). However, if the path is a continuous quasistatic path in state space but not on the surface of equation of state, it is considered irreversible.


Why is this so? In this case the gas still has uniform thermodynamic variables (because all the intermediate states are points in the state space). Why can it not be reversed exactly along the same path?



Answer



Let's look at your first statement:




A thermodynamic transformation that has a path (in its state space) that lies on the surface of its equation of state (e.g., $PV=NkT$) is always reversible



I don't think this is right, but there may be some implicit qualifiers in your statement that I'm missing.


Here's an example to illustrate why. Consider a system consisting of a thermally insulated container with a partition in the middle dividing it into compartments $A$ and $B$. Suppose that we fill each side of the container with an equal number of molecules of a certain monatomic ideal gas. Let the temperature of the gas in compartment $A$ be $T_A$ and the temperature of the gas in compartment $B$ be $T_B$. Suppose further that the partition is designed to very very slowly allow heat transfer between the compartments.


If $T_B>T_A$, then heat will irreversibly transfer from compartment $B$ to compartment $A$ until the gases come into thermal equilibrium. During this whole process, each compartment has a well-defined value for each of its state variables since the whole process was slow ("infinitely" slow in an idealized setup). However, the process was not reversible because heat spontaneously flowed from a hotter body to a colder body. You can also show that the sum of the entropies of the subsystems $A$ and $B$ increased during the process to convince yourself of this.
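
To make the last point concrete (a short worked example, assuming the two compartments have equal, temperature-independent heat capacities $C$ at constant volume): the final common temperature is $T_f=(T_A+T_B)/2$, and the total entropy change of the composite system is

$$ \Delta S = C\ln\frac{T_f}{T_A} + C\ln\frac{T_f}{T_B} = C\ln\frac{(T_A+T_B)^2}{4\,T_A T_B} \geq 0, $$

with equality only when $T_A=T_B$ (by the AM-GM inequality), so the total entropy strictly increases whenever the initial temperatures differ.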


So we have found a process for which the thermodynamic evolution of each subsystem can be described by a continuous curve in its state space, but the process is irreversible.


Addendum - 2017-06-01 See also the relevant answer here: https://physics.stackexchange.com/a/297509/19976


Addendum - 2017-07-02 After more thought, although I originally conceded to Valter Moretti in the comments that the composite system $A + B$ should not be considered in equilibrium during the process because each subsystem has a different temperature, I no longer believe that this means the process cannot be considered a quasistatic process for the composite system. I currently believe that as long as the process is sufficiently slow for the energy $U$, volume $V$, and particle number $N$ to be well-defined for each subsystem $A$ and $B$ during the process, it can be considered quasistatic for the composite system. If we consider very slow heat transfer ("infinitely slow" in order to approach the ideal), then at all points in time each subsystem will be very close to equilibrium, in such a way that the physical process is well-described by a continuous curve in the composite thermodynamic state space with coordinates $(U_A, V_A, N_A, U_B, V_B, N_B)$ lying on the surface satisfying the equations of state. But because this curve does not lie on the intersection of that surface with the hyperplane of constant total entropy, the process is irreversible.


thermodynamics - Is there an equivalence between Boltzmann entropy and Shannon entropy..?



I have already read other posts about this subject, but none of them seems to answer my question completely.


[This question should be seen in relation to this one: Is there an equivalence between information, energy and matter?]


Indeed, according to Bekenstein:



the thermodynamic entropy and Shannon entropy are conceptually equivalent.


the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information that would be needed to implement any particular arrangement ...



...of matter and energy


Meanwhile, opinions differ:


for some, thermodynamic entropy can be seen as a specific instance of Shannon entropy. In short, the thermodynamic entropy is a Shannon entropy, but not necessarily vice versa.



for others, Shannon entropy is a mathematical quantity defined on "abstract systems" and has nothing to do with thermodynamic entropy.


So, is there a consensus answer to the question:


Is there an equivalence between Boltzmann entropy and Shannon entropy..?



Answer



Boltzmann's entropy formula can be derived from the Shannon entropy formula when all states are equally probable.


Say you have $W$ equiprobable microstates, each with probability $p_i=1/W$. Then:


$S=-k\sum{p_i \ln p_i}=k\sum{ (\ln W)/W}=k\ln W$


Another way to obtain this result is to maximise $S$ subject to the constraint $\sum{p_i}=1$, using a Lagrange multiplier: extremise


$\mathcal{L} = -k\sum{p_i \ln p_i} - \lambda\left(\sum{p_i}-1\right)$


over the $p_i$.


Adding more constraints results in a lower-entropy distribution (such as the canonical distribution when adding the energy constraint, and the grand canonical distribution when adding energy and particle-number constraints).



As a side note, it can also be shown that the Boltzmann entropy is an upper bound on the entropy a system with a fixed number of microstates can have, meaning:


$S\leq k \ln W$


This can also be interpreted as saying that the uniform distribution is the one with the highest entropy (or the least information, if you prefer). Someone was kind enough to prove this for me here: https://math.stackexchange.com/questions/2748388/proving-that-shannon-entropy-is-maximal-for-the-uniform-distribution-using-conve
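
A quick numerical check of both statements (a minimal sketch; any randomly drawn distribution will do):

```python
import numpy as np

W = 8
p_uniform = np.full(W, 1.0 / W)
p_random = np.random.dirichlet(np.ones(W))   # a random distribution over W states

H = lambda p: -np.sum(p * np.log(p))         # Shannon entropy (in units of k)

print(np.isclose(H(p_uniform), np.log(W)))   # True: equiprobable case gives ln W
print(H(p_random) <= np.log(W) + 1e-12)      # True: k ln W is an upper bound
```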


homework and exercises - How does the Moon cause the tides?


I am considering the following question, but I can't quite figure it out...


(image of the homework problem, which refers to points $A$ and $B$)


I have looked up differential gravity, but I cannot derive the equation for the effect on Earth, and I haven't found any good links for it. I have seen this question, but it doesn't seem to derive the forces on different parts of the Earth, or how the force variation affects the water. I initially tried to consider the component of the Moon's gravitational force acting normal to the Earth's surface. This would be $F = F_0 \cos(\theta)$, where $\theta$ is the angle between the horizontal line through $A$ and $B$ and the point on the Earth's surface in question. But now I am stuck. I wanted to deal with forces rather than equipotentials, as it seems like that is what the question wants because of the 'hence'...


How do I start with any method, be it equipotentials or forces?




cosmology - The universe appears to have a lower bound in the time dimension, why not an upper bound?


The Big Bang looks like a lower bound to the "size" of the universe in the time dimension. Could it also have an upper bound, some furthest point in time from the Big Bang?




Tuesday 21 February 2017

Is my understanding of electromagnetic waves correct




My understanding of electromagnetic waves is that the Earth's core has charged particles, so there is an electric field; when those charged particles move they create a magnetic field, and the Earth has a magnetic field. Electromagnetic waves consist of electric and magnetic waves, so since the Earth supports both of these, electromagnetic waves can exist in the air. Am I correct? If not, please tell me how electromagnetic waves work. I have asked why electromagnetic waves don't need a vacuum to move through, but the answers are too complicated, so if you're going to answer, please answer as if you were talking to your friend who knows nothing about physics, not as if you were talking to another physicist.




classical mechanics - Example of Hamilton's Principle to Systems with Constraints (Goldstein)


I'm currently studying Goldstein's Classical Mechanics book and I can't get my head around his reasoning in section 2.4. (Extending Hamilton's principle to systems with constraints). I'd like to understand the example he gives. Here it comes:




Consider a smooth solid hemisphere of radius $a$ placed with its flat side down and fastened to the Earth whose gravitational acceleration is $g$. Place a small mass $M$ at the top of the hemisphere with an infinitesimal displacement off center so the mass slides down without friction. Choose coordinate $x, y, z$ centered on the base of the hemisphere with $z$ vertical and the $x$-$z$-plane containing the initial motion of the mass.



Let $\theta$ be the angle from the top of the sphere to the mass. The Lagrangian is $L = \frac{1}{2}\cdot M \cdot (\dot x^2 + \dot y^2 + \dot z^2) - m\cdot g\cdot z$. The initial conditions allow us to ignore the $y$ coordinate, so the constraint equation is $a - \sqrt{x^2 + y^2} = 0$. Expressing the problem in terms of $r^2 = x^2+z^2$ and $x/z = \cos(\theta)$, Lagrange's equations are $$ Ma\dot\theta^2 - M g \cos(\theta) + \lambda = 0$$ and $$ Ma^2\ddot \theta + M g\, a \sin (\theta) = 0$$


Solve the second equation and then the first to obtain $$ \dot\theta^2 = -\frac{2g}{a}\cos(\theta) + \frac{2g}{a}$$ and $$ \lambda = M g (3\cos(\theta)-2)$$ So $\lambda$ is the magnitude of the force keeping the particle on the sphere and since $\lambda = 0$ when $\theta = \cos^{-1}(\tfrac{2}{3})$, the mass leaves the sphere at that angle.




I have the following questions:




  1. Shouldn't it be $x/z = \tan \theta$?




  2. Could it be that he's mixing up $r$ and $a$? My guess is that from "Lagrange's equations are" it should say $r$ instead of $a$. I get confused whether $a$ is a system parameter or a Lagrangian multiplier.





  3. Could you give me a) an explanation or b) a good read on why setting $L' = L + \lambda\cdot f$ gives us an analogue of Hamilton's principle on constraint systems? I don't understand Goldstein's derivation. ($L$ is the original Lagrangian, $f$ is the constraint and $\lambda$ is the Lagrangian multiplier.)




  4. Why can $\lambda$ be thought of as the constraint force?




When I understand 3., I understand the example -- I reverse engineered the supposedly Lagrangian equations to see that $L'$ needs to be of form $$\frac{1}{2}M r^2 \dot\theta^2 - Mrg\cos(\theta) + \lambda \cdot f$$ with generalized coordinates $\theta$ and $r$. Then everything works out just fine.



Answer





  1. Yes, it should be $x/z = \tan \theta$; this is probably a typo.

  2. The constraint should be $a - \sqrt{x^2 + z^2}=0$ for the argument to make sense. $r$ is a coordinate which is variable but due to the constraint it will always be equal to $a$, so we can use $a$ in the equations instead. ($\dot{a}=0$).

  3. You know that the gradient of $f$ is always perpendicular to the surfaces of constant $f$, so you can understand the extra term coming from $\partial L'/\partial x_i$ as a force acting perpendicular to the surface $f=0$, holding the particle on it. However, $\lambda$ has to be solved for so that the motion of the system stays on the constraint surface. Now imagine a force equal to the solved $\lambda\, \partial f/\partial x_i$: it would have the same effect as the constraint, so this is the force the constraint actually has to exert to hold the particle/system. (This also answers 4. See also the numerical cross-check below.)
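
As a numerical cross-check on the example (a small sketch; $g$ and $a$ are set to arbitrary values since the departure angle does not depend on them; note that consistency with the quoted solution $\dot\theta^2 = \frac{2g}{a}(1-\cos\theta)$ requires $\ddot\theta = +\frac{g}{a}\sin\theta$, so the sign in the quoted second Lagrange equation is another typo):

```python
import numpy as np
from scipy.integrate import solve_ivp

g, a = 9.81, 1.0

# theta'' = (g/a) sin(theta), consistent with the energy integral above
def rhs(t, y):
    theta, omega = y
    return [omega, (g / a) * np.sin(theta)]

# lambda = M g (3 cos(theta) - 2) vanishes when the mass leaves the sphere
def leaves(t, y):
    return 3 * np.cos(y[0]) - 2
leaves.terminal = True

sol = solve_ivp(rhs, (0, 10), [1e-6, 0.0], events=leaves, rtol=1e-10, atol=1e-12)
theta_exit = sol.y_events[0][0][0]
print(np.degrees(theta_exit), np.degrees(np.arccos(2 / 3)))  # both ~ 48.19 degrees
```

The event fires at $\theta \approx 48.19^\circ$, matching $\cos^{-1}(2/3)$.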


optics - Seeing something from only one angle means you have only seen (what?)% of its surface area at most?


Is there a logical/mathematical way to derive what the very maximum percentage of surface area you can see from one angle of any physical object?


For instance, if I look at the broad side of a piece of paper, I know I have only seen 50% of its surface area (minus the surface area of the very thin sides). Is 50% always the maximum amount of surface area you can see of any object from one angle?


Assumptions: This is assuming we aren't considering transparent/semi-transparent objects or the extra parts we can see with the help of mirrors. Just looking at the very surface of an object from one particular angle.



Answer



There is no such upper bound.


As a simple counter-example, consider a thin solid right circular cone of base radius $r$ and height $h$, observed on-axis from some large(ish) distance $z$ beyond the cone tip. You then observe the tilted sides, of area $\pi r\sqrt{r^2+h^2}$, and you don't observe the area of the base, $\pi r^2$, so you observe a fraction \begin{align} q &=\frac{\pi r\sqrt{r^2+h^2}}{\pi r^2+ \pi r\sqrt{r^2+h^2}} \\ &= \frac{\sqrt{1+r^2/h^2}}{r/h+\sqrt{1+r^2/h^2}} \\ &\approx 1- \frac rh \end{align} of the surface, in the limit where $r/h\ll 1$, and this can be arbitrarily close to $1$ so long as the cone is thin enough and long enough.
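
A quick numerical check of the limit (a trivial sketch):

```python
import numpy as np

def visible_fraction(r_over_h):
    """Fraction of a right circular cone's surface seen on-axis from the tip side."""
    s = np.sqrt(1 + r_over_h**2)      # slant factor sqrt(1 + r^2/h^2)
    return s / (r_over_h + s)

for ratio in [1.0, 0.1, 0.01, 0.001]:
    print(ratio, visible_fraction(ratio), 1 - ratio)  # last column: the 1 - r/h approximation
```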



quantum mechanics - Why can't we break the speed of light in vacuum?



I was wondering what possible reason (or reasons) there could be for why we cannot break the speed-of-light barrier. I was reading this article, where they stated that quantum action is 10,000 times faster than light.
I am not from a physics background, but if quantum particles interact faster than the speed of light, why don't they go back in time?
(I had read that if we could travel faster than the speed of light, we would go back in time.)


So my two questions are:



  1. Why can't we break the speed barrier? Is there a reason, or is it just a fact that we have to accept?


  2. If quantum particles move faster than light, then why don't they travel back in time?


PS: I have read the possible duplicates, but I did not understand the possible reason for not being able to break the speed barrier.




acoustics - What sound frequency can be heard the greatest distance by humans?


What sound frequency can be heard the greatest distance by humans? Assume a pure tone, a single frequency, the same source SPL (dB) for each frequency, outdoors with no obstacles between source and listener. I believe the answer would be the result of combining the effects of atmospheric attenuation (as a function of frequency, humidity, and temperature) with the perceived loudness by humans as a function of frequency. I think the answer would be in the range of 2 kHz-3 kHz.



Answer



Cool question! I've been curious about this problem for a long time, but I've never spent the time to actually try the calculations. Note: I arrive at a different conclusion than @niels neilsen by adding the atmospheric absorption part.




At long distances, molecules in the atmosphere (mainly diatomic Oxygen and Nitrogen) act as a sink of acoustic energy and thus filter acoustic signals. The effect is most pronounced at high frequencies.


Where I live a typical summer day is about 13 °C and 70% relative humidity. A typical atmospheric pressure is 96 kPa. The effect with distance due to atmospheric absorption is as follows: (plot of atmospheric absorption vs. frequency at several distances)


(The code I used to make the calculations is here, in case you want to tweak things for your part of the globe. Otherwise you're going to have to take my word that the general shape of the curve doesn't change much with temp/RH/atmospheric pressure and my answer is generally applicable.)


In any case you can see that high frequencies are significantly influenced at distance.





We can imagine a source of white noise that, by definition, has equal acoustic intensity at all frequencies. That's convenient because we can continue to scale the attenuation with atmospheric absorption until it becomes inaudible, that is, where it intersects the ISO 226:2003 equal-loudness contour for the threshold of human hearing. (Plot: attenuated white-noise spectra against the ISO 226:2003 hearing-threshold contour.)


By this approach it would appear that at least for atmospheric conditions in my neck of the woods the last remaining audible frequency would be somewhere around 700 Hz.
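
A minimal sketch of the intersection logic (the absorption coefficients and thresholds below are rough illustrative placeholders, not the ISO 9613-1 / ISO 226:2003 data used for the figures, and the source level is arbitrary):

```python
import numpy as np

freqs  = np.array([125, 250, 500, 1000, 2000, 4000, 8000])    # Hz
alpha  = np.array([0.4, 1.0, 1.9, 3.7, 9.7, 32.0, 110.0])     # dB/km, placeholder absorption
thresh = np.array([22.0, 11.0, 4.0, 2.0, -1.0, -5.0, 12.0])   # dB SPL, rough hearing threshold
L_src = 80.0   # source level at every frequency (white noise), arbitrary

# distance at which each frequency drops to the hearing threshold (absorption only)
d_km = (L_src - thresh) / alpha
for f, d in zip(freqs, d_km):
    print(f"{f:5d} Hz audible out to ~{d:7.1f} km")
print("last audible frequency:", freqs[np.argmax(d_km)], "Hz")
```

With coarse placeholder numbers like these, the absorption-only estimate favours the lowest band; the smooth, frequency-continuous data used for the figures above (and the chosen source level) are what push the answer to roughly 700 Hz, so treat this only as a demonstration of the method.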


Disclaimer: I'm not sure how often atmospheric absorption would be the limiting factor outdoors. More likely your source would be attenuated by terrain or ground effect well before you got far enough away for atmospheric attenuation to do the job. Additionally your sound source may be masked by other ambience before you could get far enough away for atmospheric attenuation to matter this much. Then the last frequency you would hear would depend entirely on the environment you were in.


quantum field theory - Can bosons have anti-particles?


Can bosons have anti-particles? In the past, I would have answered this question with a yes, primarily because I can imagine writing down a QFT for complex scalars that has a $U(1)$ symmetry that allows me to assign a conserved charge. That is, I expect to obtain a charged spin-0 boson with an additive quantum number. A $CP$-transformation would change these quantum numbers into their negatives and I would consider the corresponding particle an anti-particle.


Of course I know at the same time that Standard Model particles, such as the $Z$-boson and the Higgs boson, are considered not to have observable anti-particles (in the way that electrons have, for instance). On the other hand, mesons are considered (composite) bosons and are known to have anti-particles. I used to take the viewpoint that the mentioned elementary bosons are their own anti-particles, because they are charge-neutral.


After reading, by chance, an interview with Geoff Taylor (Melbourne) I am a bit confused, however. He says that bosons can not have anti-particles, because this property is restricted to Fermions and explicitly refutes the idea that they are their own anti-particles:




"Really fermions are the things where we have this idea of a particle and anti-particle pair," says Taylor, "anti-particles at the fundamental level are fermions with the opposite charge."


"The $W+$ and $W-$ bosons only differ by charge so it's an easy mistake to talk about it that way [as particle and anti-particle], but it's just a pair of different charges."


"While they behave in some sense like particle and anti-particle, we don't think of one as the anti-particle counterpart of the other because they're force carriers," says Taylor


"Fermions have conservation laws associated with them, so for example they are created in particle-anti-particle pairs, the sum of their quantum numbers cancelling to maintain the conservation laws," explains Taylor.


"Bosons operate under different laws and can be created singly. This is a crucial distinction and is in nature of being either matter particles or force carriers."



(It should perhaps be mentioned that he works in experimental HEP-data analysis and not theory, but still he could know more.)


Which, if any, of these viewpoints is correct?



Answer




In the Standard Model, there is no electrically charged elementary spin-0 boson (though there are many charged spin-0 composite particles). However, in many extensions, such as supersymmetry, there are such particles: the scalar partner of the electron, the selectron, carries the same charge as the electron. The anti-selectron is the spin-0 partner of the positron. Thus the answer to your question is yes.


Monday 20 February 2017

hilbert space - Bra-ket notation and linear operators


Let $H$ be a hilbert space and let $\hat{A}$ be a linear operator on $H$.


My textbook states that $|\hat{A} \psi\rangle = \hat{A} |\psi\rangle$. My understanding of bra-kets is that $|\psi\rangle$ is a member of $H$ and that $\psi$ alone isn't defined to be anything, so $|\hat{A}\psi\rangle$ isn't defined.


Is $|\hat{A} \psi\rangle = \hat{A} |\psi\rangle$ just a notation or is there something deeper that I am missing?



Answer



This should be understood as a mere definition, i.e. a new label for the state you get when you apply the operator $\hat{A}$ to the ket $|\psi\rangle$.
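
In finite dimensions the definition is transparent (a trivial sketch):

```python
import numpy as np

# On a finite-dimensional Hilbert space, kets are column vectors and
# operators are matrices; |A psi> is literally just the vector A @ psi.
A = np.array([[0, 1], [1, 0]])     # e.g. the Pauli-x operator
psi = np.array([1, 0])             # the ket |0>
A_psi = A @ psi                    # the ket |A psi> = A|psi>, here |1>
print(A_psi)
```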


newtonian mechanics - Acceleration in free fall




A body in free fall "feels" no gravitational force (the equivalence principle). Why does it continue to accelerate?




Analytic solutions to time-dependent Schrödinger equation


Are there analytic solutions to the time-dependent Schrödinger equation, or is the equation too difficult to solve non-numerically?


Specifically: are there solutions of the time-dependent Schrödinger equation for an infinite potential step, in both the time-dependent and time-independent cases?


I have looked, but everyone seems to focus on the time-independent Schrödinger equation.




conservation laws - Why isn't there a quantum number associated with the Lorentz boost?


In Schwartz's QFT, there is a problem (3.2) on p. 42 asking to calculate the conserved current $K_{\mu\nu\rho}$ associated with the Lorentz transformation $x^\mu\rightarrow\Lambda^\mu{}_\nu x^\nu$. Its expression is


$$K^\mu{}_{\nu\rho}=T^\mu{}_{[\rho}x_{\nu]} \tag{1}$$ where the square brackets denote antisymmetrization.


One can define the conserved quantities $$Q_j=\int d^3x~K^0{}_{0j},\tag{2}$$ which induce the boosts. Being conserved, one means that $$\frac{dQ_j}{dt}=0.\tag{3}$$


However, Schwartz also stated that $\frac{dQ_j}{dt}=0$ is consistent with $$i\frac{\partial Q_j}{\partial t}=[Q_j,H]~! \tag{4}$$


Now, let us look at the definition of $Q_j$ more carefully. You will find that $Q_j$ is a function of $t$ only, so $\frac{\partial Q_j}{\partial t}=\frac{dQ_j}{dt}$. But $[Q_j,H]\ne0$! What is going on?




Sunday 19 February 2017

metric tensor - Gauge transformation in general relativity


I am studying Weinberg's Cosmology. In chapter 5, he describes 'General theory of Small Fluctuations' where he considered a general coordinate transformation $$ x^{\mu} \to x^{\prime\mu}= x^{\mu} + \epsilon^{\mu}(x)\tag{5.3.1} $$ and how the metric transforms under this transformation. After that he writes, and I quote



Instead of working with such transformations, which affect the coordinates and unperturbed fields as well as the perturbations to the fields, it is more convenient to work with so-called gauge transformations, which affect only the field perturbations. For this purpose, after making the coordinate transformation $(5.3.1)$, we relabel coordinates by dropping the prime on the coordinate argument, and we attribute the whole change in $g_{\mu\nu} (x)$ to a change in the perturbation $h_{\mu\nu}(x)$.



Can anyone please explain what he meant by 'gauge transformation' (the dropping of the prime) here and how and why is it different from the general coordinate transformation?


I will appreciate an elaborate answer and some reference to literature.



Answer



The whole idea is that the division between the "background" and the "perturbation" is arbitrary. Hence, as long as the coordinate form of the metric is affected only linearly by the coordinate transform, we can "reassign" a different part of the metric as the "background" and as the perturbation so that the coordinate form of the metric is always fixed.



For concreteness, consider the background to be the Minkowski metric in Cartesian coordinates, $\eta = \mathrm{diag}(-+++)$, with some perturbation $h$, so $g=\eta +h$. Now we make an infinitesimal coordinate transform. This transform generally takes the coordinates away from Cartesian, and $\eta$ is no longer of the form $\mathrm{diag}(-+++)$; it is now $\eta=\mathrm{diag}(-+++) + \delta$ with some small $\delta$. However, since $\delta$ is small, we can redefine the division between the background and the perturbation so that the background metric is $\mathrm{diag}(-+++)$ even after the coordinate transform, with the perturbation redefined as $h' = h+\delta$.


This procedure may seem contrived at this point but it leads to a structure of equations where $h$ behaves exactly as a massless spin-2 field on a curved background. The infinitesimal transformation parameters $\epsilon^\mu$ then play the role of gauge potentials.


I.e. the same way we have a gauge transform of the electromagnetic potential $$A_\mu \to A_\mu + \chi_{,\mu}$$ for some gauge potential $\chi$, the infinitesimal transform along with the "redivision" procedure given above yield the "gauge transform" $$h_{\mu \nu} \to h_{\mu \nu} + \epsilon_{\mu;\nu} + \epsilon_{\nu;\mu}$$ which is exactly the same as a gauge transform for a spin-2 field on a curved background.
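
As a consistency check, here is a small symbolic verification (a sketch on a flat background, with partial derivatives standing in for the covariant ones) that a pure-gauge perturbation $h_{\mu\nu} = \partial_\mu \epsilon_\nu + \partial_\nu \epsilon_\mu$ produces a vanishing linearized Riemann tensor, i.e. that the gauge transform above changes nothing physical:

```python
import sympy as sp

coords = sp.symbols('t x y z')
eps = [sp.Function(f'eps{m}')(*coords) for m in range(4)]

# pure-gauge perturbation: h_{mu nu} = d_mu eps_nu + d_nu eps_mu (flat background)
h = [[sp.diff(eps[n], coords[m]) + sp.diff(eps[m], coords[n])
      for n in range(4)] for m in range(4)]

def lin_riemann(m, n, r, s):
    """Linearized Riemann tensor R_{mnrs} of the perturbation h on flat space."""
    d = lambda f, a, b: sp.diff(f, coords[a], coords[b])
    return sp.Rational(1, 2) * (d(h[m][s], r, n) + d(h[n][r], s, m)
                                - d(h[m][r], s, n) - d(h[n][s], r, m))

# every component vanishes identically for a pure-gauge h
print(all(sp.simplify(lin_riemann(m, n, r, s)) == 0
          for m in range(4) for n in range(4)
          for r in range(4) for s in range(4)))   # True
```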


Understanding the stagnation point in a pitot tube

What is a stagnation point in fluid mechanics? At the open end of the pitot tube the velocity of the fluid becomes zero. But that should result...