Wednesday 31 October 2018

pair production - Virtual and Real particles


In a discussion with my astrophysics lecturer, we ended up talking about virtual particles. One of the most astonishing phenomena he pointed out was the fact that the annihilation of virtual particles produces no light, even though real particles clearly do. We then started thinking of the Casimir effect, where you bring two plates (Casimir plates) so close to each other that a force arises (the gap is tiny, barely enough for one electron). The main reason is that virtual particles appear on the different sides of the plates.


Now let's take a positron that is on the other side of the plate and has no way of interacting with its pair electron inside. What would happen if this virtual positron were to annihilate with a real electron? Would you get light, and if so, how much of it would you get?




fluid dynamics - Propeller blades flat vs wing shape


Does a propeller blade with the shape of a wing produce more lift than a flat blade, and will a blade with a wing shape produce lift in a ducted-fan situation?





quantum field theory - Mass gap for photons


I am puzzled by the answers to the question:


What is a mass gap?


There, Ron Maimon's answer gives a clear-cut definition, which I suppose applies to any quantum field theory with Hamiltonian $H$, that the theory has a mass gap if there is a positive constant $A$ such that $$\langle \psi| H |\psi \rangle\geq \langle 0 |H | 0 \rangle +A$$ for all nonzero (normalized) $\psi$.


But then, Arnold Neumaier says


QED has no mass gap, as observable photons are massless states.


I would quite appreciate a brief explanation of this statement. The definition is concerned with the minimum possible energy for non-zero states. So I don't see why the photons having zero mass would imply the absence of a mass gap.




Tuesday 30 October 2018

commutator - Quantizing a complex Klein-Gordon Field: Why are there two types of excitations?


In most references I've seen (see, for example, Peskin and Schroeder problem 2.2, or section 2.5 here), one constructs the field operator $\hat{\phi}$ for the complex Klein-Gordon field as follows:


First, you take the Lagrangian density for the classical Klein-Gordon field


$$ \mathcal{L}=\partial_\mu \phi^\dagger\partial^\mu\phi-m^2\phi^\dagger\phi \tag{1} $$ and find the momentum conjugate to the field $\phi$ via


$$ \pi=\frac{\partial\mathcal L}{\partial \dot\phi}=\dot\phi^\dagger.\tag{2} $$ Then, one imposes the usual canonical commutation relations on $\hat\phi$ and $\hat\pi$:


$$ [\hat\phi(x),\hat\pi(y)]=i\delta^3(x-y).\tag{3} $$ So, one needs to find operators $\hat{\phi}$ and $\hat\pi$ such that they obey the above commutation relations, and such that $\hat\pi=\dot\phi^\dagger$. The textbooks then go on to show that defining


$$ \hat{\phi}(x)=\int\frac{d^3\vec{p}}{(2\pi)^3}\frac{1}{\sqrt{2p_0}}[a_p^\dagger e^{-i p_\mu x^\mu}+b_pe^{i p_\mu x^\mu}]\tag{4} $$ $$ \hat{\pi}(x)=i\int\frac{d^3\vec{p}}{(2\pi)^3}\sqrt{\frac{p_0}{2}}[a_p e^{i p_\mu x^\mu}-b_p^\dagger e^{-i p_\mu x^\mu}]\tag{5} $$ where $a$ and $b$ are bosonic annihilation operators, satisfies these properties.


My question is: Why do we need two different particle operators to define $\hat\phi$ and $\hat\pi$? It seems to me that one could simply define


$$ \hat{\phi}(x)=\int\frac{d^3\vec{p}}{(2\pi)^3}\frac{1}{\sqrt{2p_0}}a_p e^{-i p_\mu x^\mu}\tag{6} $$ $$ \hat{\pi}(x)=i\int\frac{d^3\vec{p}}{(2\pi)^3}\sqrt{\frac{p_0}{2}}a_p^\dagger e^{i p_\mu x^\mu}\tag{7} $$ with $\hat{a}_p$ a single bosonic annihilation operator. Then clearly $\hat{\pi}=\dot{\hat{\phi}}^\dagger$, and also


$$ \begin{array}{rcl} [\hat\phi(x),\hat\pi(y)]&=&i\int\frac{d^3p}{(2\pi)^3}\frac{d^3q}{(2\pi)^3}\sqrt{\frac{q_0}{4p_0}}e^{i(q_\mu y^\mu-p_\mu x^\mu)}[a_p,a_q^\dagger]\\ &=&i\int\frac{d^3p}{(2\pi)^3}\frac{d^3q}{(2\pi)^3}\sqrt{\frac{q_0}{4p_0}}e^{i(q_\mu y^\mu-p_\mu x^\mu)}(2\pi)^3\delta^3(p-q)\\ &=&i\int\frac{d^3p}{(2\pi)^3}\frac{1}{2}e^{ip_\mu (y^\mu-x^\mu)}\\ &=&\frac{i}{2}\delta^3(y-x)\\ \end{array}\tag{8} $$



which is, up to some details about normalizing the $\hat{a}_p$, correct. We would then have a Klein-Gordon field with just one kind of excitation, the $\hat{a}_p$ excitation. Why do all textbooks claim we need two separate bosonic excitations, $\hat{a}_p$ and $\hat{b}_p$?



Answer



The point is that the quantization procedure is usually only valid for real-valued physical observables. All versions of it treat the classical observables as real functions on phase space (things get more complicated for fermions, which I will ignore here), and associate quantum observables to those. For instance, the harmonic oscillator annihilation operator $a = x + \mathrm{i}p$ is not really an object one is allowed to look at in classical Hamiltonian mechanics - complex-valued functions do not occur, or rather, they are no different from just a pair of real-valued functions that represent the real and imaginary parts.


Therefore, to quantize a complex scalar field $\phi$, we must write it as $\phi = \phi_1(x) + \mathrm{i}\phi_2(x)$, and quantize both of the real scalar fields separately. This yields the usual mode expansion of the complex scalar field with two different sets of creation/annihilation operators. For a real field, we can treat $a_p$ and $a^\dagger_p$ as operators because we can obtain them from the Fourier transform of the fields $\phi(x)$ and $\pi(x)$, which are real-valued and hence operators after quantization. Both the Fourier transform and the computation of $a_p$ and $a_p^\dagger$ must be thought of as being carried out after quantization to be consistent with the derivation of the commutation relations of $a_p,a_p^\dagger$ from the CCR of $\phi$ and $\pi$.


Additionally, note that your attempt is inconsistent with the quantization of the real scalar field in another way: When we impose $\phi = \phi^\dagger$ on your scalar field, we also get $a = a^\dagger$ because $\dot{\phi} = \dot{\phi}^\dagger = \pi$ in that case, which contradicts their non-zero commutation relation. So your version of the quantization of the complex scalar field does not reduce to the quantization of the real scalar field, and is hence an entirely different quantization prescription.
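To make the "two independent sets of operators" statement concrete, here is a minimal numerical sketch (my own illustration, not part of the original answer; the truncation size and labels are arbitrary assumptions). It builds independent mode operators $a_1$, $a_2$ for the two real fields $\phi_1$, $\phi_2$ on a truncated Fock space and checks that the combinations $a=(a_1+\mathrm{i}a_2)/\sqrt{2}$ and $b=(a_1-\mathrm{i}a_2)/\sqrt{2}$ behave as two mutually commuting bosonic operators - the two excitations of the complex field.

```python
import numpy as np

# Minimal sketch (hbar = 1): single-mode annihilation operator on an N-level truncation
N = 12
ladder = np.diag(np.sqrt(np.arange(1, N)), 1)
I = np.eye(N)

# Independent mode operators for the two real fields phi_1 and phi_2
a1 = np.kron(ladder, I)
a2 = np.kron(I, ladder)

# Candidate particle and antiparticle operators of the complex field
a = (a1 + 1j * a2) / np.sqrt(2)
b = (a1 - 1j * a2) / np.sqrt(2)

def comm(A, B):
    return A @ B - B @ A

# Projector onto states well below the truncation, where the ladder algebra is faithful
keep = np.diag((np.arange(N) < N - 1).astype(float))
P = np.kron(keep, keep)

checks = {
    "[a, a^dag] = 1": comm(a, a.conj().T) - np.eye(N * N),
    "[b, b^dag] = 1": comm(b, b.conj().T) - np.eye(N * N),
    "[a, b]     = 0": comm(a, b),
    "[a, b^dag] = 0": comm(a, b.conj().T),
}
for label, residue in checks.items():
    print(label, np.allclose(P @ residue @ P, 0))
```

Both $a$ and $b$ survive as independent oscillators, which is the operator-level counterpart of quantizing $\phi_1$ and $\phi_2$ separately.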


resource recommendations - Good book on the history of Quantum Mechanics?




Can anyone recommend a good book on the history of quantum mechanics, preferably one that is technical and not afraid to explain the maths (I did a degree in physics many years ago), and also one that explains the developments using the technical language of the time rather than using treatments as we now understand them? If it also explained the input from experiments, that would be great too.



Answer



You might like Inward Bound by Abraham Pais.


The author was a particle physicist. The book is mostly a history of particle physics, but quantum mechanics is heavily intertwined. Otherwise it meets your criteria perfectly.


Monday 29 October 2018

homework and exercises - Determining the refractive index of a foil


(59th Polish Olympiad in Physics, final stage, experimental part, 2010)



You have at your disposal:



  • a sample of blue foil of a homogeneous material, placed between two glass panes in a slide frame

  • a laser pointer

  • a light-power meter composed of a photodiode, a battery and a voltmeter, whose readings are proportional to the power of light falling on the active surface of the photodiode

  • graph paper

  • two wooden blocks


  • adhesive tape


Determine the refractive index of the material the blue foil is made of, at the wavelength of the laser light.


Note 1: The thickness of the foil is approximately 0.1 mm and the thickness of the glass panes on both of its sides is approximately 1 mm. Between the panes and the foil there is a very thin layer of a liquid, which has a refractive index very close to the refractive index of glass.


Note 2: The frame is not shut completely - it should not be pressed or opened. Make sure not to smear the surface of the panes.



There is no official solution available for this problem. How could it be solved?




optics - Effect of lens diameter on spot size


I would like to know if the lens diameter, i.e. the size of the lens, affects the focused spot size. Given a parallel beam of diameter $d$, is there a lower limit on the lens diameter $D$ below which aberration effects appear, simple linear ray tracing fails, and the spot is no longer the smallest size possible?


I am participating in a competition which requires focusing a laser beam. Since the beam diameter of a diode laser is small, I wish to know if I could use a lens whose size is orders of magnitude larger than the beam diameter to focus it to a small spot.




probability - Why was quantum mechanics regarded as a non-deterministic theory?


There seems to be a widespread impression that quantum mechanics is not deterministic, e.g. that the world is quantum-mechanical and not deterministic.


I have a basic question about quantum mechanics itself. A quantum-mechanical object is completely characterized by its state vector. The time-evolution of the state vector is perfectly deterministic. The system, equipment, environment, and observer are all part of the state vector of the universe. Measurements with different results are parts of the state vector at different spacetime points. The measurement is a complicated process between system and equipment. The equipment has $10^{23}$ degrees of freedom, whose states we neither know nor are able to compute. In this sense, the situation of QM is quite similar to statistical physics. Why can't the situation be just like statistical physics, where we introduce an assumption to simplify the calculation, namely that every accessible microscopic state has equal probability? In QM, we likewise introduce an assumption about the probabilistic measurement to produce the measurement outcome.


PS1: If we regard non-determinism as an intrinsic feature of quantum mechanics, then the measurement has to disobey the Schrödinger picture.


PS2: The argument above does not obey Bell's inequality. In the local hidden variable theory from Sakurai's Modern Quantum Mechanics, a particle with $z+$, $x-$ spin measurement results corresponds to a $(\hat{z}+,\hat{x}-)$ "state". If I just say that the time-evolution of the universe is $$\hat{U}(t,t_0) \lvert \mathrm{universe} (t_0) \rangle = \lvert \mathrm{universe} (t) \rangle.$$ When the $z+$ was obtained, the state of the universe is $\lvert\mathrm{rest} \rangle \lvert z+ \rangle $. Later, when the $x-$ was obtained, the state of the universe is $\lvert\mathrm{rest}' \rangle \lvert x- \rangle $. This is deterministic, and does not require a hidden-variable setup as in Sakurai's book.


PS3: My question is just about quantum mechanics itself. It is entirely possible that the final theory of nature will require drastic modification of QM. Nevertheless it is outside the current question.



PS4: One might say the state vector is probabilistic. However, the result of a measurement happens in the equipment, which is a part of the total state vector. Giving a probabilistic interpretation within a deterministic theory is logically inconsistent.




Sunday 28 October 2018

refraction - Ray tracing in an inhomogeneous medium



If I have an optically transparent slab with refractive index $n$ depending on the distance $x$ from the surface of the slab, the refractive index can be described by: $$n(x)=f(x)$$ where $f(x)$ is a generic function of $x$, so we can write: $$\dfrac{dn(x)}{dx}=f'(x)$$ Snell's law of refraction states: $$n_1\sin(\theta_1)=n_2\sin(\theta_2)$$ How can I write the equation of the ray tracing through the slab? Thanks



Answer



You need to learn about the Eikonal equation and its implications. When the electromagnetic field vectors are locally plane waves, i.e. over length scales of several wavelengths and less they are well approximated by plane waves, then the phase of either $\vec{E}$ or $\vec{H}$ (or of $\vec{A}$ and $\phi$ in Lorenz gauge) can be approximated by one scalar field $\varphi(x,\,y,\,z)$ which fulfils the Eikonal equation:


$$\left|\nabla \varphi\right|^2 = \frac{\omega^2\,n(x,\,y,\,z)^2}{c^2}$$


where, of course, $n(x,\,y,\,z)$ describes your refractive index as a function of position. This equation can be shown to be equivalent to Fermat's principle of least time and also implies Snell's law across discontinuous interfaces. The ray paths are the flow lines (exponentiation) of the vector field defined by $\nabla\,\varphi$. Otherwise put: the rays always point along the direction of maximum rate of variation of $\varphi$, whilst the surfaces normal to the rays are the surfaces of constant $\varphi$, i.e. the phase fronts. A little fiddling with the Eikonal equation shows that the parametric equation for a ray path, i.e. $\vec{r}(s)$ as a function of the arclength $s$ along the path is defined by:


$$\frac{\mathrm{d}}{\mathrm{d}\,s}\left(n(\vec{r}(s))\,\,\frac{\mathrm{d}}{\mathrm{d}\,s} \vec{r}\left(s\right)\right) = \left.\nabla n\left( \vec{r}(s)\right)\right|_{\vec{r}\left(s\right)}$$


This is where you can take things up. You have $n(x,y,z)$ depends only on $x$, so $\nabla n$ will always be in the $\vec{x}$ direction. Everything stays on one plane; let this be the $x-z$ plane and the position of the point on the path is $(x(s),\,z(s))$. We thus get two nonlinear DEs which can be quite hard to solve:


$$\frac{{\rm d}\,n(x)}{{\rm d} s}\,\frac{{\rm d}\,x}{{\rm d} s} + n(x)\, \frac{{\rm d}^2 x}{{\rm d} s^2} = n^\prime(x)$$


$$\frac{{\rm d}\,n(x)}{{\rm d} s}\,\frac{{\rm d}\,z}{{\rm d} s} + n(x)\, \frac{{\rm d}^2 z}{{\rm d} s^2} = 0$$


so you generally need to make some approximation depending on what kind of ray you are dealing with. In fibre optics, for example, you may want to assume that the rays make small angles with the $z$ direction so that $s\approx z$, whence you would get:



$$\frac{{\rm d}\,n(x(z))}{{\rm d} z}\,\frac{{\rm d}\,x(z)}{{\rm d} z} + n(x)\, \frac{{\rm d}^2 x}{{\rm d} z^2} = n^\prime(x)$$


and then you would need to make further approximations depending on the fibre profile.
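If one prefers not to approximate at all, the ray-path equation can simply be integrated numerically. A rough sketch is below (the index profile, step size and launch angle are arbitrary illustrative choices, not anything from the question); it marches $\frac{\mathrm{d}}{\mathrm{d}s}\left(n\,\frac{\mathrm{d}\vec{r}}{\mathrm{d}s}\right)=\nabla n$ for a ray confined to the $x$-$z$ plane.

```python
import numpy as np

def trace_ray(n, dn_dx, x0, z0, theta0, ds=1e-4, n_steps=40000):
    """Integrate d/ds( n dr/ds ) = grad n for a ray in the x-z plane.

    n(x), dn_dx(x): refractive-index profile and its derivative (illustrative).
    theta0: initial angle of the ray with respect to the z axis.
    """
    x, z = x0, z0
    # "Ray vector" components p = n * dr/ds
    px = n(x) * np.sin(theta0)
    pz = n(x) * np.cos(theta0)
    path = [(x, z)]
    for _ in range(n_steps):
        # dr/ds = p / n ;  d(px)/ds = dn/dx ;  d(pz)/ds = 0 since n depends on x only
        x += ds * px / n(x)
        z += ds * pz / n(x)
        px += ds * dn_dx(x)
        path.append((x, z))
    return np.array(path)

# Example: a parabolic graded-index profile n(x) = n0 * (1 - 0.5 * (g*x)**2)
n0, g = 1.5, 2.0
n = lambda x: n0 * (1.0 - 0.5 * (g * x) ** 2)
dn_dx = lambda x: -n0 * g ** 2 * x

path = trace_ray(n, dn_dx, x0=0.0, z0=0.0, theta0=0.05)
print(path[::8000])   # the ray oscillates about x = 0, as expected in such a medium
```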


A good reference for all this is Born and Wolf, Principles of Optics, Chapter 4 or the first half of Snyder and Love, Optical Waveguide Theory.


Saturday 27 October 2018

fluid dynamics - Viscous drag proportional to $r$ or $r^3$?


When a spherical object is falling at terminal velocity through a fluid: $W=U+F$, where $W$ is weight, $U$ is upthrust and $F$ is viscous drag. Rewriting, using Stokes' law, we get: $$\frac{4}{3}\pi r^3\rho_{object}.g=\frac{4}{3}\pi r^3\rho_{fluid}.g+6\pi r\eta v_t$$ $$6\eta v_t= \frac{4}{3} r^2\rho_{object}.g - \frac{4}{3} r^2\rho_{fluid}.g$$ $$v_t = \frac{2r^2(\rho_{object}-\rho_{fluid}).g}{9\eta},$$


where $r$ is the radius of the object, $\rho$ is density, $g$ is the gravitational acceleration, $\eta$ is the viscosity coefficient, and $v_t$ is the terminal velocity of the object.



So $v_t$ varies proportionally to $r^2$.


However, clearly upthrust varies proportionally to $r^3$. This implies that $F$ varies proportionally to $r$ so that $v_t$ can be proportional to $r^2$. But $F=6\pi r \eta v_t$, so as we increase $r$, $v_t$ increases quadratically, meaning that $F$ must increase cubically.


This seems to be a contradiction - $v_t$ is proportional to $r^2$ so $F$ is proportional to $r$, but also $F$ is proportional to $r^3$.


So where is my reasoning wrong, and does $F$ vary proportionally to $r$ or $r^3$?


EDIT:


I found this quote on the Wikipedia page for Stokes' law:



Note that since buoyant force increases as $R^3$ and Stokes drag increases as $R$, the terminal velocity increases as $R^2$ and thus varies greatly with particle size as shown below.
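A quick numerical check of that scaling (my own sketch, with made-up water-like values) illustrates the resolution: doubling $r$ multiplies $v_t$ by 4 and the drag at terminal velocity by 8, so the drag force tracks $r^3$ as a function of particle size even though the Stokes coefficient in front of $v$ only grows like $r$.

```python
import numpy as np

g = 9.81           # m/s^2
eta = 1.0e-3       # Pa s, roughly water (illustrative)
rho_obj = 2500.0   # kg/m^3, object density (illustrative)
rho_fl = 1000.0    # kg/m^3, fluid density (illustrative)

def terminal_velocity(r):
    return 2 * r**2 * (rho_obj - rho_fl) * g / (9 * eta)

def stokes_drag(r, v):
    return 6 * np.pi * r * eta * v

for r in (1e-5, 2e-5):                       # double the radius
    v = terminal_velocity(r)
    F = stokes_drag(r, v)
    print(f"r = {r:.0e} m   v_t = {v:.3e} m/s   F = {F:.3e} N")

# Doubling r multiplies v_t by 4 and F by 8: at terminal velocity the drag equals the
# r^3 weight-minus-upthrust, while at a *fixed* velocity Stokes drag only scales as r.
```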





quantum field theory - Feynman rules for coupled systems


I have the following system of two coupled real scalar fields $\sigma$ and $\phi$:


$S[\phi,\sigma]=\int{d^4x[-\frac{1}{2}\partial_\mu\phi\partial^\mu\phi-\frac{1}{2}m^2\phi^2-\frac{1}{2}M^2\sigma^2-\frac{\lambda}{2}\sigma\phi^2]}$.


What would the Feynman rules be for this system? I realise that something will be different about the propagator for $\sigma$ as it has no kinetic term, but I'm not sure how that translates into the rules.


Furthermore, how would you draw the Feynman diagrams which determine the amplitude for a 2 -> 2 scattering process associated to the field $\phi$ (at tree level)?


The whole idea is perplexing me a bit and I was wondering if there was any insight here!




gravity - If a photon goes up, does it come down?


If light can be bent by gravity, does a mass as dense as a star pull any fraction of photons back towards itself?



Answer



For visible stars, the answer is no. In Newtonian physics, a star that would pull something travelling at light speed back to itself, i.e. a star for which the escape velocity were $c$, was called a dark star and seems to have been first postulated by the Rev. John Michell in a paper to the Royal Society in London in 1783. The great Pierre-Simon de Laplace postulated the same idea some years later. It is important to take heed that in theory there was nothing stopping something escaping a dark star by climbing a rope let down by a helpful spaceship, nor was there any known lightspeed limit in those days.


In General Relativity, the analogous concept is a black hole. By definition, if a star is not a black hole, light shone upwards will escape the star's gravitational field, although light is red-shifted in doing so, heavily so if the star is massive. Moreover, in GTR there is no faster than light communication, and gravity is not thought of as a force. In GTR, a black hole is no longer something that a friendly spaceship dangling a rope could help you escape from. A black hole curves space and time such that the futures of anything within the Schwarzschild horizon must lie wholly within the black hole. You can no more escape from a black hole than you can go backwards in time; indeed these two deeds are the same thing in the "curved" spacetime of the black hole.


Edit As CuriousOne points out, a quantum mechanical treatment of the black hole shows that photons can be emitted as Hawking Radiation. This theoretical foretelling was made by Stephen Hawking in 1974: the theory is piecemeal and ad hoc, but it is very simple and fundamental, so I don't believe many physicists seriously believe Hawking radiation will be absent from a full quantum theory of gravity. For "normal sized" black holes formed from collapse of stars, this radiation is exquisitely faint, but microscopic black holes emit much stronger Hawking radiation.


electric circuits - Why does current density change but not current?



Let us assume that we have a conductor with a specified resistance (case 1) and a normal conductor (case 2) as shown in the figure, and we now apply an external electric field to it with a battery. We know that the current density will be different in the two parts; moreover, the two parts of the wire have different resistances because the cross-sectional area changes, but the potential difference will be equal since the lengths of the two parts are equal. So is it correct to say that the current running through both parts is equal? Please answer for both cases. (figure omitted)



Answer



The potential difference across them wouldn't be equal because their lengths are equal. It would only be equal if their resistances were equal, which as you point out they are not.


Friday 26 October 2018

group theory - Lie Algebra Conventions: Hermitian vs. anti-Hermitian


Consider the Lie algebra of $SU(2)$.


To find the infinitesimal generators we linearise about the identity $$U=I+i\alpha T$$ where $\alpha$ is some small parameter. To find the form of $T$ use the condition $\textrm{det}(U)=1$ to find $\textrm{Tr}(T)=0$ and also $U^{\dagger}U=I$ to give $T=T^{\dagger}$ Hermitian.


But instead linearising as $$U=I+\alpha T$$ we would find the conditions $\textrm{Tr}(T)=0$ and $T=-T^{\dagger}$ anti-Hermitian, which seemingly results in a different Lie algebra. I think the former approach is the one usually used (and results in a nicer answer). Is there some rule that determines whether the factor of $i$ should be used in this process, or is it just a matter of convenience?



Answer



The factor of $i$ is generally a matter of convention. Essentially, it boils down to choosing what constant you'd like sitting in front of the defining equation,



$$[T^a,T^b] = f_{abc} T^c$$


of the structure constants $f_{abc}$ of the Lie group. We could have instead a factor of $i$ or any constant in our definition and it is a matter of convention.


There is also some freedom in choosing the normalisation of the 'inner product' $\mathrm{Tr}(T^a T^b)$ though there are restrictions depending on if the group is compact for instance.


In my own experience, physicists keep a factor of $i$ explicit and in the mathematical literature it is usually omitted.
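A quick way to see the two conventions side by side is to check them numerically with the Pauli matrices (a small sketch of my own, not part of the answer): with Hermitian generators $T^a=\sigma^a/2$ the structure constants come with an explicit $i$, $[T^a,T^b]=i\epsilon_{abc}T^c$, while with the anti-Hermitian choice $\tilde{T}^a=i\sigma^a/2$ they are real, $[\tilde{T}^a,\tilde{T}^b]=-\epsilon_{abc}\tilde{T}^c$.

```python
import numpy as np

# Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def comm(A, B):
    return A @ B - B @ A

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

T = [s / 2 for s in sigma]            # Hermitian convention (common in physics)
Tt = [1j * s / 2 for s in sigma]      # anti-Hermitian convention (common in maths)

for a in range(3):
    for b in range(3):
        rhs_phys = sum(1j * eps[a, b, c] * T[c] for c in range(3))   # [T^a,T^b] = i eps T^c
        rhs_math = sum(-eps[a, b, c] * Tt[c] for c in range(3))      # [Tt^a,Tt^b] = -eps Tt^c
        assert np.allclose(comm(T[a], T[b]), rhs_phys)
        assert np.allclose(comm(Tt[a], Tt[b]), rhs_math)
print("both conventions close on the same su(2) algebra")
```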


general relativity - Do photons lose energy due to gravitational redshift? If so, where does the lost energy go?




In the gravitational redshift, the frequency of photons radiated from some source is reduced. As the energy of a photon is given by $\hbar\omega$, if the frequency is reduced where is the lost energy?




lagrangian formalism - Prove energy conservation using Noether's theorem



I wonder how you prove that energy is conserved under a time translation using Noether's theorem. I've tried myself but without success. What I've come up with so far is that I start by inducing the following symmetry transformation \begin{align} \mathrm{h}_s:\ &q(t) \rightarrow \mathrm{h}_s(q(t)) = q(t)\\ \hat{\mathrm{h}}_s:\ &\dot{q}(t) \rightarrow \hat{\mathrm{h}}_s(\dot{q}(t)) = \dot{q}(t)\\ &t \rightarrow t^\prime = t+s\epsilon \end{align} $\mathrm{h}_s$ is a symmetry of the Lagrangian if: $$ L(\mathrm{h}_s(q(t)),\hat{\mathrm{h}}_s(\dot{q}(t)),t^\prime) = L(q,\dot{q},t) + \frac{\textrm{d}}{\textrm{dt}}F_s $$ Then I differentiate with respect to $s$ and look for an extremum. $$ \frac{\partial}{\partial s}\Big(L(\mathrm{h}_s(q(t)),\hat{\mathrm{h}}_s(\dot{q}(t)),t^\prime) - \frac{\textrm{d}}{\textrm{dt}}F_s\Big)=0 $$ I find the derivative to be $$ \frac{\partial L}{\partial \mathrm{h}_s(q(t))}\frac{\partial\mathrm{h}_s(q(t))}{\partial s}+\frac{\partial L}{\partial \hat{\mathrm{h}}_s(\dot{q}(t))}\frac{\partial\hat{\mathrm{h}}_s(\dot{q}(t))}{\partial s}+\frac{\partial L}{\partial t^\prime}\frac{\partial t^\prime}{\partial s}- \frac{\textrm{d}}{\textrm{dt}}\frac{\partial F_s}{\partial s}=0 $$ $$ \Rightarrow \frac{\partial L}{\partial t^\prime}\epsilon-\frac{\textrm{d}}{\textrm{dt}}\frac{\partial F_s}{\partial s} = \frac{\partial L}{\partial t}\frac{\mathrm{dt}}{\mathrm{dt^\prime}}\epsilon -\frac{\textrm{d}}{\textrm{dt}}\frac{\partial F_s}{\partial s} = \frac{\partial L}{\partial t}\epsilon -\frac{\textrm{d}}{\textrm{dt}}\frac{\partial F_s}{\partial s} = 0 $$ Here is the part where I get stuck. I don't know what to do next. I'm trying to show that the Noether charge corresponding to a time translation is the Hamiltonian. Is there an easier or better way to do this? Please teach me, I'm dying to learn!


I found this book, Lanczos, The variational principles of mechanics, page 401, which explicitly shows the energy conservation using Noether's theorem. Though it seems that I cannot follow the step from equation 7 to 8. Can someone explain to me why the integral looks the way it does? Have they Taylor expanded the expression somehow?



Answer



Comments to OP's post (v4):




  1. OP is trying to prove via Noether's theorem that no explicit time dependence of the Lagrangian leads to energy conservation.




  2. OP's transformation seems to be a pure horizontal infinitesimal time translation $$\tag{A} t^{\prime} - t ~=:~\delta t ~=~-\epsilon, \qquad \text{(horizontal variation)}$$ $$\tag{B} q^{\prime i}(t) - q^i(t)~=:~\delta_0 q^i ~=~0, \qquad \text{(no vertical variation)}$$ $$\tag{C} q^{\prime i}(t^{\prime}) - q^i(t)~=:~\delta q^i ~=~-\epsilon\dot{q}. \qquad \text{(full variation)}$$ It is explained in my Phys.SE answer here why this transformation (A)-(C) cannot be used to prove energy conservation.





  3. In eq. (1) on p. 401, the Ref. 1 is instead considering the following infinitesimal transformation $$\tag{A'} t^{\prime} - t ~=:~\delta t ~=~-\epsilon, \qquad \text{(horizontal variation)}$$ $$\tag{B'} q^{\prime i}(t) - q^i(t)~=:~\delta_0 q^i ~=~\epsilon\dot{q}, \qquad \text{(vertical variation)}$$ $$\tag{C'} q^{\prime i}(t^{\prime}) - q^i(t)~=:~\delta q^i ~=~0. \qquad \text{(full variation)}$$ This is the same infinitesimal transformation as Section IV in my Phys.SE answer here, except for the fact that $\epsilon\equiv\alpha$ is allowed to be a function of time $t$. Therefore the variation of the action $S\equiv A$ is not necessarily zero, but of the form $$ \tag{8} \delta S ~=~\int\! dt ~j \frac{d\epsilon}{dt}, $$ where the bare Noether current $j=h$ is the energy function, cf. eq. (8) on p. 402 in Ref. 1. The $t$-dependence in $\epsilon$ is tied to the Noether trick explained in this Phys.SE post. This in turn can be pieced together into a proof of the on-shell energy conservation $$ \tag{9}\frac{dh}{dt}~\approx~0,$$ cf. eq. (9) on p. 402 in Ref. 1.
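As a concrete, low-tech companion to the above (my own sketch, not taken from the references): for any Lagrangian with no explicit time dependence one can verify symbolically that the energy function $h=\dot{q}\,\partial L/\partial\dot{q}-L$ is conserved once the Euler-Lagrange equation is imposed. Here a harmonic oscillator stands in for the Lagrangian.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')
qd = sp.diff(q(t), t)

# A stand-in Lagrangian with no explicit time dependence (harmonic oscillator)
L = sp.Rational(1, 2) * m * qd**2 - sp.Rational(1, 2) * k * q(t)**2

p = sp.diff(L, qd)                        # canonical momentum dL/d(qdot)
h = qd * p - L                            # energy function h = qdot * p - L

el = sp.diff(p, t) - sp.diff(L, q(t))     # Euler-Lagrange expression (= 0 on-shell)

# Off-shell, dh/dt equals qdot times the Euler-Lagrange expression,
# so imposing the equation of motion gives dh/dt = 0.
print(sp.simplify(sp.diff(h, t) - qd * el))     # -> 0
```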




References:



  1. C. Lanczos, The variational principles of mechanics, 1970; Appendix II.


mathematics - What areas of physics depend on the sum $1 + 2 + 3 + 4 + 5 + 6 + 7 + \ldots = -1/12$?



This youtube video from Numberphile, http://youtu.be/w-I6XTVZXww



shows how the value is derived. In the video, one interviewee claims that "this result is used in many areas of physics". In the video, only string theory is mentioned.


Which areas of physics use or depend on the sum $$1 + 2 + 3 + 4 + 5 + 6+ 7+\ldots= -1/12?$$
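As a side note on where the finite value comes from: in zeta-function regularisation the divergent sum $\sum_{n\geq 1}n$ is replaced by the analytic continuation $\zeta(-1)$ of $\zeta(s)=\sum_{n\geq 1}n^{-s}$, which can be checked in a couple of lines (a small illustrative sketch):

```python
import sympy as sp

# zeta(s) = sum n**(-s) converges only for Re(s) > 1; zeta(-1) below is its
# analytic continuation, which is what the "1+2+3+... = -1/12" statement refers to.
print(sp.zeta(-1))                         # -> -1/12
print(sp.Rational(-1, 12) == sp.zeta(-1))  # -> True
```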




optics - Expectation value for the time of a photon reflection



A photon is reflected by matter (by an electron in empty space). How long does the reflection take? (i.e. is there any infinitesimal time elapsing during the reflection process?) More precisely, what is the time difference, i.e. the time retardation caused by the interaction, compared with propagation at the vacuum speed of light? A very approximate number would be sufficient for the answer.




Thursday 25 October 2018

thermodynamics - Ultra-relativistic gas



What is the physical significance of the relation $E=3NkT$ for classical ultra-relativistic gas? Why is it greater than ideal gas for which $E=(3/2)NkT$?



Answer



Nice question. Sometimes we get used to a certain fact, such as equipartition with $(1/2)kT$ per degree of freedom, that we forget that it's not always true, or what assumptions are required in order to make it true. I had to refresh my memory on how equipartition works.


Basically the $(1/2)kT$ form of the equipartition theorem is a special case that only works if the energy consists of terms that are proportional to the squares of the coordinates and momenta. The 1/2 comes from the exponent in these squares.


The WP article on equipartition has a discussion of this. There is a general equipartition theorem that says that


$$\langle x \frac{\partial E}{\partial x} \rangle = kT,$$


where $x$ could be either a coordinate or a conjugate momentum. If $E$ has a term proportional to $x^m$, the partial derivative has a factor of $m$ in it. In the ultrarelativistic case, where $E\propto\sqrt{p_x^2+p_y^2+p_z^2}$, you don't actually have a dependence on the momenta (momentum components) that breaks down into terms proportional to a power of each momentum. However, I think it's pretty easy to see why we end up with the result we do, because in one dimension, we have $|\textbf{p}|=|p_x|$, which does have the right form, with an exponent of 1.
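The $E=3NkT$ result can also be checked directly by averaging $E=c|\mathbf{p}|$ over the classical Boltzmann distribution in three dimensions; a small symbolic sketch of my own (constant prefactors of the measure are dropped since they cancel in the ratio):

```python
import sympy as sp

p, c, kT = sp.symbols('p c kT', positive=True)

# Boltzmann weight for an ultra-relativistic particle, E = c|p|, with the
# spherical measure p**2 dp (angular factors cancel in the ratio below)
weight = sp.exp(-c * p / kT) * p**2

Z = sp.integrate(weight, (p, 0, sp.oo))                  # normalisation
E_avg = sp.integrate(c * p * weight, (p, 0, sp.oo)) / Z  # average single-particle energy

print(sp.simplify(E_avg))    # -> 3*kT, i.e. E = 3NkT for N particles
```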


linear algebra - Linearity in Quantum Mechanics that make superposition possible



As a beginner in QM, all the video lectures that I have seen talk about superposing wave functions in order to get $\psi$. But from what I know from linear algebra, the system must be linear in order for us to do this superposition.


So, what tells us that quantum systems are linear systems? Does it come out of experimental results or from some intuitive physical explanation? If it's the first, then if we treat all quantum mechanical systems as linear, how can we find a non-linear system that might exist but has not been seen in labs yet? (I mean that in this way we exclude all possibilities that a non linear quantum system might exist). If it's the second, then can you give me that intuitive explanation?




cosmology - The initial conditions of the CMB spectrum


The CMB spectrum shows the intensity of fluctuation at a certain angular scale:


(figure: CMB angular power spectrum omitted)


The achievement is the correspondence between the predicted power spectrum and the observed one.


My question is as follows:


Isn't the prediction terribly dependent on the initial conditions/the pattern of acoustic oscillations at that exact moment? Since the maxima correspond to modes caught, at that particular moment, at their oscillation extrema.




What exactly is 'Dark Matter'?



There are many documentaries, forums, blogs and more dedicated to dark matter. I have been frantically searching for an answer to my question, however none of my sources bring clarity to the matter at hand. I would really love a clear explanation of: What exactly is Dark Matter? Please help me to have a clear understanding.



Answer



We don't know.



Though there are several ideas about what dark matter could be, e.g. the humorously abbreviated WIMPs, all we know about dark matter is that it is massive (from light deflection, etc.) and that it does not interact electromagnetically, and probably also not via the strong force. Other than that, there is no sufficiently tested theory of dark matter to pronounce with confidence what it is. We only know what it is not (i.e. not EM charged, not strongly charged, and there are probably a few other constraints from observation).


Also, though highly unlikely, it could be that it is our theory of gravity, i.e. GR, that needs to be modified. In that case, it could be that there is no additional unknown matter, just different gravitational interactions from what we currently think.


experimental physics - The synthesis of $^{254}text{No}$


How is $^{254}\text{No}$ synthesised?


Could you explain the reaction in which it is preceded by $^{208}\text{Pb}(^{48}\text{Ca}, 2\text{n})$, i.e. the notation $^{208}\text{Pb}(^{48}\text{Ca}, 2\text{n})\,^{254}\text{No}$?


References to articles are good enough; I was somehow unable to find anything sufficiently detailed and informative.



Answer



The notation X(Y,Z)W is a compact way of describing nuclear and particle experiments.




  • Particles that appear to the left of the comma (,) are in the initial state and those that appear to the right are in the final state.


  • The energy and/or momenta of particles that appear inside the parenthesis are measured. Particles that appear outside have unmeasured energy and/or momentum.


    One caveat here: some (or all) unmeasured initial energy and momentum may be deduced on the basis that they represent a fixed target material.




Unobserved final-state particles are often omitted, and sometimes a notation like $X$ is used to imply many possible final states (i.e. an inclusive measurement).


So, when I say that my dissertation looked at $A(e,e'p)$, I mean that I fired an electron beam at fixed nuclear targets (we used $^1\mathrm{H}$ for calibration and acceptance; $^2\mathrm{H}$; $^{12}\mathrm{C}$; and $^{56}\mathrm{Fe}$) and measured the coincident electrons and protons emerging from quasi-elastic scattering events. The recoiling nucleus was unobserved, and other events were cut during analysis.


Similarly, the notation above suggests that calcium nuclei were accelerated into a lead target, and the fast ejecta were observed. Those events with exactly two ejected neutrons were selected, leaving an unobserved heavy nucleus assumed to be $^{254}\mathrm{No}$ (the assumption is good if you really understand the measured ejecta).


newtonian mechanics - Stone thrown in empty space


This is a supplementary question to What happens if object is thrown in empty space?


Via the following logic:


$$E = \frac{(mv)v}{2}\\ E = \frac{pv}{2}\\ \Delta E = \Delta p\times \Delta \frac{v}{2}\\ \Delta p = m\times \Delta v \\ \Delta p = ma\\ \Delta p = F\\ \Delta E = F \times \Delta \frac{v}{2}$$


Just when an object leaves our hand, the energy gets stored as kinetic energy, which can be represented as $\frac12 mv^2$. If we say that there was an initial kinetic energy and calculate the change in that, then $\Delta E$ gets related to $F$. If there is a force then there is acceleration as well, which means there should be an acceleration of the object.



Is the above logic correct? If not, why?




electromagnetism - Magnetic Flux conservation



(figure: circuit diagram omitted)



My teacher said that after the switch is shifted (after a very long time), $\phi_i = \phi_f \implies i_oL = i\cdot 3L \implies i = \dfrac{i_o}{3}$, where $i_o = \dfrac{\varepsilon}{R}$.


So the initial current in the circuit after the switch is shifted is $\dfrac{i_o}{3}$.


But, I really didn't understand why the flux should be conserved in this case i.e. why $\phi_i = \phi_f$. I would like to know about this concept and the reasons involved.


Are $\phi_i$ and $\phi_f$ the total flux in both inductors immediately before and immediately after the switch is shifted?




About Zeroth Law of thermodynamics vs. Geology


In the zeroth law of thermodynamics, it clearly says that systems connected together are in thermal equilibrium.


My question is:



In Earth science, scientists say that the Earth's core temperature is 6000 °C, but the Earth's crust temperature is only 200 °C to 392 °C. Why aren't the temperatures of the Earth's crust, mantle and core the same? The laws of thermodynamics state that systems in contact will reach thermal equilibrium, like thermometers: if they come into contact with us, we and they end up at the same temperature, right?


But why aren't the Earth's layers in thermal equilibrium? If the geologists are right, is thermodynamics wrong? If thermodynamics is right, is geology wrong? Which is right, the laws of thermodynamics or geology? Because if the Earth's layers were in thermal equilibrium, the Earth's core could not be at 6000 °C; or, if the Earth's core is at 6000 °C, then the Earth's crust must be at 6000 °C as well, right?




nuclear physics - Energy Released in a Fission Reaction


I've been told the following is incorrect, but I can't really see how.


Consider the fission event described by the equation $$ \rm ^{235}U+n\rightarrow{}^{93}Rb+{}^{140}Cs+3n $$ The energy released is given by \begin{align} Q&=\Delta mc^2 \\&=(m_\mathrm U+m_\mathrm n-m_\mathrm{Rb}-m_\mathrm{Cs}-3m_\mathrm n)c^2 \\&=(m_\mathrm U-m_\mathrm {Rb}-m_\mathrm {Cs}-2m_\mathrm n)c^2 \end{align} and writing the nuclear masses in terms of the masses of their constituent nucleons less the nuclear binding energy, we find $$ Q=B_\mathrm{Rb}+B_\mathrm{Cs}-B_\mathrm{U} $$ You could then estimate these binding energies using the SEMF, giving a value of roughly $$Q=145\rm\,MeV$$ (in reality it should be something like $200\rm\,MeV$). However, I'm just concerned about whether or not the general method is correct. Thanks in advance.
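To make the last step concrete, here is a rough sketch of the SEMF estimate (my own illustration, not from the question). The coefficients are one common textbook fit, in MeV; different fits and the handling of the pairing term shift the result by tens of MeV, so the number this prints is only in the same rough ballpark as the ~145 MeV quoted above.

```python
# Semi-empirical mass formula estimate of Q for 235U + n -> 93Rb + 140Cs + 3n.
# Coefficients (MeV) are one common textbook fit; other fits give somewhat different values.
aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(A, Z):
    """SEMF binding energy in MeV: volume, surface, Coulomb, asymmetry, pairing terms."""
    pairing = 0.0
    if A % 2 == 0:
        pairing = aP / A**0.5 if Z % 2 == 0 else -aP / A**0.5
    return (aV * A - aS * A**(2/3) - aC * Z * (Z - 1) / A**(1/3)
            - aA * (A - 2 * Z)**2 / A + pairing)

Q = binding_energy(93, 37) + binding_energy(140, 55) - binding_energy(235, 92)
print(f"Q approximately {Q:.0f} MeV")
```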




quantum mechanics - Rigorous Mathematical Proof of the Uncertainty Principle from First Principles



While looking at an intuitive explanation for the Heisenberg Uncertainty Principle (related question below), there was a mention of an axiomatic approach to establishing the uncertainty principle. Could someone please point out a source with detailed steps and explanations from first principles?


Related Question


Can the Heisenberg Uncertainty Principle be explained intuitively?


Some (re)-search will reveal the below proof (and others) which are perhaps not immediately grasped by people unfamiliar with certain terminology / concepts.


Related Proof


Heisenberg Uncertainty Principle scientific proof


In the above answer, it is not clear





  1. How the product of vectors, $PQ$, is decomposed into real and imaginary parts?




  2. How the expected value of $PQ$ squared is the square of the imaginary and real parts separately?




  3. How both square things are positive since a complex portion is involved?





  4. How square things being positive means that the left hand side is bigger than one quarter the square of the commutator?




  5. Why the commutator is unchanged by the shifting $[P,Q]=[p,x]=\hbar$?




Please note, I was a decent physics student (perhaps not, but am still deeply interested) who has wandered off into the social sciences for graduate studies. Hence, I am a bit rusty on the notation and terminology. Any pointers to brush up the concepts and fill the gaps in existing explanations would be much appreciated. I understand my questions might seem very trivial or obvious to experts, hence please pardon my ignorance of any basic concepts.




Answer



1) It is a product of operators. And they are not so much decomposed into real and imaginary parts, but rather into self-adjoint and antiself-adjoint parts.



If we take self-adjoint $A,B$ linear operators on some suitable Hilbert space, it is clear that $$ AB=\frac{1}{2}(AB+BA)+\frac{1}{2}(AB-BA), $$ since $$ \frac{1}{2}(AB+BA)+\frac{1}{2}(AB-BA)=\frac{1}{2}AB+\frac{1}{2}BA+\frac{1}{2}AB-\frac{1}{2}BA=2\frac{1}{2}AB=AB. $$ Now, since $A$ and $B$ are self-adjoint, $$ (AB+BA)^\dagger=B^\dagger A^\dagger+A^\dagger B^\dagger=BA+AB=AB+BA, $$ so this "anticommutator", $AB+BA$, is self-adjoint if $A$ and $B$ are.


However, if we look at the commutator, $AB-BA$, $$ (AB-BA)^\dagger=B^\dagger A^\dagger-A^\dagger B^\dagger=BA-AB=-(AB-BA), $$ the commutator of self-adjoint operators $A,B$ is antiself-adjoint.


Now, the reason he called them "real" and "imaginary", is because in the space of all linear operators of a unitary vector space, self-adjoint operators are analogous to real numbers within the complex number field, and antiself-adjoint operators are analogous to imaginary numbers.


2, and the rest) First we should note that we take expectation values with respect to quantum states. If our particle is in the state $|\psi\rangle$, then the expectation value of $A$ with respect to the state $|\psi\rangle$ is $\langle A\rangle_\psi=\langle\psi|A|\psi\rangle$, from which we can see that the expectation value is linear.


Now then, $$ \langle AB\rangle=\left\langle\frac{1}{2}(AB+BA)+\frac{1}{2}(AB-BA)\right\rangle=\frac{1}{2}\langle AB+BA\rangle+\frac{1}{2}\langle AB-BA\rangle .$$


We should note that the expectation value of an operator is related to its eigenvalues. The expectation value of a self-adjoint operator is real, because its eigenvalues are real, and the expectation value of an antiself-adjoint operator is imaginary, because the eigenvalues are imaginary. Also, because the commutator $[A,B]=AB-BA$ is antiself-adjoint, there exists a self-adjoint operator $C$, for which $[A,B]=iC$, since $iC$ is then antiself-adjoint ($C$ is self-adjoint, but the $i$ swaps sign).


Note now, that the post you were quoting was wrong in the sense we do not take the square of the expectation value, but the square of the absolute value of the expectation value.


But since $\langle AB-BA\rangle=\langle[A,B]\rangle=\langle iC\rangle=i\langle C\rangle$, and then we take the absolute value square of $\langle AB\rangle$: $$ |\langle AB\rangle|^2=\frac{1}{4}\langle AB+BA\rangle^2+\frac{1}{4}\langle C\rangle^2,$$ but then $$ |\langle AB\rangle|^2\ge \frac{1}{4}\langle C\rangle^2 ,$$ and everything here is less than $(\Delta A)^2(\Delta B)^2$, which means that $$(\Delta A)(\Delta B)\ge\frac{1}{2}\langle C\rangle,$$ but for $x$ and $p$, $C=-i[x,p]=-ii\hbar\mathbb{I}=\hbar\mathbb{I},$ so $\frac{1}{2}\langle C\rangle=\hbar/2$.
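For readers who prefer to see the final inequality hold numerically rather than symbolically, here is a small sketch of my own (with $\hbar=1$) using a truncated harmonic-oscillator matrix representation of $x$ and $p$ and an arbitrary test state; the bound $(\Delta x)(\Delta p)\geq 1/2$ comes out as derived above.

```python
import numpy as np

# Truncated harmonic-oscillator ladder operator (N levels); hbar = 1
N = 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)        # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)               # position operator
p = 1j * (a.conj().T - a) / np.sqrt(2)          # momentum operator

# An arbitrary normalized test state built from low-lying number states
psi = np.zeros(N, dtype=complex)
psi[0], psi[3], psi[7] = 1.0, 0.6 + 0.2j, -0.4j
psi /= np.linalg.norm(psi)

def spread(A, state):
    """Standard deviation of the observable A in the given state."""
    mean = np.vdot(state, A @ state).real
    mean_sq = np.vdot(state, A @ A @ state).real
    return np.sqrt(mean_sq - mean**2)

lhs = spread(x, psi) * spread(p, psi)
print(lhs, ">=", 0.5)            # Heisenberg bound: (Delta x)(Delta p) >= hbar/2
assert lhs >= 0.5 - 1e-9
```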


Do note, that the proof you linked is slightly wrong, based on my first glance, or it used some implicit algebraic manipulations I find nontrivial, but the general line of thought is the same.


Wednesday 24 October 2018

mass - Why is the prospective new kilogram standard a sphere?


I can understand the choice of material, silicon 28, but why is it a sphere rather than (say) a cube? Article here


I would have thought that a sphere would have been the hardest shape to machine accurately.



Answer



If you know the diameter of the sphere, you know everything you need to know about the dimensions. It all comes down to one single value.


Any other shape requires multiple dimensions and thus multiple values. Further, measuring a cube or another shape for accuracy is harder than measuring a sphere.


Making very accurate spheres is not as difficult as you might think - it's no different than making optical glass or mirrors using grinding techniques, and, in fact, they are measured much the same way with lasers for very high accuracy.


This video goes into a little more detail as to why they are doing this, how they achieved it, and how the sphere is made.



string theory - Since when were Loop Quantum Gravity (LQG) and Einstein-Cartan (EC) theories experimentally proven?


Can this template at Wikipedia be true? It seems to suggest that Einstein-Cartan theory, Gauge theory gravity, Teleparallelism and Euclidean Quantum Gravity are fully compatible with observation!



It also suggests that Loop Quantum Gravity and BEC Vacuum Theory among others, are experimentally constrained whereas string theory/M theory are disputed!


What I understand by "Fully compatible with observation" is that all its predictions are confirmed by experiments and it has been found to be more accurate than General Relativity. Has such evidence really been found? Or am I misinterpreting "Fully compatible with observation"? Maybe it means it has been tested only where it reduces to General Relativity? But if that were the case, shouldn't M-theory/string theory also be listed under "Fully Compatible", since their predictions also reduce to classical General Relativity in the low-energy, classical limit, if all other forces (other than gravity?) are gotten rid of?


What I understand by "Experimentally constrained" is that it is true given certain modifications. However, as far as I know, Loop Quantum Gravity violates Lorentz symmetry and has thus been experimentally "excluded" while BEC Vacuum theory isn't even mainstream?


What I understand by "Developmental/Disputed" is that it is still undergoing development OR it has almost been experimentally proven wrong but it is still not settled in mainstream physics. If LQG doesn't go to the excluded section, it should at least come here? Since the violation of Lorentz symmetry has been disproven according to this.


So my question is "Is this template really reliable?"



Answer



"Fully compatible with observations" is a rather vague statement. Actually, two aspects of adequacy to reality have to be distinguished when a new theory reaches a degree of explicitation. These are




  • compatibility with older theories, in domains where the new theory is not supposed to bring more than a new formulation. For instance, special relativity is compatible with Newtonian mechanics when velocities are small compared with $c$. Since the older theories taken as reference have usually been thoroughly tested (otherwise you don't take them as reference), this is a good first check for your new theory.





  • compatibility with new phenomena. Indeed what makes a new theory interesting is the change of insight that it might bring on reality. And this means that beyond proposing a new description of reality, it shall predict new observable features which older theories don't account for.




As far as LQG is concerned, my understanding is that the first aspect has been addressed in the sense that, right from the outset, compatibility with GR has been used as a guide to develop the theory. As for the second aspect, this is one of the topics on which a good part of the efforts of the LQG community is focused. This means finding new observable features that survive going from the Planck scale to the scales that are accessible to us in experiments or astrophysical observations. It's tricky but not impossible.


So as far as the statement "fully compatible with observations" goes, I would advise replacing it with "compatible with previous observation-tested theories, but still awaiting genuine experimental predictions for testing".


Tuesday 23 October 2018

explosions - Sedov-Taylor blastwave resources


What are some resources for learning about blast waves? Particularly the Sedov-Taylor solution? The specific application is modeling supernovae.
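While collecting references, it may help to note that the core of the Sedov-Taylor solution is the self-similar scaling $R(t)\simeq\xi\,(E t^2/\rho)^{1/5}$, with $\xi\approx 1.15$ for $\gamma=5/3$. A minimal sketch with illustrative supernova-remnant numbers (all values below are my assumptions, not from the question):

```python
E = 1e51                 # explosion energy [erg] (illustrative)
rho = 1.67e-24           # ambient density, ~1 hydrogen atom per cm^3 [g/cm^3]
xi = 1.15                # dimensionless constant, ~1.15 for gamma = 5/3

yr = 3.156e7             # seconds per year
pc = 3.086e18            # centimetres per parsec

# Sedov-Taylor shock radius R(t) = xi * (E * t**2 / rho)**(1/5)
for t_yr in (100, 1000, 10000):
    R = xi * (E * (t_yr * yr) ** 2 / rho) ** 0.2
    print(f"t = {t_yr:6d} yr   R ~ {R / pc:5.1f} pc")
```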




computational physics - Is there a normalized form of the Euler equation discretized with finite volumes?



I want to calculate a flux on my fpga using the Euler equations with the finite volume method. Unfortunately the values of the state variables differ a lot. For example the pressure has a value of 100000 and the density 1.16. This makes it complicated to calculate on the FPGA. Now I'm wondering if there is a normalized form for the Euler equations with finite volumes, so that the values of the state variables are in the same range. I've tried to set them all to one, but my simulation crashed. I think that's not possible because of the non-linearity of the equations.
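One standard way to get all state variables into the same range is to non-dimensionalise before the data reach the solver: choose reference values $\rho_{\mathrm{ref}}$ and $p_{\mathrm{ref}}$ and build the velocity scale $u_{\mathrm{ref}}=\sqrt{p_{\mathrm{ref}}/\rho_{\mathrm{ref}}}$. With this choice the Euler equations keep exactly the same form (no new parameters appear), so the flux kernel can work entirely with order-one numbers. A minimal sketch, with the reference values taken as assumptions from the numbers quoted above:

```python
import numpy as np

# Reference values (assumptions, here simply the ambient state quoted in the question)
rho_ref = 1.16                        # kg/m^3
p_ref = 1.0e5                         # Pa
u_ref = np.sqrt(p_ref / rho_ref)      # velocity scale built from the two above

# A dimensional state: density, velocity, pressure
rho, u, p = 1.16, 50.0, 1.0e5

# Non-dimensional state: every component is now of order one
rho_nd = rho / rho_ref
u_nd = u / u_ref
p_nd = p / p_ref
print(rho_nd, u_nd, p_nd)

# Because u_ref**2 = p_ref / rho_ref, the Euler equations in the scaled variables
# have exactly the same form, so the solver runs on (rho_nd, u_nd, p_nd) and the
# physical values are recovered afterwards, e.g. p = p_nd * p_ref.
```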




special relativity - Kleppner derivation of Lorentz transformation


I am reading Kleppner (Lorentz transformations). He said we take the most general transformation relating the coordinates of a given event in the two systems to be of the form $$x'=Ax +Bt, y'=y, z'=z, t'=Cx +Dt,$$ and then he found the constants by considering four cases in which we know a priori how an event appears in the two systems.


But why are the transformations linear? He said a nonlinear transformation would predict acceleration in one system even if the velocity were constant in the other. But I think that's exactly what happens when we consider the Lorentz force (without an electric field): if $v$ (the velocity of a charged particle) is zero in one inertial frame, then there is no Lorentz force on the particle (hence the particle has no acceleration in that frame), but this may not be the case in other inertial frames (where the velocity of the charged particle is not zero). What's wrong here?




thermodynamics - Entropy of the Sun



  • Is it possible to measure or calculate the total entropy of the Sun?

  • Assuming it changes over time, what are its current first and second derivatives w.r.t. time?

  • What is our prediction on its asymptotic behavior (barring possible collisions with other bodies)?




electromagnetism - Could there be equivalence between anisotropic space and the presence of a field?



What if we are so used to the curvature of space caused by mass and the range of its effects that we totally ignore the possibility of the existence of "opposite" curvature$^1$, i.e. objects that bend space opposite to mass and cause repulsion by creating local space anisotropy?


What if we are so used to space curvature caused by mass that we invent a force that has no charge to explain possible effects of opposite curvature (repelling force - diamagnetism)?


What if the oscillating electric field has the ability to cause tiny local (anisotropy) fluctuations in space and its effects on objects are interpreted as "magnetic field"?


Could magnets be just an example of objects with anomalous gravitational fields due to their ability to distort the isotropy of space?


I guess my question boils down to:



How do we know that the force at the poles of a magnet is not gravitational (large but local curvature caused by anisotropy) with opposite signs, rather than what we call magnetic?


Can space curvature arise from something else different than mass?


What would change on the r.h.s of the Einstein's equation (components of stress-energy tensor) if we assume a connection with torsion?




1. Same size of the volume element $dV$, but with stretch in one element, say $dx$ and proportional compression in the other two $dy$, $dz$.




homework and exercises - Isn't friction on an incline the coefficient of friction times the normal reaction force?


(figure showing the inclined planes from A to D omitted)



Question - "As shown in figure a body of mass 1 kg is shifted from A to D on inclined planes by applying a force slowly such that the block is always in contact with the plane surfaces. Neglecting the jerks experienced at C and B, what is the total work done by the force?"




Given - $\mu_{AB} = 0.1$, $\mu_{BC} = 0.2$, $\mu_{CD} = 0.4$


My approach was to simply calculate the frictional forces by using $\mu mg\cos\theta$ and multiplying them by their respective distances covered in each part. After that, I calculated the gain in potential energy.


But when I check the solutions to the problem, it is stated that the work done by friction is $\mu mgl$ in each case.


Shouldn't the frictional force be $\mu R$, where we then substitute the normal reaction force $R$ as $mg\cos\theta$?




soft question - What strategies can a researcher use when confronted with a long and complicated symbolic expression?


When doing research in theoretical physics, a frequent task one encounters is trying to express some physical quantity as a function of other quantities. A lot of times this can't be done analytically, but even when it can - it sometimes results in very long and complicated symbolic expressions.


Although technically such an expression is a "solution", it is not of much use for a researcher that wants to gain physical insight from the solution, and maybe rely on it to do more research.



What are some general strategies a researcher can use to gain insight when confronted with such expressions?


For starters - here are some ideas I use:



  • Check certain limits of the expression, i.e. when one of the variables is very low or very high - these are often simpler and can shed light on the behavior in the general case.

  • Look for a recurring pattern in the expression and give it a name. The newly defined variable usually has some physical significance in itself, and also when all the occurrences of the pattern are replaced with the new variable the whole expression becomes simpler.

  • Substitute some of the variables with reasonable numerical estimates, and plot the expression as a function of the rest of the variables.



Answer



In addition to what you have listed:





  1. Use the Buckingham Pi theorem to create as many non-dimensional numbers as possible from combinations of dimensional numbers. This simplifies the expression but also allows you to reduce the number of variations on variables that need studied.




  2. Non-dimensionalize all of your variables using suitable reference measures for the problem you are studying. Use this to do an order-of-magnitude assessment of terms to decide if some terms are insignificant under certain conditions.




  3. Perform a perturbation analysis. For example, if you have an equation for a wave, $\psi$, substitute in $\psi = \overline{\psi}+\psi'$ where $\overline{\psi}$ is the average wave value (in time, space or any of your other independent variables) and $\psi'$ is a disturbance on that mean. Then expand all terms and collect the means together and the perturbations together. Do a lot of manipulation and you'll get an expression for the mean behavior and the response to disturbances. This isn't always "simpler" but gives tremendous insight. You could then substitute in simple functions, like trigonometric functions, for that disturbance and study how it grows or shrinks and under what conditions.





  4. Learn what types of terms do what. Which terms transport or convect your variable? Which terms produce or dissipate your variable? These types of terms usually have typical forms in terms of derivatives. You can group terms together to come up with a simple conservation equation (time change = transport + production - dissipation) where all those terms can be lumped together. You can then attempt models for each individual group of terms.




Of course, all of these can be combined with one another to do all sorts of complex study.
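As a small computer-algebra illustration of the limit-checking and numerical-substitution strategies mentioned in the question and above (the expression below is made up purely for illustration):

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)

# A made-up "complicated" result of some calculation (purely illustrative)
expr = (sp.sqrt(a**2 + x**2) - a) / (sp.sqrt(b**2 + x**2) - b)

# Strategy: check limiting cases of one variable
print(sp.limit(expr, x, 0))        # small-x behaviour -> b/a
print(sp.limit(expr, x, sp.oo))    # large-x behaviour -> 1

# Strategy: substitute numerical estimates for some variables, then evaluate/plot the rest
f = sp.lambdify(x, expr.subs({a: 2.0, b: 5.0}), "numpy")
print(f(0.1), f(100.0))            # close to b/a = 2.5 and to 1, respectively
```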


Monday 22 October 2018

quantum field theory - How do I construct the $SU(2)$ representation of the Lorentz Group using $SU(2)\times SU(2)\sim SO(3,1)$?


This question is based on problem II.3.1 in Anthony Zee's book Quantum Field Theory in a Nutshell



Show, by explicit calculation, that $(1/2,1/2)$ is the Lorentz Vector.



I see that the generators of $SU(2)$ are the Pauli matrices, and the generators of $SO(3,1)$ form a matrix composed of two Pauli matrices along the diagonal. Is it always the case that the direct product of two groups is formed from the generators like this?



I ask this because I'm trying to write a Lorentz boost as two simultaneous quaternion rotations [unit quaternion rotations are isomorphic to $SU(2)$] and transform between the two methods. Is this possible?


In other words, how do I construct the $SU(2)$ representation of the Lorentz group using the fact that $SU(2)\times SU(2) \sim SO(3,1)$?


Here is some background information:


Zee has shown that the algebra of the Lorentz group is formed from two separate $SU(2)$ algebras [$SO(3,1)$ is isomorphic to $SU(2)\times SU(2)$] because the Lorentz algebra satisfies:


$$[J_{+i},J_{+j}] = i\epsilon_{ijk}J_{+k}, \qquad [J_{-i},J_{-j}] = i\epsilon_{ijk} J_{-k}, \qquad [J_{+i},J_{-j}] = 0$$


The representations of $SU(2)$ are labeled by $j=0,\frac{1}{2},1,\ldots$, so the $SU(2)\times SU(2)$ rep is labelled by $(j_+,j_-)$, with $(1/2,1/2)$ being the Lorentz 4-vector: each $SU(2)$ representation contains $(2j+1)$ elements, so $(1/2,1/2)$ contains 4 elements.
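Before the formal answer below, here is a brute-force numerical check of this claim in the 4-dimensional vector representation itself (my own sketch, using the convention $(J^{\mu\nu})^{\rho}{}_{\sigma}=i(\eta^{\mu\rho}\delta^{\nu}_{\sigma}-\eta^{\nu\rho}\delta^{\mu}_{\sigma})$): the combinations $J_{\pm i}=\tfrac12(J_i\pm iK_i)$ commute with each other, each set closes on an $su(2)$ algebra, and both Casimirs equal $\tfrac12(\tfrac12+1)=\tfrac34$ on every 4-vector, which is the statement that the vector is the $(1/2,1/2)$ representation.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def J(mu, nu):
    """Vector-representation Lorentz generator (J^{mu nu})^rho_sigma."""
    M = np.zeros((4, 4), dtype=complex)
    for rho in range(4):
        for sig in range(4):
            M[rho, sig] = 1j * (eta[mu, rho] * (nu == sig) - eta[nu, rho] * (mu == sig))
    return M

Jrot = [J(2, 3), J(3, 1), J(1, 2)]          # rotations J_i = (1/2) eps_ijk J^{jk}
K = [J(0, 1), J(0, 2), J(0, 3)]             # boosts K_i = J^{0i}

Jp = [0.5 * (Jrot[i] + 1j * K[i]) for i in range(3)]
Jm = [0.5 * (Jrot[i] - 1j * K[i]) for i in range(3)]

def comm(A, B):
    return A @ B - B @ A

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for i in range(3):
    for j in range(3):
        assert np.allclose(comm(Jp[i], Jm[j]), 0)     # the two su(2) algebras commute
        assert np.allclose(comm(Jp[i], Jp[j]),
                           sum(1j * eps[i, j, k] * Jp[k] for k in range(3)))
        assert np.allclose(comm(Jm[i], Jm[j]),
                           sum(1j * eps[i, j, k] * Jm[k] for k in range(3)))

Cp = sum(Jp[i] @ Jp[i] for i in range(3))   # Casimir of the "+" su(2)
Cm = sum(Jm[i] @ Jm[i] for i in range(3))   # Casimir of the "-" su(2)
print(np.allclose(Cp, 0.75 * np.eye(4)), np.allclose(Cm, 0.75 * np.eye(4)))   # both True: j_+ = j_- = 1/2
```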



Answer



Here is a mathematical derivation. We use the sign convention $(+,-,-,-)$ for the Minkowski metric $\eta_{\mu\nu}$.


I) First recall the fact that




$SL(2,\mathbb{C})$ is (the double cover of) the restricted Lorentz group $SO^+(1,3;\mathbb{R})$.



This follows partly because:




  1. There is a bijective isometry from the Minkowski space $(\mathbb{R}^{1,3},||\cdot||^2)$ to the space of $2\times2 $ Hermitian matrices $(u(2),\det(\cdot))$, $$\mathbb{R}^{1,3} ~\cong ~ u(2) ~:=~\{\sigma\in {\rm Mat}_{2\times 2}(\mathbb{C}) \mid \sigma^{\dagger}=\sigma \} ~=~ {\rm span}_{\mathbb{R}} \{\sigma_{\mu} \mid \mu=0,1,2,3\}, $$ $$\mathbb{R}^{1,3}~\ni~\tilde{x}~=~(x^0,x^1,x^2,x^3) \quad\mapsto \quad\sigma~=~x^{\mu}\sigma_{\mu}~\in~ u(2), $$ $$ ||\tilde{x}||^2 ~=~x^{\mu} \eta_{\mu\nu}x^{\nu} ~=~\det(\sigma), \qquad \sigma_{0}~:=~{\bf 1}_{2 \times 2}.\tag{1}$$




  2. There is a group action $\rho: SL(2,\mathbb{C})\times u(2) \to u(2)$ given by $$g\quad \mapsto\quad\rho(g)\sigma~:= ~g\sigma g^{\dagger}, \qquad g\in SL(2,\mathbb{C}),\qquad\sigma\in u(2), \tag{2}$$ which is length preserving, i.e. $g$ is a pseudo-orthogonal (or Lorentz) transformation. In other words, there is a Lie group homomorphism
    $$\rho: SL(2,\mathbb{C}) \quad\to\quad O(u(2),\mathbb{R})~\cong~ O(1,3;\mathbb{R}) .\tag{3}$$





  3. Since $\rho$ is a continuous map and $SL(2,\mathbb{C})$ is a connected set, the image $\rho(SL(2,\mathbb{C}))$ must again be a connected set. In fact, one may show that there is a surjective Lie group homomorphism$^1$
    $$\rho: SL(2,\mathbb{C}) \quad\to\quad SO^+(u(2),\mathbb{R})~\cong~ SO^+(1,3;\mathbb{R}) , $$ $$\rho(\pm {\bf 1}_{2 \times 2})~=~{\bf 1}_{u(2)}.\tag{4}$$




  4. The Lie group $SL(2,\mathbb{C})=\pm e^{sl(2,\mathbb{C})}$ has Lie algebra $$ sl(2,\mathbb{C}) ~=~ \{\tau\in{\rm Mat}_{2\times 2}(\mathbb{C}) \mid {\rm tr}(\tau)~=~0 \} ~=~{\rm span}_{\mathbb{C}} \{\sigma_{i} \mid i=1,2,3\}.\tag{5}$$




  5. The Lie group homomorphism $\rho: SL(2,\mathbb{C}) \to O(u(2),\mathbb{R})$ induces a Lie algebra homomorphism $$\rho: sl(2,\mathbb{C})\to o(u(2),\mathbb{R})\tag{6}$$ given by $$ \rho(\tau)\sigma ~=~ \tau \sigma +\sigma \tau^{\dagger}, \qquad \tau\in sl(2,\mathbb{C}),\qquad\sigma\in u(2), $$ $$ \rho(\tau) ~=~ L_{\tau} +R_{\tau^{\dagger}},\tag{7}$$ where we have defined left and right multiplication of $2\times 2$ matrices $$L_{\sigma}(\tau)~:=~\sigma \tau~=:~ R_{\tau}(\sigma), \qquad \sigma,\tau ~\in~ {\rm Mat}_{2\times 2}(\mathbb{C}).\tag{8}$$





II) Note that the Lorentz Lie algebra $so(1,3;\mathbb{R}) \cong sl(2,\mathbb{C})$ does not$^2$ contain two perpendicular copies of, say, the real Lie algebra $su(2)$ or $sl(2,\mathbb{R})$. For comparison and completeness, let us mention that for other signatures in $4$ dimensions, one has


$$SO(4;\mathbb{R})~\cong~[SU(2)\times SU(2)]/\mathbb{Z}_2, \qquad\text{(compact form)}\tag{9}$$


$$SO^+(2,2;\mathbb{R})~\cong~[SL(2,\mathbb{R})\times SL(2,\mathbb{R})]/\mathbb{Z}_2.\qquad\text{(split form)}\tag{10}$$


The compact form (9) has a nice proof using quaternions


$$(\mathbb{R}^4,||\cdot||^2) ~\cong~ (\mathbb{H},|\cdot|^2)\quad\text{and}\quad SU(2)~\cong~ U(1,\mathbb{H}),\tag{11}$$


see also this Math.SE post and this Phys.SE post. The split form (10) uses a bijective isometry


$$(\mathbb{R}^{2,2},||\cdot||^2) ~\cong~({\rm Mat}_{2\times 2}(\mathbb{R}),\det(\cdot)).\tag{12}$$


To decompose Minkowski space into left- and right-handed Weyl spinor representations, one must go to the complexification, i.e. one must use the fact that




$SL(2,\mathbb{C})\times SL(2,\mathbb{C})$ is (the double cover of) the complexified proper Lorentz group $SO(1,3;\mathbb{C})$.



Note that Refs. 1-2 do not discuss complexification$^2$. One can more or less repeat the construction from section I with the real numbers $\mathbb{R}$ replaced by complex numbers $\mathbb{C}$, however with some important caveats.




  1. There is a bijective isometry from the complexified Minkowski space $(\mathbb{C}^{1,3},||\cdot||^2)$ to the space of $2\times2 $ matrices $({\rm Mat}_{2\times 2}(\mathbb{C}),\det(\cdot))$, $$\mathbb{C}^{1,3} ~\cong ~ {\rm Mat}_{2\times 2}(\mathbb{C}) ~=~ {\rm span}_{\mathbb{C}} \{\sigma_{\mu} \mid \mu=0,1,2,3\}, $$ $$ M(1,3;\mathbb{C})~\ni~\tilde{x}~=~(x^0,x^1,x^2,x^3) \quad\mapsto \quad\sigma~=~x^{\mu}\sigma_{\mu}~\in~ {\rm Mat}_{2\times 2}(\mathbb{C}) , $$ $$ ||\tilde{x}||^2 ~=~x^{\mu} \eta_{\mu\nu}x^{\nu} ~=~\det(\sigma).\tag{13}$$ Note that forms are taken to be bilinear rather than sesquilinear.




  2. There is a surjective Lie group homomorphism$^3$

    $$\rho: SL(2,\mathbb{C}) \times SL(2,\mathbb{C}) \quad\to\quad SO({\rm Mat}_{2\times 2}(\mathbb{C}),\mathbb{C})~\cong~ SO(1,3;\mathbb{C})\tag{14}$$ given by $$(g_L, g_R)\quad \mapsto\quad\rho(g_L, g_R)\sigma~:= ~g_L\sigma g^{\dagger}_R, $$ $$ g_L, g_R\in SL(2,\mathbb{C}),\qquad\sigma~\in~ {\rm Mat}_{2\times 2}(\mathbb{C}).\tag{15} $$




  3. The Lie group $SL(2,\mathbb{C})\times SL(2,\mathbb{C})$ has Lie algebra $sl(2,\mathbb{C})\oplus sl(2,\mathbb{C})$.




  4. The Lie group homomorphism
    $$\rho: SL(2,\mathbb{C})\times SL(2,\mathbb{C}) \quad\to\quad SO({\rm Mat}_{2\times 2}(\mathbb{C}),\mathbb{C})\tag{16}$$ induces a Lie algebra homomorphism $$\rho: sl(2,\mathbb{C})\oplus sl(2,\mathbb{C})\quad\to\quad so({\rm Mat}_{2\times 2}(\mathbb{C}),\mathbb{C})\tag{17}$$ given by $$ \rho(\tau_L\oplus\tau_R)\sigma ~=~ \tau_L \sigma +\sigma \tau^{\dagger}_R, \qquad \tau_L,\tau_R\in sl(2,\mathbb{C}),\qquad \sigma\in {\rm Mat}_{2\times 2}(\mathbb{C}), $$ $$ \rho(\tau_L\oplus\tau_R) ~=~ L_{\tau_L} +R_{\tau^{\dagger}_R}.\tag{18}$$
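
Again as an illustration of my own (not part of the original answer), the complexified action (15) can be checked numerically: two independent $g_L, g_R\in SL(2,\mathbb{C})$ acting as $\sigma\mapsto g_L\sigma g_R^{\dagger}$ preserve the bilinear determinant of $\sigma=x^{\mu}\sigma_{\mu}$ even for a complex 4-vector $x$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def random_sl2c():
    g = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    return g / np.sqrt(np.linalg.det(g))          # rescale so that det(g) = 1

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # complexified 4-vector
S = sum(x[mu] * sigma[mu] for mu in range(4))
gL, gR = random_sl2c(), random_sl2c()

# det(sigma) equals the *bilinear* Minkowski norm, and it is preserved by (g_L, g_R)
print(np.allclose(np.linalg.det(S), x[0]**2 - x[1]**2 - x[2]**2 - x[3]**2))
print(np.allclose(np.linalg.det(gL @ S @ gR.conj().T), np.linalg.det(S)))
```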





The left action (acting from left on a two-dimensional complex column vector) yields by definition the (left-handed Weyl) spinor representation $(\frac{1}{2},0)$, while the right action (acting from right on a two-dimensional complex row vector) yields by definition the right-handed Weyl/complex conjugate spinor representation $(0,\frac{1}{2})$. The above shows that



The complexified Minkowski space $\mathbb{C}^{1,3}$ is a $(\frac{1}{2},\frac{1}{2})$ representation of the Lie group $SL(2,\mathbb{C}) \times SL(2,\mathbb{C})$, whose action respects the Minkowski metric.



References:




  1. Anthony Zee, Quantum Field Theory in a Nutshell, 1st edition, 2003.





  2. Anthony Zee, Quantum Field Theory in a Nutshell, 2nd edition, 2010.






$^1$ It is easy to check that it is not possible to describe discrete Lorentz transformations, such as, e.g. parity $P$, time-reversal $T$, or $PT$ with a group element $g\in GL(2,\mathbb{C})$ and formula (2).


$^2$ For a laugh, check out the (in several ways) wrong second sentence on p. 113 in Ref. 1: "The mathematically sophisticated say that the algebra $SO(3,1)$ is isomorphic to $SU(2)\otimes SU(2)$." The corrected statement would e.g. be "The mathematically sophisticated say that the group $SO(3,1;\mathbb{C})$ is locally isomorphic to $SL(2,\mathbb{C})\times SL(2,\mathbb{C})$." Nevertheless, let me rush to add that Zee's book is overall a very nice book. In Ref. 2, the above sentence is removed, and a subsection called "More on $SO(4)$, $SO(3,1)$, and $SO(2,2)$" is added on pages 531-532.


$^3$ It is not possible to mimic an improper Lorentz transformation $\Lambda\in O(1,3;\mathbb{C})$ [i.e. with negative determinant $\det (\Lambda)=-1$] with the help of two matrices $g_L, g_R\in GL(2,\mathbb{C})$ in formula (15), such as, e.g., the spatial parity transformation $$P:~~(x^0,x^1,x^2,x^3) ~\mapsto~ (x^0,-x^1,-x^2,-x^3).\tag{19}$$ Similarly, the Weyl spinor representations are representations of (the double cover of) $SO(1,3;\mathbb{C})$ but not of (the double cover of) $O(1,3;\mathbb{C})$. E.g. the spatial parity transformation (19) intertwines between the left-handed and right-handed Weyl spinor representations.


homework and exercises - Knowing the mass and force acting on a particle, how do we derive the relativistic function for velocity with respect to time?


Use this scenario:


An electron gains speed in the Stanford Linear Accelerator (SLA) across 3000 meters, reaching a final velocity of 0.95c due to a constant force pushing on the electron. Given the mass of the electron and the constant force pushing on the electron, what is the function for velocity in terms of time?



How do we derive this?




homework and exercises - Need help with solution of the Dirac equation


$$\left(\vec\sigma \cdot \vec{p} \right)^2=\left(\vec\sigma \cdot \vec{p}\right) \left(\vec\sigma \cdot \vec{p} \right)=\vec{p} \cdot \vec{p}+\mathrm{i}\left(\vec\sigma \cdot \left[ \vec{p} \times \vec{p} \right] \right)=p^2$$ Where does this $\left(\vec\sigma \cdot \left[ \vec{p} \times \vec{p} \right] \right)$ term come from? Because isn't $\sigma^2=\mathbb{I}$? It would be a great help if someone could point me in the right direction.


It does not explain anything from here.



Answer



In general : $$ \left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{a}\right)\left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{b}\right)= \left(\mathbf{a}\boldsymbol{\cdot}\mathbf{b}\right)\mathrm{I}+i\left[\boldsymbol{\sigma}\boldsymbol{\cdot}\left(\mathbf{a}\boldsymbol{\times}\mathbf{b}\right)\right] \tag{01} $$ since


\begin{align} \left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{a}\right)\left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{b}\right)&= \left(a_{1}\sigma_{1}+a_{2}\sigma_{2}+a_{3}\sigma_{3}\right) \left(b_{1}\sigma_{1}+b_{2}\sigma_{2}+b_{3}\sigma_{3}\right)\\ & = a_{1}b_{1}\sigma_{1}^{2}+a_{2}b_{2}\sigma_{2}^{2}+a_{3}b_{3}\sigma_{3}^{2}+\\ & \quad \:\: \left(a_{2}b_{3}-a_{3}b_{2}\right)\sigma_{2}\sigma_{3}+\left(a_{3}b_{1}-a_{1}b_{3}\right)\sigma_{3}\sigma_{1}+ \left(a_{1}b_{2}-a_{2}b_{1}\right)\sigma_{1}\sigma_{2}\\ & =\underbrace{\left(a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3}\right)\mathrm{I}}_{\sigma_{1}^{2}\boldsymbol{=}\sigma_{2}^{2}\boldsymbol{=}\sigma_{3}^{2}\boldsymbol{=}\mathrm{I}}+\\ &\quad \underbrace{i\Biggl(\begin{vmatrix}a_{2}&a_{3}\\b_{2}&b_{3}\end{vmatrix} \sigma_{1} + \begin{vmatrix}a_{3}&a_{1}\\b_{3}&b_{1}\end{vmatrix}\sigma_{2}+ \begin{vmatrix}a_{1}&a_{2}\\b_{1}&b_{2}\end{vmatrix}\sigma_{3}\Biggr)}_{ \sigma_{2}\sigma_{3}\boldsymbol{=}i\sigma_{1}\boldsymbol{=}\boldsymbol{-}\sigma_{3}\sigma_{2}\:,\: \sigma_{3}\sigma_{1}\boldsymbol{=}i\sigma_{2}\boldsymbol{=}\boldsymbol{-}\sigma_{1}\sigma_{3}\:,\: \sigma_{1}\sigma_{2} \boldsymbol{=}i\sigma_{3}\boldsymbol{=}\boldsymbol{-}\sigma_{2}\sigma_{1}}\\ &=\left(\mathbf{a}\boldsymbol{\cdot}\mathbf{b}\right)\mathrm{I}+i\left[\boldsymbol{\sigma}\boldsymbol{\cdot}\left(\mathbf{a}\boldsymbol{\times}\mathbf{b}\right)\right] \tag{02} \end{align}
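
Identity (01) is also easy to confirm numerically; the following sketch (my own addition, with random real vectors) checks it directly:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma_dot(v):
    # sigma . v for a 3-vector v
    return v[0] * s1 + v[1] * s2 + v[2] * s3

rng = np.random.default_rng(2)
a, b = rng.standard_normal(3), rng.standard_normal(3)

lhs = sigma_dot(a) @ sigma_dot(b)
rhs = np.dot(a, b) * np.eye(2) + 1j * sigma_dot(np.cross(a, b))
print(np.allclose(lhs, rhs))   # True: (sigma.a)(sigma.b) = (a.b) I + i sigma.(a x b)
```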





Now, equation (01) has an interpretation in the case where $\:\mathbf{a},\mathbf{b}\:$ are unit vectors. So write identity (01) for unit vectors \begin{equation} \left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{n}_{2}\right)\left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{n}_{1}\right)= \left(\mathbf{n}_{1}\boldsymbol{\cdot}\mathbf{n}_{2}\right)\mathrm{I}-i\left[\boldsymbol{\sigma}\boldsymbol{\cdot}\left(\mathbf{n}_{1}\boldsymbol{\times}\mathbf{n}_{2}\right)\right], \quad \text{where} \:\: \Vert\mathbf{n}_{1}\Vert=1=\Vert \mathbf{n}_{2}\Vert \tag{03} \end{equation} If the angle between $\:\mathbf{n}_{1},\mathbf{n}_{2}\:$ is $\:\phi\:$ and $\:\mathbf{n}\:$ is the unit vector normal to the plane of $\:\mathbf{n}_{1},\mathbf{n}_{2}\:$, \begin{align} \cos\phi & = \mathbf{n}_{1}\boldsymbol{\cdot}\mathbf{n}_{2} \tag{04a}\\ \mathbf{n} & = \dfrac{\mathbf{n}_{1}\boldsymbol{\times}\mathbf{n}_{2}}{\Vert\mathbf{n}_{1}\boldsymbol{\times}\mathbf{n}_{2}\Vert}= \dfrac{\mathbf{n}_{1}\boldsymbol{\times}\mathbf{n}_{2}}{\sin\phi} \tag{04b} \end{align} then the rhs of equation (03) can be expressed as \begin{equation} \mathrm{Q}= \left(\mathbf{n}_{1}\boldsymbol{\cdot}\mathbf{n}_{2}\right)\mathrm{I}-i\left[\boldsymbol{\sigma}\boldsymbol{\cdot}\left(\mathbf{n}_{1}\boldsymbol{\times}\mathbf{n}_{2}\right)\right]=\cos\left(\dfrac{2\phi}{2}\right)-i\sin\left(\dfrac{2\phi}{2}\right)\left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{n}\right) \tag{05} \end{equation} which is a special unitary matrix $\:\mathrm{Q} \in \mathrm{SU(2)}\:$ (a unit quaternion), representing a rotation around the axis $\:\mathbf{n}\:$ through an angle $\:\theta=2\phi$. Note that the matrix $\:-\mathrm{Q} \in \mathrm{SU(2)}$, written as \begin{equation} -\mathrm{Q}= \cos\left(\dfrac{2\pi+2\phi}{2}\right)-i\sin\left(\dfrac{2\pi+2\phi}{2}\right)\left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{n}\right) \tag{06} \end{equation} represents a rotation through $\:\theta'=2\pi+2\phi$, that is, the same rotation as $\:+\mathrm{Q}$.


Now, the special unitary matrix \begin{equation} \mathrm{R_\jmath}=-i\left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{n_\jmath}\right) =\cos\left(\dfrac{\pi}{2}\right)-i\sin\left(\dfrac{\pi}{2}\right)\left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{n_\jmath}\right) \tag{07} \end{equation} represents a rotation around the axis $\:\mathbf{n_\jmath}\:$ through an angle $\:\pi$, that is a reflection through the axis $\:\mathbf{n_\jmath}$.


So equation (03) is written as


\begin{equation} \bigl[-i\left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{n}_{2}\right)\bigr]\bigl[-i\left(\boldsymbol{\sigma}\boldsymbol{\cdot}\mathbf{n}_{1}\right)\bigr]= -\biggl[\left(\mathbf{n}_{1}\boldsymbol{\cdot}\mathbf{n}_{2}\right)\mathrm{I}-i\left[\boldsymbol{\sigma}\boldsymbol{\cdot}\left(\mathbf{n}_{1}\boldsymbol{\times}\mathbf{n}_{2}\right)\right]\biggr] \tag{08} \end{equation} or \begin{equation} \mathrm{R_2}\mathrm{R_1}=-\mathrm{Q} \tag{09} \end{equation} meaning that a reflection through an axis $\:\mathbf{n}_1\:$ followed by a reflection through a second axis $\:\mathbf{n}_2\:$ is a rotation around $\:\mathbf{n}_{1}\boldsymbol{\times}\mathbf{n}_{2}\:$ by an angle $\:2\phi$, where $\:\phi\:$ is the angle between $\:\mathbf{n}_{1},\mathbf{n}_{2}$, as shown in the Figure below.


[Figure: two successive reflections through axes $\mathbf{n}_1$ and $\mathbf{n}_2$ compose to a rotation by $2\phi$ about $\mathbf{n}_1\times\mathbf{n}_2$]
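
For completeness, here is a small numerical check (again my own sketch, with two random unit vectors) of equation (09), i.e. that the two reflections compose to minus the rotation $\mathrm{Q}$ by $2\phi$ about $\mathbf{n}_1\times\mathbf{n}_2$:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma_dot(v):
    return v[0] * s1 + v[1] * s2 + v[2] * s3

rng = np.random.default_rng(3)
n1 = rng.standard_normal(3); n1 /= np.linalg.norm(n1)
n2 = rng.standard_normal(3); n2 /= np.linalg.norm(n2)

phi = np.arccos(np.dot(n1, n2))
n = np.cross(n1, n2) / np.sin(phi)

R1, R2 = -1j * sigma_dot(n1), -1j * sigma_dot(n2)               # the two "reflections", eq. (07)
Q = np.cos(phi) * np.eye(2) - 1j * np.sin(phi) * sigma_dot(n)   # rotation by 2*phi, eq. (05)
print(np.allclose(R2 @ R1, -Q))   # True, reproducing eq. (09)
```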


black holes - What determines the outcome of a supernova?


Question: What determines the outcome of a supernova? When a star collapses in on itself creating a supernova, what decides whether it will leave a white dwarf, a neutron star, or a black hole? I've pondered the question myself, but I feel I need help from my peers.



Answer



Supernovae do not produce white dwarfs. This is a distinct evolutionary pathway followed by stars with mass below about 8 times the Sun ($8M_{\odot}$) - their cores are destined to become electron-degenerate white dwarfs with masses less than about $1.25 M_{\odot}$.


Neutron stars and perhaps black holes are produced in supernovae, involving the collapse of the core of a star that is more massive than $\simeq 8M_{\odot}$.


If the collapsing core is not particularly massive (resulting from a progenitor say of $<15M_{\odot}$, though the boundary is not certain, and also probably depends on the initial composition of the star), then it is likely that the collapse will be halted by the strong nuclear repulsion felt between neutrons (produced by electron capture onto protons) and to a lesser extent by neutron degeneracy pressure. This results in "core bounce", and subsequently a transfer of a tiny fraction of the collapse energy into the stellar envelope, causing a supernova.


If the proto-neutron star is too massive ($>2-3M_{\odot}$), or it accretes more mass, then it may further collapse into a black hole. Alternatively, the collapse may never be halted in the first place if the initial core was too massive ($>3M_{\odot}$) and there may be direct collapse to a black hole without a supernova at all.


The key parameter determining the fate of a massive star is the initial progenitor mass: the more massive the star, the more likely it is to form a black hole. Metallicity is also important. If a star is born from gas with a higher concentration of heavier elements, its envelope is more opaque and the progenitor is likely to lose more of its initial mass through a radiatively driven wind. Low-metallicity progenitors probably have more massive cores at the time of core collapse and are therefore more likely to form black holes.


The plot below (from work by Heger et al. 2003) illustrates the argument above and shows the likely remnant as a function of initial mass and metallicity. [Figure: compact remnant outcome as a function of initial progenitor mass and metallicity]


newtonian mechanics - Work done by static friction on a car


The tires of a car execute pure rolling. Therefore, the work done by friction on the tires (and hence the car) is zero. If no external work is done, how does a car's kinetic energy increase?



Answer



The increase in the car's kinetic energy comes from the internal energy of the car, stored, for example, in its gasoline or batteries.


The engine exerts torque on the wheels, which are prevented by friction from simply spinning in place. The reaction from the ground on the car's wheels is what makes it move faster.


Can we write a Lagrangian for the classical system with $H=kqp$?


Say we have the following Hamiltonian:


$$ H = kqp. $$


The equations of motion are easy to find:



$$ \dot{q} = kq \\ \dot{p} = -kp, $$


and to solve:


$$ q=q_0 e^{kt} \\ p=p_0 e^{-kt}. $$


I'm curious if we can write a meaningful Lagrangian for this system, and if not, why not?


The Hamiltonian and the Lagrangian are related by the Legendre transformation $$ H(q,p,t) = p\dot{q} - L(q,\dot{q},t), $$


so the Lagrangian should equal


$$ p\dot{q} - kqp $$


To get $L(q,\dot{q},t)$, we must write $p$ in terms of $q$ and $\dot{q}$. Using the explicit solutions above, there are two ways we can do this:


$$ p = p_0 q_0/q $$ or $$ p = p_0 q_0 k/\dot{q} $$


I suppose the most general form is: $$ p = p_0 q_0(\alpha/q + \beta k/\dot{q}) $$ with $\alpha+\beta=1$. Substituting $p$ with this expression, then removing constants and constant factors, we get:



$$ L \sim \alpha \dot{q}/q - \beta k^2 q/\dot{q}. $$


Now:


$$ \frac{\partial L}{\partial q} = -\frac{\beta k^2}{\dot{q}} - \frac{\alpha \dot{q}}{q^2} \\ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} = \frac{\beta k^2}{\dot{q}} - \frac{\alpha \dot{q}}{q^2} - \frac{2\beta k^2 q \ddot{q}}{\dot{q}^3} $$


So the Euler-Lagrange equation gives us: $$ \dot{q}^2 = q \ddot{q} $$ The solution here is $q=e^{wt}$ for all $w$, instead of just for $w=k$, disagreeing with the solution to Hamilton's equations.


Furthermore, if we had chosen $\beta=0$ above, we would have $L \sim \dot{q}/q$, which gives an Euler-Lagrange equation of $0=0$.
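
For anyone who wants to double-check the algebra above, here is a short SymPy sketch (my own, with the same symbols $\alpha,\beta,k$; the overall constant $p_0 q_0$ is dropped as in the post):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, k, alpha, beta = sp.symbols('t k alpha beta')
q = sp.Function('q')

# L ~ alpha*qdot/q - beta*k**2*q/qdot, as derived above
L = alpha * q(t).diff(t) / q(t) - beta * k**2 * q(t) / q(t).diff(t)

eq, = euler_equations(L, [q(t)], t)
print(sp.simplify(eq.lhs))
# The result is proportional to beta*(q*q'' - q'^2): for beta != 0 the equation of
# motion is q*q'' = q'^2, and for beta = 0 it collapses to 0 = 0, as found above.
```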


What is going on here? I feel like I probably did something illegitimate when deriving the Lagrangian from the Hamiltonian, but I don't know exactly what.


Is this particular Hamiltonian just illegitimate to start with? Is it just not possible to use the Lagrangian formalism with this system? Is there any Lagrangian that will give equations of motion $\dot{q} = k q$ (or, more likely, $\ddot{q} = k \dot{q}$)?


Finally, does any of this matter? Are there any imaginable physical systems where we can define a coordinate $q$ such that $H\sim qp$, or $L\sim q/\dot{q}$, or $L\sim \dot{q}/q$?




general relativity - Why do we need coordinate-free descriptions?


I was reading a book on differential geometry which said that a problem early physicists such as Einstein faced was the choice of coordinates, and that they realized physics does not obey man-made coordinate systems.


And why not? When I am walking from school to my house, I am walking on a 2D plane, the set $\mathbb{R} \times \mathbb{R}$ of pairs of reals. The path of a plane in the sky can be characterized by three parameters. A point on a ball rotates in spherical coordinates. A current flows through an inductor in cylindrical coordinates.


Why do we need coordinate-free description in the first place? What things that exist can be better described if we didn't have a coordinate system to describe it?



Answer



That's a very good question. While it may seem "natural" that the world is ordered like a vector space (it is the order that we are accustomed to!), it's indeed a completely unnatural requirement for physics that is supposed to be built on local laws only. Why should there be a perfect long range order of space, at all? Why would space extend from here to the end of the visible universe (which is now some 40 billion light years away) as a close to trivial mathematical structure without any identifiable cause for that structure? Wherever we have similar structures, like crystals, there are causative forces that are both local (interaction between atoms) and global (thermodynamics of the ordered phase which has a lower entropy than the possible disordered phases), which are responsible for that long range order. We don't have that causation argument for space (or time), yet.



If one can't find an obvious cause (and so far we haven't), then the assumption that space "has to be ordered like it is" is not natural and all the theory that we build on that assumption is built on a kludge that stems from ignorance.


"Why do we need coordinate free in the first place?"... well, it's not clear that we do. Just because we have been using them, and with quite some success, doesn't mean that they were necessary. It only means that they were convenient for the description of the macroscopic world. That convenience does, unfortunately, stop once we are dealing with quantum theory. Integrating over all possible momentum states in QFT is an incredibly expensive and messy operation that leads to a number of trivial and not so trivial divergences that we have to fight all the time. There are a few hints from nature and theory that it may actually be a fools errand to look at nature in this highly ordered way and that trying to order microscopically causes more problems than it solves. You can listen to Nima Arkani Hamed here giving a very eloquent elaboration of the technical (not just philosophical) problems with our obsession with space-time coordinates: https://www.youtube.com/watch?v=sU0YaAVtjzE. The talk is much better in the beginning when he lays out the problems with coordinate based reasoning and then it descends into the unsolved problem of how to overcome it. If anything, this talk is a wonderful insight into the creative chaos of modern physics theory.


As a final remark I would warn you about the human mind's tendency to adopt things that it has heard from others as "perfectly normal and invented here". Somebody told you about $\mathbb R$ and you have adopted it as if it was the most natural thing in the world that an uncountable infinity of non-existing objects called "numbers" should exist and that they should magically map onto real world objects, which are quite countable and never infinite. Never do that! Not in physics and not in politics.


Sunday 21 October 2018

experimental physics - What is the smallest item for which gravity has been recorded or observed?


What is the smallest item for which gravity has been recorded or observed? By this, I mean the smallest object whose gravitational effect upon another object has been detected. (Many thanks to Daniel Griscom for that excellent verbiage.)


In other words, we have plenty of evidence that the planet Earth exhibits gravitational force due to its mass. We also have theories that state that all mass, regardless of size, results in gravitational force.


What is the smallest mass for which its gravity has been recorded or observed?


(By the way, I hoped this Physics SE question would contain the answer, but it wound up being about gravity at the center of planet Earth.)





rotational kinematics - Minimum velocity of the particle at the highest point




A particle of mass m is fixed to one end of a light rod of length l and rotated in a vertical circular path about its other end. What is the minimum speed of the particle at the highest point?



It occurs to me that the answer should be zero, but I cannot come up with a strong conceptual reasoning behind it. Can anyone please answer the question with a proper explanation?


Thanks.



Answer



In order to understand this motion, let us first examine the case of a particle of mass $m$ moving in a vertical circle at the end of an inextensible string of length $R$; the string tension, together with gravity, provides the necessary centripetal force.


Let the initial velocity be $u$ at the lowest point of the vertical circle. After some time the particle has moved to another point of the circle, having traversed an angle $\theta$; its height above the lowest point is then $$h = R(1 - \cos\theta).$$ Neglecting all non-conservative forces, the work-energy theorem gives the speed at this point: $$v^2 = u^2 -2gh.$$ The centripetal equation there reads $$T - mg\cos\theta = \dfrac{mv^2}{R}.$$



The particle will complete the circle only if the string does not go slack at the highest point, $\theta = \pi$; there must still be a centripetal force there. To find the minimum speed, we set $T = 0$, so that gravity alone supplies the centripetal force: $$mg = \dfrac{mv^2}{R} \implies v= \sqrt{gR}.$$ This is the minimum speed at the top which the particle must have in order to cover the whole circle. Why set $T = 0$ to find the minimum? Because tension is only required when the speed exceeds $\sqrt{gR}$, to provide the extra centripetal force for the extra speed. And why doesn't gravity simply pull the particle down in free fall instead of acting as the centripetal force? The particle would indeed be in free fall if it had zero velocity at the top; but remember, when the velocity vector is perpendicular to a force pointing radially towards a fixed point, that is precisely the configuration of circular motion. Using $v_\text{min}^2 = u_\text{min}^2 -2gh$ with $h = 2R$ gives $u_\text{min} = \sqrt{5gR}$.




Now come to your case, where the string is replaced by a light rod; the constraint has changed. We no longer need to worry about whether the particle will fall away from the circle at the top, so the particle can have any speed there. It therefore need not have the minimum initial speed $\sqrt{5gR}$. However, the speed CANNOT be exactly zero at the topmost point, because with zero speed there would be no centripetal acceleration carrying the particle past the top to complete the circle. It can, however, be arbitrarily close to zero, i.e. $v > 0$ but $v \approx 0$. To complete the case, the corresponding minimum initial speed follows from $$\dfrac{m(u^2 - v ^2)}{2} = mgh \implies u = 2\sqrt{gR}.$$ So the minimum initial speed must be infinitesimally larger than $2\sqrt{gR}$, and the minimum speed at the topmost point is infinitesimally greater than $0$.
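
A quick numerical experiment supports this threshold. The sketch below is my own, with illustrative values $g=9.81\,\mathrm{m/s^2}$ and $R=1\,\mathrm{m}$ (not given in the problem); it integrates the rod-pendulum equation and launches the mass from the bottom with speeds just below and just above $2\sqrt{gR}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

g, R = 9.81, 1.0   # illustrative values

def rod_pendulum(t, y):
    theta, omega = y                          # theta measured from the lowest point
    return [omega, -(g / R) * np.sin(theta)]

for factor in (0.999, 1.001):
    u = factor * 2.0 * np.sqrt(g * R)         # launch speed at the bottom
    sol = solve_ivp(rod_pendulum, (0.0, 15.0), [0.0, u / R], max_step=1e-3)
    print(f"u = {u:.4f} m/s -> largest angle reached = {np.degrees(sol.y[0].max()):.1f} deg")
# Just below 2*sqrt(gR) the mass turns back before reaching 180 deg; just above it
# passes the top (with a speed that can be made as small as we like), as argued above.
```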


Note of apology: sorry for including some extra material, but I think it is needed to understand why the minimum speed at the top cannot be exactly $0$ if there is to be a centripetal force that carries the particle around the whole circle. Most texts simply write $v_\text{min} = 0$ at the topmost point, forgetting that there must be a centripetal force in order to traverse the whole circle, and if the speed is zero then $\dfrac{mv^2}{R}$ shows that the centripetal force is also $0$. The speed at the top can, however, be arbitrarily close to zero, so the minimum topmost-point speed is just infinitesimally greater than zero.


electromagnetism - Can a magnetic field be induced without an electric field?


Can a magnetic field be induced without an electric field? Because, as far as I know, a time-varying electric field induces a magnetic field and vice versa. But in the case of conductors carrying current, it doesn't seem that the electric field varies with time, so how is a magnetic field induced?



Answer




One of Maxwell’s four equations for electromagnetism in a vacuum shows how magnetic fields are produced:


$$\nabla\times\mathbf{B}=\frac{1}{c}\left(4\pi\mathbf{J}+\frac{\partial\mathbf{E}}{\partial t}\right).$$


(I’ve written it in Gaussian units.)


From this equation you can see that there are two different sources for magnetic fields: the first is a current density, and the second is a changing electric field.


So to have a magnetic field you do not need to have a time-varying electric field. You can just have moving charge. But when a magnetic field is produced by moving charge, physicists don’t call it “induced”.
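
As a concrete illustration of the first (steady-current) source term, here is a small sketch of my own computing the field of a long straight wire from Ampère's law in Gaussian units, $B = 2I/(cr)$; the current and distance are assumed values, not from the question:

```python
# Field of a long straight wire carrying a steady current (no dE/dt term needed).
c = 2.998e10            # speed of light in cm/s (Gaussian units)
I = 1.0 * 2.998e9       # a 1 ampere current expressed in statamperes
r = 1.0                 # distance from the wire in cm

B = 2.0 * I / (c * r)   # magnetic field in gauss
print(f"B = {B:.2f} G  (about {B * 1e-4:.1e} T)")   # ~0.2 G = 2e-5 T
```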


thermodynamics - How would water inside a closed container boil in zero gravity?


Let's say you have a spherical container in a zero-gravity environment filled completely with water. When you heat the sphere uniformly, how would the water boil?


My intuition is that the water would start boiling at the very outer layer of the sphere first. So you would have a ball of water inside the sphere that gets smaller and smaller as more water turns into gas. Thus the pressure rises and the boiling point goes up and the layer of water vapour gets bigger and bigger, which means the water in the middle will take longer and longer to boil.


Is this correct or am I missing any important concepts here?



Answer



Given your conditions (a strong rigid container filled with water) it would never boil no matter how much heat you add. The lack of gravity would not change this answer.


Boiling causes water to change from a liquid to a gas. As a liquid, the molecules are always touching (but they are not rigidly connected as they are in ice). So as water they take up little space. In a gas, as steam, the molecules are flying around freely and spend relatively little time bumping into each other. They take up a lot more space as a gas. The pressure of the gas on the walls of a container arises from the constant bombardment of the molecules hitting the walls of the container. If there is less space for the steam then there will be more bumping into the walls so there will be more pressure.



What would happen in your question is that as you add heat the water would try to expand a little. Finding no room for expansion, the pressure would rise dramatically. The boiling point of water is very dependent on the pressure, so as the pressure goes up so does the boiling point.
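
To put a rough number on "rise dramatically": at constant volume the pressure climbs at a rate $\mathrm{d}P/\mathrm{d}T \approx \alpha_V/\kappa_T$. The sketch below is my own estimate using approximate room-temperature property values for water (assumptions, not figures from the answer):

```python
# Approximate room-temperature properties of liquid water (assumed values)
alpha_V = 2.1e-4      # volumetric thermal expansion coefficient, 1/K
kappa_T = 4.6e-10     # isothermal compressibility, 1/Pa

dP_per_K = alpha_V / kappa_T   # pressure rise per kelvin at constant volume, in Pa/K
print(f"~{dP_per_K / 101325:.1f} atm of extra pressure per 1 K of heating")   # a few atm per K
```

Even a modest temperature rise therefore drives the pressure up by tens of atmospheres, which is why the sealed liquid never gets a chance to boil in the ordinary sense.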


At some point it will pass the "critical point" of water, beyond which there is no phase change between liquid and gas. The critical point of water is known to be about 374° C and 3212 PSI, which is well over 200 atmospheres.


Now, if you change your conditions to an open container in zero gravity, but inside a spacecraft with normal atmospheric pressure, some of the water might gradually float out and form balls because of surface tension. But this is not boiling. It would be hard to heat the water this way.


If you change the conditions again so that the open container is exposed to the vacuum of space, then the vacuum means the boiling point of water pretty much matches the freezing point. Water at 20° C would be way above the boiling point, and so it would instantly (perhaps explosively) boil until the cooling effect of doing so brought the temperature of any remaining water down to near freezing.


At 0.01° C and just under 1% of an atmosphere of pressure, you have reached the triple point of water. This is where ice, liquid water and steam can exist in equilibrium. But in the vacuum of space you have zero pressure and there can be no liquid water. At zero pressure, water sublimes (changes directly from ice to steam with no liquid state, like dry ice) at about -60° C.


Search "critical point of water" and "triple point of water" for more info, Wikipedia articles and YouTube videos.


optics - Analytic solution for angle of minimum deviation?



[Figure: ray passing through a prism of apex angle $A$, showing the angle of incidence $\theta_1$, refraction angles $\theta_2,\theta_3$, angle of emergence $\theta_4$, and deviation $\delta$]


Consider a simple prism with prism angle $A$, angle of incidence $\theta_1$, angle of emergence $\theta_4$, and first and second angles of refraction $\theta_2,\theta_3$. The refractive index of the prism (w.r.t. the surroundings) is $n$, and the angle of deviation is $\delta$. I wanted to derive an equation giving the relation between $\theta_1$ and $\delta$, whose plot for monochromatic light is as in the animation here. Below is my attempt so far (equations 2 and 3 follow from the geometry of the figure): $$\theta_4=\sin^{-1}\!\big(n\sin\theta_3\big)$$ $$A+\delta=\theta_1+\theta_4$$ $$A=\theta_2+\theta_3$$ $$\delta=\theta_1+\sin^{-1}\!\big(n\sin(A-\theta_2)\big)-A$$ $$\delta=\theta_1+\sin^{-1}\!\Big(n\sin\big(A-\sin^{-1}\tfrac{\sin\theta_1}{n}\big)\Big)-A$$ This last equation, when I plotted it on WolframAlpha for an equilateral prism with $n=1.5$, yielded the required curve in the range $28.5^\circ<\theta_1<90^\circ$ (to avoid total internal reflection). But then, how do I use this equation to analytically find the angle of minimum deviation, and the fact that at minimum deviation $\theta_1=\theta_4$? (I tried taking the derivative, but it turned out to be too complex.)
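
One way to check the result is to minimize the last expression numerically and compare with the standard minimum-deviation formula $\delta_\text{min} = 2\sin^{-1}\!\big(n\sin\tfrac{A}{2}\big) - A$, at which point $\theta_1=\theta_4$. The sketch below is mine, using the same equilateral prism with $n=1.5$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

n, A = 1.5, np.radians(60.0)          # equilateral prism, as in the question

def deviation(theta1):
    theta2 = np.arcsin(np.sin(theta1) / n)
    return theta1 + np.arcsin(n * np.sin(A - theta2)) - A

res = minimize_scalar(deviation, bounds=(np.radians(30.0), np.radians(89.0)), method='bounded')
theta1_min, delta_min = res.x, res.fun

delta_analytic = 2.0 * np.arcsin(n * np.sin(A / 2.0)) - A     # standard analytic result
theta4_at_min = np.arcsin(n * np.sin(A - np.arcsin(np.sin(theta1_min) / n)))

print(np.degrees(delta_min), np.degrees(delta_analytic))      # both ~37.2 deg
print(np.degrees(theta1_min), np.degrees(theta4_at_min))      # both ~48.6 deg: theta1 = theta4
```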




Saturday 20 October 2018

optics - Why aren't all objects transparent?


I know that for an object to be transparent, visible light must go through it undisturbed. In other words, if the light energy is sufficiently high to excite one of the electrons in the material, then it will be absorbed, and thus, the object will not be transparent. On the other hand, if the energy of the light is not sufficient to excite one of the electrons in the material, then it will pass through the material without being absorbed, and thus, the object will appear transparent.


My question is: For a non-transparent object like a brick, when the light is absorbed by an electron, it will eventually be re-emitted. When the light is re-emitted, won't the object appear transparent, since the light will have essentially gone through the object?



Answer




For an object to be transparent, the light must be emitted in the same direction with the same wavelength as initially. When light strikes a brick, some is reflected in other directions, and the rest is re-emitted in longer, non-visible wavelengths. That is why a brick is opaque to visible light.


Some materials we consider transparent, like glass, are opaque to other wavelengths of light. Most window glass these days, for example, is coated with infrared- and ultraviolet-reflective films to increase insulative capacity. You can see through these fine with your eyes, but an infrared-based night vision system would see them as opaque objects. Another example is that most materials are transparent to radio waves, which is why both radio broadcasts and radio telescopes are so successful.


Understanding Stagnation point in pitot fluid

What is a stagnation point in fluid mechanics? At the open end of the Pitot tube the velocity of the fluid becomes zero. But that should result...