Sunday, 30 June 2019

thermodynamics - Spontaneous conversion of heat into work at negative temperatures


Consider a heavy macroscopic object moving in a gas. Friction causes its kinetic energy to be converted into heat. Thermodynamically, there is (effectively) no entropy associated with the kinetic energy because all the energy is concentrated in a single degree of freedom. Therefore, if an amount $J$ of energy is converted from kinetic energy into heat, the total entropy change is $J/T$, so we can see that this is a spontaneous process.


But now consider an object moving relative to a gas with negative temperature. Such a thing has been created in the laboratory, so this is not just idle theoretical speculation. If an amount $J$ of kinetic energy gets converted into heat, the total entropy change is still $J/T$, but now this is negative. This seems to mean that the opposite process - conversion of heat into kinetic energy, accelerating the object - would be spontaneous.


This generalises to all other processes that convert work into heat. For example, performing Joule's paddle-wheel experiment with a negative-temperature gas should cause the paddle to turn, and negative-temperature gas flowing through a pipe should experience an accelerating force rather than a decelerating one. Just as superfluids have zero viscosity, it seems that negative-temperature fluids must have negative viscosity.


I realise that this does not lead to perpetual motion. As heat is converted into work the inverse temperature ($1/T$) will increase until it reaches zero. But what does look odd is that in some ways the arrow of time appears to be reversed.


I realise that experimentally we're very far from being able to produce the macroscopic quantities of negative-temperature fluids that would be required in order to observe these things. But is it possible in principle? And if it is, would we actually see the phenomena I described, or is there some fundamental reason why they wouldn't happen after all? And has such a connection between negative temperatures and the arrow of time been discussed or debated in the literature?




quantum mechanics - Angular momentum - maximum and minimum values for $m_{\ell}$


I want to work out the maximum and minimum values for $m_{\ell}$. I know that $\lambda \geq m_{\ell}$, therefore $m_{\ell}$ is bounded. In the lecture notes there is the following assumption: $$ \hat{L_{+}}|\lambda,m_{max}\rangle=|0\rangle \\ \hat{L_{-}}|\lambda,m_{min}\rangle=|0\rangle $$ I think I understand this. Since the action of the ladder operators is to keep the value of $\lambda$ and raise (or lower) $m_{\ell}$, you cannot "go up" from $m_{max}$ or down from $m_{min}$. However, I do not understand why the result of the operation should be $|0\rangle$.


It turns out we can write the product $\hat{L_{-}}\hat{L_{+}}$ as: $$ \hat{L_{-}}\hat{L_{+}}= \hat{L^2}-\hat{L_{z}^2}-\hbar\hat{L_{z}} $$ Then we evaluate the following expression: $$ \hat{L^2} |\lambda,m_{max}\rangle = (\hat{L_{-}}\hat{L_{+}}+\hat{L_{z}^2}+\hbar\hat{L_{z}})|\lambda,m_{max}\rangle $$ Since $\hat{L_{+}}|\lambda,m_{max}\rangle=|0\rangle $, then $\hat{L_{-}}\hat{L_{+}}|\lambda,m_{max}\rangle=\hat{L_{-}}|0\rangle =|0\rangle $. And $\hat{L_z}|\lambda,m_{max}\rangle = \hbar m_{max}|\lambda,m_{max}\rangle $. These two relations imply: $$ \hat{L^2} |\lambda,m_{max}\rangle =\hbar^2 m_{max}(m_{max}+1)|\lambda,m_{max}\rangle $$ Now I want to know how to compute $\hat{L^2} |\lambda,m_{min}\rangle $, since my lecture notes only state the result. My problem is that I will have $\hat{L_{-}}\hat{L_{+}}|\lambda,m_{min}\rangle$, but I can no longer say that $\hat{L_{+}}|\lambda,m_{min}\rangle=|0\rangle$. I tried to compute $\hat{L_{+}}\hat{L_{-}}$ and plug it into the expression, but I had no success. How can I solve this?


PS. This is not homework, I'm just trying to derive the expression stated in the lecture notes.




Answer



Your lecture notes, or your transcription of them, are in error. You should have $$ \hat{L_{+}}|\lambda,m_\mathrm{max}\rangle= 0 \\ \hat{L_{-}}|\lambda,m_\mathrm{min}\rangle= 0 $$ That is, raising the maximum-projected state doesn't give you $\left|\lambda,m\right> = \left|0,0\right>$, a state with no angular momentum, and it doesn't give you a vacuum state $\left|0\right>$ with no particles in it. It gives you the number zero. This means, among other things, that the overlap of any state $\left|x\right>$ with $\hat{L_{+}}|\lambda,m_\mathrm{max}\rangle$ is zero.


As for your question about computing $ \hat {L^2} \left| \lambda,m \right>, $ your lecture notes should contain enough information for you to prove that the commutator between $L^2$ and $L_z$ is zero: \begin{align} L^2 L_z \left|x\right> = L_z L^2 \left|x\right>, \quad\quad \text{ for any state $\left|x\right>$} \end{align} which means that the eigenvalue of $L^2$ cannot depend on the eigenvalue $m$ of $L_z$. In fact the eigenvalue of $L^2$ on a state $\left|\lambda,m\right>$ is always $\hbar^2\lambda(\lambda+1)$, which is the same as your result since $m_\mathrm{max} = -m_\mathrm{min} = \lambda$.
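For concreteness, here is a minimal numerical sketch of both points for the case $\ell=1$ (my own illustration: $\hbar=1$ and the basis ordering $|1,1\rangle,|1,0\rangle,|1,-1\rangle$ are assumptions, not anything from the notes): $\hat L_+$ on the top state gives the number zero, and $[\hat L^2,\hat L_z]=0$ with eigenvalue $\ell(\ell+1)=2$ on every $|1,m\rangle$.

    import numpy as np

    s2 = np.sqrt(2.0)
    # matrix elements <1,m+1|L+|1,m> = sqrt(l(l+1) - m(m+1)), basis |1,1>,|1,0>,|1,-1>
    Lp = np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]])
    Lm = Lp.conj().T
    Lz = np.diag([1.0, 0.0, -1.0])
    Lx, Ly = (Lp + Lm) / 2, (Lp - Lm) / 2j
    L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

    top = np.array([1.0, 0.0, 0.0])           # |l=1, m_max=1>
    print(Lp @ top)                           # the zero vector (the number 0), not a state "|0>"
    print(np.allclose(L2 @ Lz - Lz @ L2, 0))  # [L^2, Lz] = 0
    print(np.allclose(L2, 2 * np.eye(3)))     # l(l+1) = 2 on every |1,m>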


astrophysics - Can a gas cloud of pure helium collapse and ignite into a star?


Assuming there could be a giant gas cloud with negligible amount of hydrogen and metal (elements with atomic number $Z\geq3$), could it collapse gravitationally and form a pure helium star that would skip the main sequence entirely?



I think it's possible in principle but my main concern is that the ignition of $^4$He is a 3-body reaction (the triple alpha process) and requires a higher temperature and a higher density than the pp-chain or the CNO cycle. Would the gas keep collapsing until it reaches the critical density for carbon formation or would it reach hydrostatic equilibrium before that point, preventing further collapse and star formation?


Perhaps it's only a matter of the cloud having a minimal mass that allows the nuclear reaction? If so, how can I predict this minimal mass?



Answer



The answer is that if you ignore degeneracy pressure, then indeed a collapsing cloud of helium must eventually reach a temperature that is sufficient to initiate the triple alpha $(3\alpha)$ process. However, electron degeneracy pressure means that below a threshold mass, the centre will not become hot enough to ignite He before the star is supported against further collapse.


The virial theorem tells us that (roughly) $$ \Omega = - 3 \int P\ dV\ ,$$ where $\Omega$ is the gravitational potential energy, $P$ is the pressure and the integral is over the volume of the star.


In general this can be tricky to evaluate, but if we (for a back of the envelope calculation) assume a uniform density (and pressure), then this transforms to $$ -\frac{3GM^2}{5R} = -3 \frac{k_B T}{\mu m_u} \int \rho\ dV = -3 \frac{k_B T M}{\mu m_u}\ , \tag*{(1)}$$ where $M$ is the stellar mass, $R$ the stellar radius, and $\mu$ the number of mass units per particle ($=4/3$ for ionised He).


As a contracting He "protostar" radiates away gravitational potential energy, it will become hotter. From equation (1) we have $$ T \simeq \left(\frac{G \mu m_u}{5k_B}\right) \left( \frac{M}{R} \right)$$ Thus for a star of a given mass, there will be a threshold radius at which the contracting protostar becomes hot enough to ignite He ($T_{3\alpha} \simeq 10^{8}$ K). This approximation ignores any density dependence, but this is justified since the triple alpha process goes as density squared, but temperature to the power of something like 40 (Clayton, Principles of Stellar Evolution and Nucleosynthesis, 1983, p.414). Putting in some numbers $$ R_{3\alpha} \simeq \left(\frac{G \mu m_u}{5k_B}\right) \left( \frac{M}{T_{3\alpha}} \right) = 0.06 \left(\frac{M}{M_{\odot}}\right) \left( \frac{T_{3\alpha}}{10^8\ {\rm K}}\right)^{-1}\ R_{\odot} \tag*{(2)}$$
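For what it's worth, a quick numerical check of the prefactor in equation (2), using standard SI constants and $\mu = 4/3$:

    G, m_u, k_B = 6.674e-11, 1.661e-27, 1.381e-23   # SI
    M_sun, R_sun = 1.989e30, 6.957e8                # kg, m
    mu, T = 4.0 / 3.0, 1e8                          # ionised He; 3-alpha ignition temperature, K

    R = (G * mu * m_u / (5 * k_B)) * (M_sun / T)    # equation (2) at M = 1 M_sun
    print(R / R_sun)                                # ~0.06, as quoted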


The question then becomes, can the star contract to this sort of radius before degeneracy pressure steps in to support the star and prevent further contraction?


White dwarfs are supported by electron degeneracy pressure. A "normal" white dwarf is composed of carbon and oxygen, but pure helium white dwarfs do exist (as a result of mass transfer in binary systems). They have an almost identical mass-radius relationship and Chandrasekhar mass because the number of mass units per electron is the same for ionised carbon, oxygen or helium.


A 1 solar mass white dwarf governed by ideal degeneracy pressure has a radius of around $0.008 R_{\odot}$, comfortably below the back-of-the-envelope threshold at which He burning would commence in a collapsing protostar. So we can conclude that a 1 solar mass ball of helium would ignite before electron degeneracy became important. The same would be true for higher mass protostars, but at lower masses there will come a point where electron degeneracy is reached prior to helium ignition. According to my back-of-the-envelope calculation that will be below around $0.3 M_{\odot}$ (where a white dwarf would have a radius of $0.02 R_{\odot}$), but an accurate stellar model would be needed to get the exact figure.



I note the discussion below the OP's question. We could do the same thing for a pure hydrogen protostar, where $\mu=0.5$ and the number of mass units per electron is 1. A hydrogen white dwarf is a factor of $\sim 3$ bigger at the same mass (since $R \propto $ the number of mass units per electron to the power of -5/3 e.g. Shapiro & Teukolsky, Black Holes, White Dwarfs and Neutron Stars). But of course it is not the triple alpha reaction we are talking about now, but the pp-chain. Figuring out a temperature at which this is important is more difficult than for the $3\alpha$ reaction, but it is something like $5\times 10^{6}$ K. Putting this together with the new $\mu$, the leading numerical factor in equation (2) grows to 0.5. Thus my crude calculation of a threshold radius for pp hydrogen burning would intersect the hydrogen white dwarf mass-radius relationship at something like $0.15M_{\odot}$, which is quite near the accepted star/brown dwarf boundary of $0.08M_{\odot}$ (giving some confidence in the approach).


quantum field theory - chirality oscillations in weak interaction



As far as I have understood, the mass $m$ of a fermion causes a coupling of both chiralities $\psi_L$ and $\psi_R$. This coupling would induce an oscillation of the chirality within a time scale determined by $\frac 1 m$.


Furthermore, it is known that the weak interaction only couples to the left-handed particles, i.e. only to $\psi_L$.


Combining these two statements, one would have to conclude that the weak interaction of a massive fermion is time dependent, i.e. it is stronger when $\psi_L$ dominates and vanishes completely half an oscillation period later. However, I have never heard about such a strange phenomenon and I conjecture that there's a mistake in my reasoning somewhere.


I'd be grateful if someone could help me find it.




Are orbiting planets an example of perpetual motion?


I understand that perpetual motion is impossible, but I became confused when a friend of mine told me that the orbits of the planets are an example of perpetual motion. Will Earth's orbit ever end, and what's wrong with his statement? I'm asking for a simple answer to 'disprove' my friend's 'theory'.



Answer



A perpetual-motion machine is a system that produces work (energy) while maintaining its state (which implies it can produce energy forever without changing state).



Orbiting planets are not a perpetual motion system. NOTHING we know of or can explain is a perpetual motion system.


Orbiting planets APPEAR to be a perpetual motion system, because the length of time needed for an OBSERVABLE (measurable) state change to occur is enormous (many, many, many, many lifetimes).


Saturday, 29 June 2019

maxwell equations - Justification of Physical Laws



I'm a maths student, and I've studied quite a lot of mathematical physics. All my courses have a similar style - we state the laws of the system, and then deduce the physical consequences as theorems. It has become more and more apparent to me (especially studying electromagnetism and QM) that I have no idea why you would expect these equations to be true.


What I'm looking for is either a kind of physical justification "this is how object X behaves, here are mathematical statements that say this precisely" (similar to momentum-based arguments in fluid dynamics), or some kind of experimental justification (the more direct the better), for physical laws. I'm particularly interested in this with respect to Maxwell's equations, but I'm interested in any others as well.


Edit for clarification:

Perhaps I worded myself poorly. The axioms of a physical theory (which is what I meant by 'laws'), like any set of axioms, are invented by human beings with the aim of capturing certain intuitive properties of an object. For example, the axioms defining a vector space in mathematics are not completely arbitrary - they aim to capture the properties of various objects ($\mathbb{R}^{n}$, or function spaces for example) that we have an intuitive idea of.


I am asking the equivalent question with respect to physical theories, and especially in terms of Maxwell's equations, e.g.: what are the intuitive properties of charges and currents that lead one to write down Maxwell's equations? Note that the answer 'they match up with experiment' doesn't fit, because many theories could match the same observations and be mutually contradictory (e.g. Newton vs relativity at small velocities). There also is no condition on the correctness of the interpretation - any explanation of, say, Newtonian mechanics, would necessarily not be an absolutely correct picture of the world (or it would be the correct physical theory, too!).


Obviously such a justification would have to be empirical to have any grounding in reality. For instance if we want to know if Newton's second law is true, we can set up an experiment to check it. Less experimentally, most people take the existence of absolute time, or inertial frames of reference, to be intuitively obvious. Similarly, properties like acceleration and momentum have physical interpretations, and a good explanation could be given in terms of such quantities.




mathematical physics - Is there a "covariant derivative" for conformal transformation?


A primary field is defined by its behavior under a conformal transformation $x\rightarrow x'(x)$: $$\phi(x)\rightarrow\phi'(x')=\left|\frac{\partial x'}{\partial x}\right|^{-h}\phi(x)$$



It's fairly easy to see that the gradient of the field doesn't have this nice property under the same transformation; it picks up an inhomogeneous term. Still, is it possible to construct a derivative that would behave nicely under conformal mappings and give the usual derivative for Lorentz transformations? By adding a "connection", similarly to what is done in general relativity or gauge theories. And if not, why?



Answer



I) Here we discuss the problem of defining a connection on a conformal manifold $M$. We start with a conformal class $[g_{\mu\nu}]$ of globally$^{1}$ defined metrics


$$\tag{1} g^{\prime}_{\mu\nu}~=~\Omega^2 g_{\mu\nu}$$


given by Weyl transformations/rescalings. Under a mild assumption about the manifold $M$ (paracompactness), we may assume that there exists a conformal class $[A_{\mu}]$ of globally defined co-vectors/one-forms connected via Weyl transformations as


$$\tag{2} A^{\prime}_{\mu}~=~A_{\mu} + \partial_{\mu}\ln(\Omega^2). $$


In particular it is implicitly understood that a Weyl transformation [of a pair $(g_{\mu\nu},A_{\mu})$ of representatives] acts in tandem/is synchronized with the same globally defined function $\Omega$ in eqs. (1) and (2) simultaneously.


II) Besides Weyl transformations, we can act (in the active picture) with diffeomorphisms. Locally, in the passive picture, the pair $(g_{\mu\nu},A_{\mu})$ transforms as covariant tensors


$$ \tag{3} g_{\mu\nu}~=~ \frac{\partial x^{\prime \rho}}{\partial x^{\mu}} g^{\prime}_{\rho\sigma}\frac{\partial x^{\prime \sigma}}{\partial x^{\nu}}, $$


$$ \tag{4} A_{\mu}~=~ \frac{\partial x^{\prime \nu}}{\partial x^{\mu}} A^{\prime}_{\nu}. $$



under general coordinate transformations


$$ \tag{5} x^{\mu} ~\longrightarrow~ x^{\prime \nu}~= ~f^{\nu}(x). $$


III) We next introduce the unique torsionfree tangent-space Weyl connection $\nabla$ with corresponding Christoffel symbols $\Gamma^{\lambda}_{\mu\nu}$ that covariantly preserves the metric in the following sense:


$$ \tag{6} (\nabla_{\lambda}-A_{\lambda})g_{\mu\nu}~=~0. $$


The Weyl connection $\nabla$ and its Christoffel symbols $\Gamma^{\lambda}_{\mu\nu}$ are independent of the pair $(g_{\mu\nu},A_{\mu})$ of representatives within the conformal class $[(g_{\mu\nu},A_{\mu})]$. (But the construction depends of course on the conformal class $[(g_{\mu\nu},A_{\mu})]$.) In other words, the Weyl Christoffel symbols are invariant under Weyl transformations


$$ \tag{7} \Gamma^{\prime\lambda}_{\mu\nu}~=~\Gamma^{\lambda}_{\mu\nu}.$$


The lowered Weyl Christoffel symbols are uniquely given by


$$ \Gamma_{\lambda,\mu\nu}~=~g_{\lambda\rho} \Gamma^{\rho}_{\mu\nu} $$ $$ ~=~\frac{1}{2}\left((\partial_{\mu}-A_{\mu})g_{\nu\lambda} +(\partial_{\nu}-A_{\nu})g_{\mu\lambda}-(\partial_{\lambda}-A_{\lambda})g_{\mu\nu} \right) $$ $$\tag{8} ~=~\Gamma^{(g)}_{\lambda,\mu\nu}-\frac{1}{2}\left(A_{\mu}g_{\nu\lambda}+A_{\nu}g_{\mu\lambda}-A_{\lambda}g_{\mu\nu} \right), $$


where $\Gamma^{(g)}_{\lambda,\mu\nu}$ denote the lowered Levi-Civita Christoffel symbols for the representative $g_{\mu\nu}$. The lowered Weyl Christoffel symbols $\Gamma_{\lambda,\mu\nu}$ scale under Weyl transformations as


$$ \tag{9} \Gamma^{\prime}_{\lambda,\mu\nu}~=~\Omega^2\Gamma_{\lambda,\mu\nu}.$$
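As a sanity check, here is a small sympy sketch (a hypothetical 2D example of my own; the particular metric, one-form and Weyl factor are arbitrary assumptions) verifying eq. (7): the raised Weyl Christoffel symbols built from eq. (8) are unchanged under the simultaneous transformations (1) and (2).

    import sympy as sp

    x, y = sp.symbols('x y')
    coords = [x, y]

    g = sp.diag(1 + x**2, 1 + y**2)   # an arbitrary metric (assumption)
    A = [y, x]                        # an arbitrary one-form (assumption)
    Omega = sp.exp(x*y)               # an arbitrary Weyl factor (assumption)

    def lowered(g, A):
        # lowered Weyl Christoffels, eq. (8), middle line
        D = lambda mu, f: sp.diff(f, coords[mu]) - A[mu]*f
        return [[[sp.Rational(1, 2)*(D(m, g[n, l]) + D(n, g[m, l]) - D(l, g[m, n]))
                  for n in range(2)] for m in range(2)] for l in range(2)]

    def raised(g, A):
        ginv, low = g.inv(), lowered(g, A)
        return [[[sp.simplify(sum(ginv[r, l]*low[l][m][n] for l in range(2)))
                  for n in range(2)] for m in range(2)] for r in range(2)]

    g2 = Omega**2 * g                                                     # eq. (1)
    A2 = [A[m] + sp.diff(sp.log(Omega**2), coords[m]) for m in range(2)]  # eq. (2)

    G1, G2 = raised(g, A), raised(g2, A2)
    print(all(sp.simplify(G1[r][m][n] - G2[r][m][n]) == 0                 # eq. (7)
              for r in range(2) for m in range(2) for n in range(2)))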



The corresponding determinant bundle has a Weyl connection given by


$$ \tag{10} \Gamma_{\lambda}~=~\Gamma^{\mu}_{\lambda\mu}~=~(\partial_{\lambda}-A_{\lambda})\ln \sqrt{\det(g_{\mu\nu})}.$$


IV) Let us next define a conformal class $[\rho]$ of a density $\rho$ of weights $(w,h)$, which scales under Weyl transformations as


$$ \tag{11} \rho^{\prime}~=~ \Omega^w\rho $$


with Weyl weight $w$, and as a density


$$\tag{12} \rho^{\prime}~=~\frac{\rho}{J^h}$$


of weight $h$ under general coordinate transformations (5). Here


$$\tag{13} J ~:=~\det(\frac{\partial x^{\prime \nu}}{\partial x^{\mu}}) $$


is the Jacobian.


Example: The determinant $\det(g_{\mu\nu})$ is a density with $h=2$ and $w=2d$, where $d$ is the dimension of the manifold $M$.



V) The concept of (conformal classes of) densities $\rho$ of weights $(w,h)$ can be generalized to (conformal classes of) tensor densities $T^{\mu_1\ldots\mu_m}_{\nu_1\ldots\nu_n}$ of weights $(w,h)$ in a straightforward manner. For instance, a vector density of weights $(w,h)$ transforms as


$$ \tag{14} \xi^{\prime \mu}~=~ \frac{1}{J^h}\frac{\partial x^{\prime \mu}}{\partial x^{\nu}} \xi^{\nu} $$


under general coordinate transformations (5), and scales as


$$ \tag{15} \xi^{\prime \mu}~=~\Omega^w \xi^{\mu} $$


under Weyl transformations. Similarly, a co-vector density of weights $(w,h)$ transforms as


$$ \tag{16} \eta^{\prime}_{\mu}~=~ \frac{1}{J^h}\frac{\partial x^{\nu}}{\partial x^{\prime \mu}} \eta_{\nu} $$


under general coordinate transformations (5), and scales as


$$ \tag{17} \eta^{\prime}_{\mu}~=~\Omega^w \eta_{\mu} $$


under Weyl transformations. And so forth for arbitrary tensor densities $T^{\mu_1\ldots\mu_m}_{\nu_1\ldots\nu_n}$.


Example: The metric $g_{\mu\nu}$ is a tensor density with $h=0$ and $w=2$. The one-form $A_{\mu}$ is not a tensor density, cf. eq. (2).



VI) Finally, one can discuss the definition of covariantly conserved (conformal classes of) tensor densities $T^{\mu_1\ldots\mu_m}_{\nu_1\ldots\nu_n}$. A density $\rho$ of weights $(w,h)$ is covariantly conserved if


$$\tag{18} (\nabla_{\lambda}-\frac{w}{2}A_{\lambda})\rho~\equiv~ (\partial_{\lambda}-h \Gamma_{\lambda}-\frac{w}{2}A_{\lambda})\rho~=~0. $$


A vector density of weights $(w,h)$ is covariantly conserved if


$$\tag{19} (\nabla_{\lambda}-\frac{w}{2}A_{\lambda})\xi^{\mu}~\equiv~ (\partial_{\lambda}-h \Gamma_{\lambda}-\frac{w}{2}A_{\lambda})\xi^{\mu}+\Gamma_{\lambda\nu}^{\mu}\xi^{\nu} ~=~0. $$


A co-vector density of weights $(w,h)$ is covariantly conserved if


$$\tag{20}(\nabla_{\lambda}-\frac{w}{2}A_{\lambda})\eta_{\mu}~\equiv~ (\partial_{\lambda}-h \Gamma_{\lambda}-\frac{w}{2}A_{\lambda})\eta_{\mu}-\Gamma_{\lambda\mu}^{\nu}\eta_{\nu} ~=~0. $$


In particular, if $T^{\mu_1\ldots\mu_m}_{\nu_1\ldots\nu_n}$ is a tensor density of weights $(w,h)$, then the covariant derivative $(\nabla_{\lambda}-\frac{w}{2}A_{\lambda})T^{\mu_1\ldots\mu_m}_{\nu_1\ldots\nu_n}$ is also a tensor density of weights $(w,h)$.


--


$^{1}$ We ignore for simplicity the concept of locally defined conformal classes.


thermodynamics - How to combat the black-body temperature of an object?



I'm trying to model the temperature of a large spacecraft for a space colony simulation game I'm working on. In another question, I checked my calculations for the steady-state black-body temperature of an object, considering only insolation and radiation, and it appears I'm on the right track.


My understanding is that this black-body temperature formula works only for passive bodies with no active heating or cooling. Now I want to add active heating and cooling elements. But how?


For cooling, I think I can model radiators as simply increasing the surface area of the craft, with no significant change to insolation (since radiators are placed edge-on to the sun). Please correct me if I'm wrong on that.


For heating, I'm stumped. I can increase the amount of energy dumped into the system, by presuming a nuclear reactor or beamed power or some such, but when I try that, the effect is much smaller than I would expect. I end up having to dump many MW of power into a large craft just to raise it up to room temperature.


So I'm wondering: does it matter how the extra energy is used within the system? Is a kW poured into a big electrical space heater going to get things hotter than a kW spent twirling a beanie, and if so, how?


As a possibly related question, it's claimed that the greenhouse effect significantly raises the temperature of a planet -- for example, Venus's black-body temperature would be 330 K, but due to atmospheric warming, its actual surface temperature is 740 K (*). How is this possible? Isn't it Q_out = Q_in, no matter what? And however this works for Venus, can we do the same thing to warm our spacecraft?



Answer



OK, I think I've got it, thanks to your comments above as well as this link, which shows how to calculate the temperature of a solar oven. (My situation is very similar to a solar oven, except that the power dumped inside the craft is electrical -- but watts are watts, right?)


So, I believe that what I need to do is:




  1. Calculate the steady-state temperature of the outside of the craft, as described here, but considering only insolation (no internal energy dissipation).

  2. To calculate internal temperature, observe that P_out = P_in in the steady state, and then apply this critical formula: P_out = U A (T_in - T_out), which describes the power leaving as a function of U (the combined heat transfer coefficient of the walls), A (the area of the walls), and the temperature difference. Using P_out = P_in and solving for T_in, I get T_in = T_out + P_in / (U A).

  3. Now I need only plug in the power dissipated inside the craft, and use the skin temperature found in step 1 for T_out, and I can find T_in.


This all makes sense to me, but I'm obviously no physicist. If anybody sees a mistake here, please let me know!
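For what it's worth, here is a minimal numeric sketch of steps 1-3 (all numbers - solar distance, optical properties, areas, internal power and the $U$ value - are illustrative assumptions, not a real design):

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0         # solar constant at 1 AU, W/m^2 (assumed orbit)
    alpha, eps = 0.3, 0.9           # skin absorptivity / emissivity (assumptions)
    A_cross, A_rad = 100.0, 600.0   # sunlit cross-section / radiating area, m^2

    # Step 1: skin temperature from insolation alone:
    # alpha*S*A_cross = eps*SIGMA*A_rad*T_out^4
    T_out = (alpha * S * A_cross / (eps * SIGMA * A_rad)) ** 0.25

    # Steps 2-3: interior temperature from P_out = P_in = U*A*(T_in - T_out)
    P_in = 50e3                     # power dissipated inside, W (assumption)
    U, A_wall = 0.5, 600.0          # W m^-2 K^-1 and m^2 (assumptions)
    T_in = T_out + P_in / (U * A_wall)

    print(round(T_out), round(T_in))   # ~191 K outside, ~358 K inside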


Would an antimatter beam create multiple matter-antimatter explosions?


To expand upon the question - in an atmosphere, would a beam of pure antimatter (disregarding technical difficulties creating such a beam or passing it through a medium) interact with the matter in the atmosphere to annihilate each other and release energy in the form of a blast?


Just had a thought that if the counterparts of antimatter and matter - say hydrogen and anti-hydrogen or proton-antiproton met in such a way, would it create such an outcome?


Obviously, if it's a beam, then the density of particles in the beam matters a lot for the annihilation rate, and so does the density and composition of the atmosphere.




electricity - Why is the anode (+) in a device that consumes power & (-) in one that provides power?


I was trying to figure out the flow of electrons in a battery connected to a circuit. Conventionally, current flows from the (+) terminal to the (-) terminal of the battery. Realistically, it flows the other way round: from the (-) terminal to the (+) terminal. My question is, assuming electron flow is from the (-) terminal, would the battery's cathode be located at the (+) terminal and its anode at the (-) terminal, or would it be vice versa?


Another question: Why would the anode be positive in a device that consumes power and negative in a device that provides power?



Answer



Electric current is the rate of flow of electric charges across any cross-sectional area of a conductor. The direction of electric current is taken as the direction of flow of positive ions, or opposite to the direction of flow of free electrons. Your assumption is not necessary here... electrons always flow from the negative terminal to the positive terminal.


$$i=\frac{dq}{dt}$$


When current flows through an electrolytic solution, or during the process of electrolysis, the plate towards which positive ions (cations) flow is called the cathode and the plate towards which negative ions (anions) flow is called the anode.


Wikipedia says clearly,




In an electrochemical cell, the electrode at which electrons leave the cell and oxidation occurs is called the anode, and the electrode at which electrons enter the cell and reduction occurs is called the cathode. Each electrode may become either the anode or the cathode depending on the direction of current through the cell. A bipolar electrode is an electrode that functions as the anode of one cell and the cathode of another cell.



So, the convention is based entirely on our definition of the direction of current flow, namely that it always flows opposite to the direction of the electrons; the same electrode can be the cathode or the anode depending on the usage. And based on this, we drop the idea that the cathode should always be negative, etc...


Friday, 28 June 2019

particle physics - How can neutrinos oscillate though the lepton flavors have differing masses?


Since the total mass-energy for the neutrino presumably does not change when a neutrino changes lepton flavor, though the mass is different, what compensates for the gain or loss of mass? Does the propagation speed of the neutrino change?





Why do dark objects radiate thermal electromagnetic energy faster than light objects?


Kirchhoff's law of thermal radiation says that:



For a body of any arbitrary material, emitting and absorbing thermal electromagnetic radiation at every wavelength in thermodynamic equilibrium, the ratio of its emissive power to its dimensionless coefficient of absorption is equal to a universal function only of radiative wavelength and temperature, the perfect black-body emissive power.




I can imagine why dark objects have higher absorption of electromagnetic radiation: The darker the object is the less radiation it reflects back. An ideal black body would absorb all the incident electromagnetic radiation.


Is there a similarly simple and intuitive explanation of why dark objects emit thermal electromagnetic radiation faster than light objects, and why Kirchhoff's law is valid? For me it is not intuitive at all, and I was not able to find any simple explanation.



Answer



Generally speaking solids absorb light by converting the EM radiation to lattice vibrations (i.e. heat). The incident light causes electrons in the solid to oscillate, but if there is no way for electrons to dissipate the energy then electrons will simply reradiate the light and the light is reflected.


In metals the transfer of energy from oscillations of the conduction electrons to lattice vibrations is slow, so the light is mostly reflected. By contrast in graphite the light is absorbed by exciting $\pi$ electrons, and the excited orbitals efficiently transfer energy to the bulk so the light is mostly absorbed.


But as dmckee says in his comment, the microscopic physics is reversible. If it's hard for oscillating electrons to transfer energy to bulk lattice vibrations then it's equally hard for those lattice vibrations to transfer energy back to the electrons and hence back out as light. So a shiny metal will be equally bad at absorbing and emitting light.


Similarly, in graphite if coupling of $\pi$ orbitals to lattice vibrations is efficient then energy flows equally fast both ways, and graphite will be equally good at absorbing and emitting light.


In practice black body radiation is a mish mash of all sorts of different mechanisms, and the two cases I've mentioned are just examples. However in all cases when you look in detail at how energy is being transferred you'll find it's a reversible process and the energy flows equally fast in both directions.


physical chemistry - Why does water expand when it freezes?



I'm sure this is for most of you a basic question, but it really puzzles me:


How is it that, even though all materials expand as they get warmer and contract (maybe these are not the correct terms) when they get colder, water expands when it freezes?


Thanks a lot.



Answer



The expansion upon freezing comes from the fact that water crystallizes into an open hexagonal form. This hexagonal lattice contains more space than the liquid state.


speed of light - Maximum wavelength of a photon/electromagnetic radiation?




  1. This asked: What is the minimum wavelength of electromagnetic radiation?




  2. And also this: What is the maximum possible frequency and wavelength?




The second question is contradictory; maximum frequency -> minimum wavelength.



I am asking the very opposite;


What is the minimum frequency and maximum wavelength of electromagnetic radiation?


The lowest measured/defined seems to be 3 Hz (ELF waves), which corresponds to a wavelength of $c/(3\ \mathrm{Hz})$, i.e. ~100 000 000 m.


But this can't be the physical limit for the wavelength.
Does such a physical limit for the wavelength exist? (A limit similar to what the speed of light is for velocity.)



Answer



There is no theoretical physical limit on the wavelength, though there are some practical limits on the generation of very long wavelengths and their detection.


To generate a long wavelength requires an aerial of roughly one wavelength in size. The accelerated expansion of the universe due to dark energy means the size of the observable universe is tending to a constant, and that will presumably make it hard to generate any wavelengths longer than this size.


As for detection, we tend to measure the change in the electric field associated with an EM wave not its absolute value. As frequencies get lower we will need either increased intensity waves or ever more sensitive equipment. Both of these have practical limits, though I hesitate to speculate what they are.


newtonian gravity - rotational oblateness


I am trying to compute the amount of oblateness that is caused by planetary rotation. I picture the force of gravity added to the centrifugal force caused by the rotation of the planet as follows:


[Figure: the forces (gravity plus centrifugal) acting on a point at latitude $\phi$]


That is, at the point in question, at latitude $\phi$, the distance from the axis of rotation is $r\cos(\phi)$. Thus, the centrifugal force would be $\omega^2r\cos(\phi)$ in a direction perpendicular to the axis of rotation. The radial and tangential components would be $\omega^2r\cos^2(\phi)$ and $\omega^2r\cos(\phi)\sin(\phi)$, respectively.


My assumption is that the surface of the planet would adjust so that it would be perpendicular to the effective $g$; that is, the sum of the gravitational and centrifugal forces. This would lead to the equation $$ \frac{\mathrm{d}r}{r\,\mathrm{d}\phi}=-\frac{\omega^2r\cos(\phi)\sin(\phi)}{g-\omega^2r\cos^2(\phi)} $$ We can make several assumptions here, and I will assume that $\omega^2r$ is small compared to $g$. Thus, we get $$ \int_{\text{eq}}^{\text{np}}\frac{\mathrm{d}r}{r^2} =-\frac{\omega^2}{g}\int_0^{\pi/2}\cos(\phi)\sin(\phi)\,\mathrm{d}\phi $$ which leads to $$ \frac1{r_{\text{np}}}-\frac1{r_{\text{eq}}} =\frac{\omega^2}{2g} $$ and $$ 1-\frac{r_{\text{np}}}{r_{\text{eq}}} =\frac{\omega^2r_{\text{np}}}{2g} $$ However, numerical evaluation and Wikipedia seem to indicate that this should be twice what I am getting. That is, $$ 1-\frac{r_{\text{np}}}{r_{\text{eq}}} =\frac{\omega^2r^3}{Gm} =\frac{\omega^2r}{g} $$ What am I doing wrong?
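For reference, plugging Earth's numbers into both expressions (a quick numeric check, not a resolution of the question):

    omega = 7.292e-5   # Earth's rotation rate, rad/s
    r = 6.371e6        # mean radius, m
    g = 9.81           # surface gravity, m/s^2

    f_derived = omega**2 * r / (2 * g)   # the result derived above
    f_wiki = omega**2 * r / g            # the Wikipedia-style estimate

    print(1 / f_derived)   # ~579, i.e. flattening ~1/579
    print(1 / f_wiki)      # ~290, much closer to the observed ~1/298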





quantum mechanics - Do we always ignore zero energy solutions to the (one dimensional) Schrödinger equation?


When we solve the Schrödinger equation on an infinite domain with a given potential $U$, much of the time the lowest possible energy for a solution corresponds to a non-zero energy. For example, for the simple harmonic oscillator with frequency $\omega$, the possible energies are $\hbar\omega(n+\frac12)$ for $n =0,1,\dots$ . Some of the time, solutions with zero energy are possible mathematically, but boundary conditions would mean that such solutions would be everywhere zero, and hence the probability of finding a particle anywhere would be zero (for example, an infinite potential well).


However, when solving the Schrödinger equation for a particle moving freely on a circle of length $2\pi$ with periodic boundary conditions $\Psi(0,t)=\Psi(2\pi,t)$ and $\frac{\partial\Psi}{\partial x}(0,t)=\frac{\partial\Psi}{\partial x}(2\pi,t)$, I have found a (normalised) solution $\Psi(x,t)=\frac1{\sqrt{2\pi}}$ with corresponding energy $0$. I can't find a way to discount this mathematically, and it seems to make sense physically as well. Is this a valid solution, and so is it sometimes allowable to have solutions with $0$ energy? Or is there something I'm missing?



Answer



In my view, the important question to answer here is a special case of the more general question



Given a space $M$, what are the physically allowable wavefunctions for a particle moving on $M$?




Aside from issues of smoothness of wavefunctions (which can be tricky; consider the Dirac delta potential well on the real line, for example), as far as I can tell there are precisely two other conditions that one needs to consider:




  1. Does the wavefunction in question satisfy the desired boundary conditions?




  2. Is the wavefunction in question square integrable?




If a wavefunction satisfies these properties, then I would be inclined to assert that it is physically allowable.



In your case where $M$ is the circle $S^1$, the constant solution is smooth, satisfies the appropriate conditions to be a function on the circle (periodicity), and is square integrable, so it is a physically allowed state. It also happens to be an eigenvector of the Hamiltonian operator with zero eigenvalue; there's nothing wrong with a state having zero energy.


Thursday, 27 June 2019

quantum mechanics - What makes matter "solid"?


I'm a non-physicist with a basic high-school understanding of physics. I've always wondered what it is that makes things "solid". Why do molecules rebound from each other? There's just a bunch of tiny atoms there with (comparatively) large spaces between them. Why don't they just "slide" around or between each other?



Answer



What makes it solid is a combination of the uncertainty principle and Pauli's exclusion principle.


According to the uncertainty principle, electrons can't have a well-defined position if they have a sufficiently well-defined momentum (mass times velocity). For the energy of electrons to be low enough, the momentum also has to be low enough. That also means that the uncertainty of the momentum has to be low enough.



The uncertainty principle then implies that the uncertainty of the position of the electron has to be large enough, at least 0.1 nanometers or so - the atomic radius - for the kinetic energy to be smaller than a few electronvolts, a decent amount of energy that is comparable to the potential energy of electrons near protons if the electron cloud is similarly large.


Pauli's exclusion principle then guarantees that in each volume comparable to the volume of an atom, there can be at most 1 (or 2) electrons. That's why matter is impenetrable.


By the way, to derive the actual maximum density of electrons, one also needs to know the strength of the electrostatic attraction between the electrons and the protons that neutralize the charge. The Bohr radius goes like $1/\alpha$ - the inverse fine-structure constant - so if the electric force were stronger, matter could actually become denser.


This was an explanation why matter based on nuclei and electrons can't really significantly exceed the density of ordinary materials. Still, there are different phases. In gases, the molecules are separated by big gaps - so most of the space is empty and the exclusion principle is not too important. For liquids, the distance between the molecules is near the saturation point - like dense gases - but they still don't keep the shape.


Glass is an example of a liquid, in some sense, that however behaves almost like a solid. The canonical solids prefer to be crystals - like metals or diamond. In that case, it's energetically favored for the atoms or molecules to be organized into regular cubic or hexagonal or similar lattices. They like to keep this shape because it saves energy. It's still true that such solids are impenetrable because of the explanations at the top.


string theory - What is a D-brane?


I know that in string theory, D-branes are objects on which open strings are attached with Dirichlet boundary conditions. But what exactly is a brane? Are they equally fundamental objects like string? If so then do they also vibrate? If the visible universe itself is not a brane then what is the dynamics of these branes within the universe? Do individual D-Branes interact, collide? Can an open string tear itself off from the D-brane? If so what are the results?



Answer



Branes are (usually) extended objects; $p$-branes are objects with $p$ spatial dimensions.



D-branes are a special and important subset of branes defined by the condition that fundamental strings can end on the D-branes. This is literally the technical definition of D-branes and it turns out that this simple fact determines all of the properties of D-branes.


Perturbatively, fundamental strings are more fundamental than branes or any other objects. In that old-fashioned description, D-branes are "solitons" - configurations of classical fields that arise from the closed strings. They are analogous to magnetic monopoles - which may also be written as classical configurations of the "more fundamental fields" in field theory. In a similar way, D-branes' masses diverge for $g\to 0$.


Non-perturbatively, D-branes and other branes are equally fundamental as strings. In fact, when $g$ is sent to infinity, some D-branes may become the lightest objects - usually strings of a dual (S-dual) theory. When we include very strongly coupled regimes (high values of the string coupling constant $g$), there is a brane democracy.


Back to the perturbative realm. The condition that open strings can end on D-branes - and nowhere else - means that there exists a particular spectrum of open strings stretched between such D-branes. By quantizing these open strings, we obtain all the fields that propagate along (and in between) such D-branes. The usual methods (world sheets of all topologies, now allowing boundaries) allow us to calculate all the interactions of these modes, too.


So yes, D-branes also vibrate. But because their tension goes to infinity for $g\to 0$, you need even more energy to excite these vibrations than for strings. The quanta of these vibrations are particles identified with open strings - that move along these D-branes but are stuck on them. The insight that the D-branes are dynamical and may vibrate, and the insight that they carry Ramond-Ramond charges (generalizations of the electromagnetic field one obtains from superstrings all of whose RNS fermionic fields are periodic on the world sheet) were the main insights of Joe Polchinski in 1995 that made D-branes essential players and helped to drive the second superstring revolution.


Other branes typically have qualitatively similar properties as D-branes but one must use different methods to determine these properties.


When we quantize a D-brane, we find open string states which are scalars corresponding to the transverse positions. It follows that D-branes may be embedded into the spacetime - in any way. The shape oscillates according to a generalized wave equation again. Also, all D-branes carry electromagnetic fields $F_{\mu\nu}$ in them. These fields are excited by the endpoints of the open strings that behave as quarks (or antiquarks). For a stack of $N$ coincident branes, the gauge group gets promoted to $U(N)$. The electric flux inside the D-branes may be viewed as a "fuzzy" continuation of the open strings that completes them to "de facto closed strings".


Those fields have superpartners in the case of the supersymmetric D-branes which are stable and the most important ones, of course. D-branes may collide and interact much like all other objects.


The most appropriate interaction that allows the open strings to "disconnect" from D-branes is the event in which two end points (of the opposite type, if the open strings are oriented) collide. Much like a quark and antiquark, these two endpoints may annihilate. In this process, an open string may become a closed string - which may escape away from the D-brane. The same local process of "annihilation of the endpoints" may also merge two open strings into one. Such interactions are the elementary explanations of all the interactions between the fields produced by the open strings - for example between the transverse scalars and the electromagnetic fields within the D-brane.


Aside from that, some branes may also be open branes, and end on another kind of branes. The latter brane always includes some generalized electromagnetic fields that are sourced by the endpoints or end curves or whatever is the $(p-1)$-dimensional geometry of the boundary of the former brane.



How is matter stored in black hole?


A black hole engulfs nearby matter; how does it store the mass inside? We know that matter is composed of particles. Does a black hole store the mass in massive particles? Or can we assume that it's composed of extremal black holes? What are the theories about it?



EDIT: I checked out "Black holes are in which state of matter?", but it doesn't provide a definite answer.



Answer



A black hole is created by stellar core collapse. Its event horizon (if that exists - Hawking) appears. By what mechanism and path, as viewed externally, can accreted matter enter the black hole proper? It externally appears to be asymptotically stuck within the event horizon.


The inside of a black hole is speculative, certainly at the singularity (if there is one), for the unknown local geometry of spacetime. Adding mass to a black hole linearly increases its external (Schwarzschild) radius, $r = 2.95\,(M/M_{\odot})$ km. The external volume increases as the cube of the external radius, implying that galactic-core supermassive black holes have average densities approaching zero. NGC 4889's central black hole is $2.1\times10^{10}$ solar masses. The Sun's average density is 1409 kg/m^3. Calculate NGC 4889's average external density.
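The calculation the answer invites, as a short sketch (Schwarzschild radius $r = 2.95\,(M/M_\odot)$ km; "external density" meaning $M$ divided by the volume enclosed by the horizon):

    import math

    M_sun = 1.989e30             # kg
    M = 2.1e10 * M_sun           # NGC 4889's central black hole
    r = 2.95e3 * 2.1e10          # horizon radius in metres
    rho = M / (4 / 3 * math.pi * r**3)
    print(rho)                   # ~0.04 kg/m^3, a few percent of sea-level air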


Consider a Kerr (rotating) black hole. If mass resides in a central zero-dimensional singularity, what sources the external angular momentum? $L= r × mv$. Black hole interiors are not well-defined.


statistical mechanics - Entropy of an ideal gas in $Tto 0$ limit


After deriving the entropy of an ideal gas we get to : $$S = Nk \left[\ln(V) + \frac{3}{2}\ln(T) + \frac{3}{2}\ln\left(\frac{2\pi mk}{h^2}\right) - \ln(N) + \frac{5}{2} \right]$$


In the zero-temperature limit we expect to have $S=0$; however, the formula diverges to $-\infty$ (through the $\ln(T)$ term). How can we overcome this mathematical inconsistency?
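A short numeric illustration of the divergence (helium atoms, one mole in $1\ \mathrm{m^3}$; the numbers are only illustrative): the classical formula goes negative and then to $-\infty$ as $T \to 0$, signalling that it is simply not valid in that regime.

    import numpy as np

    k, h = 1.381e-23, 6.626e-34   # Boltzmann and Planck constants, SI
    m = 6.646e-27                 # mass of a helium atom, kg
    N, V = 6.022e23, 1.0          # one mole in one cubic metre (assumption)

    def S_over_Nk(T):
        return (np.log(V) + 1.5*np.log(T) + 1.5*np.log(2*np.pi*m*k/h**2)
                - np.log(N) + 2.5)

    for T in [300.0, 1.0, 1e-4, 1e-8]:
        print(T, S_over_Nk(T))    # ~18.9 at 300 K, negative well before T = 0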




newtonian mechanics - Contradiction on gravitational potential energy


I was reading the derivation of the gravitational energy of a point mass and I seem to have found a contradiction.


The derivation in my textbook is given as follows:-


Let there be a large fixed mass 'M' and a small mass 'm' placed at a distance of $r_1$ from mass M.


Now let there be some external force $F_{ext}$ that displaces mass m from $r_1$ to $r_2$.


Now according to the work-energy theorem, we will have the following equation:- $$K_2 - K_1 = W_g + W_{ext}$$ If we make sure that the kinetic energy of the system does not increase:- $$0 = W_g + W_{ext}$$ $$W_g = -W_{ext}$$ The change in gravitational potential energy of the system is equal to the negative of the work done by the gravitational force. $$ U(r_2) - U(r_1) = -W_g$$ Now let $r_1 = r$ and $r_2 = r + dr$, where $dr$ is an infinitesimally small distance. $$ U(r_2) - U(r_1) = -\int_{r_1}^{r_2} \vec F_g\cdot d\vec r$$ Since the gravitational force and the displacement are in opposite directions:- $$ U(r_2) - U(r_1) = -\int_{r_1}^{r_2} F_g\,dr\cos(\pi)$$ $$ U(r_2) - U(r_1) = \int_{r_1}^{r_2} F_g\,dr$$ $$ U(r_2) - U(r_1) = \int_{r_1}^{r_2} \frac{GMm}{r^2}\,dr$$ $$ U(r_2) - U(r_1) = GMm\left(\frac{1}{r_1}-\frac{1}{r_2}\right)$$ Now let $r_1$ = $\infty$ and $r_2 = r$ $$ U(r) - U(\infty) = GMm\left(\frac{1}{\infty}-\frac{1}{r}\right)$$ $$ U(r) - U(\infty) = GMm\left(0-\frac{1}{r}\right)$$ Now we will assume that the potential at infinity is zero. $$ U(r) = -\frac{GMm}{r}$$



My query:-


But if we let $r_1 = \infty$ and $r_2 = r$, $r_1$ will be greater than $r_2$ (since $\infty>r$). Our original assumption was that the mass m was moving away from the mass M, and that's why the dot product of gravitational force and displacement was negative. But now we are assuming it is moving towards mass M (from infinity to a separation of r). This looks like a contradiction to me.
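For what it's worth, a small sympy check (my own illustration, not from the textbook) that keeping the limits inside the definite integral handles the direction of travel automatically, so no extra sign needs to be chosen by hand when $r_1 > r_2$:

    import sympy as sp

    G, M, m, r = sp.symbols('G M m r', positive=True)
    r1, r2 = sp.symbols('r_1 r_2', positive=True)

    dU = sp.integrate(G*M*m/r**2, (r, r1, r2))   # U(r2) - U(r1)
    print(sp.simplify(dU))                       # G*M*m*(1/r_1 - 1/r_2)

    # r1 -> oo, r2 -> r reproduces U(r) = -G*M*m/r, with no hand-inserted signs
    print(sp.limit(dU.subs(r2, r), r1, sp.oo))   # -G*M*m/r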




electromagnetism - Fermi level alignment and electrochemical potential between two metals


I'm trying to get a more intuitive/physical grasp of the Fermi level, like I have of electric potential. I know that, for just a single piece of metal in equilibrium, you have to have the electric potential the same at all points, because if you didn't, that would mean there's necessarily an electric field between the two points, which would put a force on electrons, and move them until the field is gone.


But I can't understand the electrochemical potential in the same way. I've been looking at this picture to try and understand what happens when two metals with different work functions are put in contact with each other:


[Figure: Fermi levels of two metals with different work functions, (a) before and (b) after contact]


In (b), the chemical potential in the two metals have to be aligned, so for this to happen, a small number of electrons flow from the metal with the bigger work function to the one with the smaller (could you say this intuitively makes sense, because the WF is a measure of how hard it is for electrons to escape the material, so they'd rather be in a material where it's easier to escape?).


This is where my confusion is. First of all, I don't really know of a good intuitive reason why the chemical potentials even have to be the same in the two materials (like the way I mentioned above for the voltage in a regular, single piece of metal. I don't have any sort of sense like that, here).


Right now, it kind of seems like the chemical potential has some sort of "priority" -- when these two metals come into contact, why do their chemical potentials match up, and their electric potentials separate, rather than vice versa, or more intuitive to me, some equilibrium in between where they're both off? (but now I'm more confused -- according to wiki, it seems like the latter does happen, but I don't really know why. Is that a thermodynamic potential trying to minimize?)


My second point of confusion is the current that passes between and voltage that there now is across the two pieces of metal, from a practical standpoint. Wiki (above) and Ashcroft and Mermin:


[Figure: quoted passage from Ashcroft & Mermin on the contact potential]



both say that there is this 'contact' potential across the metals now, but that it only drives current momentarily, and a tiny amount. So if I were to apply an increasing voltage across the metals (in either/both biases), would I need to hit a "threshold" voltage before which there would be no current flowing? How would Ohm's law work here, or would it? Naively, without applying any external bias, it seems like you have $V \neq 0$,$R \neq 0$, but $I = 0$. So should you expect anything funky when you apply external potential?


Thank you!




biophysics - What would be walking speed in low gravity?


In $1g$ the average adult human walks 4-5 km in an hour. How fast would such a human walk in a low gravity environment such as on the Moon $(0.17g)$ or Titan $(0.14g)$?


Let's ignore the effects of uneven terrain (regolith or ice/snow/soot); suppose our human walks on hardened pavement.



Answer



This article suggests that the walking speed in lower-$g$ environments is indeed less than in $1g$ environments.


The issue at hand is the work done in raising one's leg in order to move forwards and the loss of energy due to the motion. Quoting the article,



During a walking step, in contrast [to the running step], the centre of mass of the body is lowered during the forward acceleration and raised during the forward deceleration. Therefore the kinetic energy loss can be transformed into a potential energy increase: $ΔE_p = MgS_v$, where $g$ is the acceleration of gravity and $S_v$ is the vertical displacement of the centre of mass within each step.




The potential energy change must be equated to the change in kinetic energy, $$ \Delta E=\frac12M\left(v_2^2-v_1^2\right) $$ where $v_2$ is the maximal velocity of the body in the step and $v_1$ the minimal velocity of the step. If we equate the two changes in energy and assume some median velocity of the body during the step, then $$ v_{med}\sim\sqrt{2gS_v} $$ Since $g$ decreases on the lighter bodies, $v_{med}$ would necessarily decrease as well.
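Taking the scaling at face value (same step geometry, so $S_v$ unchanged), walking speed goes as $\sqrt{g}$. A rough numeric sketch with an assumed 4.5 km/h Earth pace:

    import math

    v_earth = 4.5   # typical adult walking speed on Earth, km/h (assumption)
    for name, gfrac in [("Moon", 0.17), ("Titan", 0.14)]:
        print(name, round(v_earth * math.sqrt(gfrac), 1), "km/h")
    # Moon ~1.9 km/h, Titan ~1.7 km/h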


Wednesday, 26 June 2019

fluid dynamics - What does $(\mathbf{u}\cdot\nabla)\mathbf{u}$ mean in the Navier-Stokes equation?


I am studying the Navier-Stokes equations and I have the equation in the form: $$\rho \dfrac{\partial{\mathbf{u}}}{\partial{t}} + \rho (\mathbf{u}\cdot\nabla)\mathbf{u} - \mu\nabla^2\mathbf{u} + \nabla p = \rho f$$



Can someone explain to me what $ (\mathbf{u}\cdot\nabla)\mathbf{u}$ means here, mathematically (generally)?
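Written out in components, $[(\mathbf{u}\cdot\nabla)\mathbf{u}]_i = \sum_j u_j\,\partial u_i/\partial x_j$: each velocity component is differentiated along the flow. A small sympy sketch with an arbitrary, made-up 2D velocity field:

    import sympy as sp

    x, y = sp.symbols('x y')
    u = sp.Matrix([x**2 * y, sp.sin(x) + y])   # an arbitrary example field (assumption)

    conv = sp.Matrix([u[0]*sp.diff(u[i], x) + u[1]*sp.diff(u[i], y)
                      for i in range(2)])
    print(conv)   # (u . grad) applied to each velocity component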




spacetime - Non-existence of double time-derivative of fields in the Lagrangian and violation of equal footing of space and time


In classical field theory, we consider Lagrangians with a single time derivative of the fields, whereas a double derivative of the field w.r.t. space is sometimes allowed. I understand that the reason for abandoning the 2nd-order time derivative of the fields is that we require two initial conditions: one is that of the field and the second is that of the momentum of the field.


What I don't understand is what is the problem with specifying the two initial conditions?


Also, while moving over to QFT from the classical description, how come the above-mentioned discrimination of time derivatives over space derivatives does not contradict the notion of putting space and time on equal footing?




Answer



Metric signature convention: $(+---)$.


First, note that physical dynamics is ultimately decided by the equations of motion, which you get from the Lagrangian $\mathcal{L}$ after using the least action principle. The kinetic term in a $1$-derivative (before integration by parts) field theory goes like $\mathcal{L} \sim \partial_\mu \phi \partial^\mu \phi \sim -\phi \square \phi$ whose equations of motion are $\square \phi + \cdots = 0$. This is a second order differential equation and so needs two initial conditions if you want to simulate the system.


The reason why people get nervous when they see higher derivatives in Lagrangians is that they typically lead to ghosts: wrong-sign kinetic terms, which typically leads to instabilities of the system. Before going to field theory, in classical mechanics, the Ostrogradsky instability says that non-degenerate Lagrangians with higher than first order time derivatives lead to a Hamiltonian $\mathcal{H}$ with one of the conjugate momenta occurring linearly in $\mathcal{H}$. This makes $\mathcal{H}$ unbounded from below. In field theory, kinetic terms like $\mathcal{L} \sim \square \phi (\square+m^2) \phi$ are bad because they lead to negative energies/vacuum instability/loss of unitarity. It has a propagator that goes like $$ \sim \frac{1}{k^2} - \frac{1}{k^2-m^2}$$


where the massive degree of freedom has a wrong sign. Actually, in a free theory, you can have higher derivatives in $\mathcal{L}$ and be fine with it. You won't 'see' the effect of having unbounded energies until you let your ghost-like system interact with a healthy sector. Then, a ghost system with Hamiltonian unbounded from below will interact with a healthy system with Hamiltonian bounded from below. Energy and momentum conservation do not prevent them from exchanging energy with each other indefinitely, leading to instabilities. In a quantum field theory, things get bad from the get-go because (if your theory has a healthy sector, like our real world) the vacuum is itself unstable and nothing prevents it from decaying into a pair of ghosts and photons, for instance.
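A quick partial-fraction check of that propagator structure (sympy, with $k^2$ treated as a single symbol and the overall normalization dropped):

    import sympy as sp

    k2, m = sp.symbols('k2 m', positive=True)   # k2 stands for k^2

    prop = 1 / (k2 * (k2 - m**2))               # from L ~ box(box + m^2)phi, up to signs
    print(sp.apart(prop, k2))
    # -> 1/(m**2*(k2 - m**2)) - 1/(m**2*k2): the relative minus sign
    # between the two poles is the ghost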


This problem of ghosts is in addition to the general consternation one has when they are required to provide many initial conditions to deal with the initial value problem.


Also, in certain effective field theories, you can get wrong-sign spatial gradients $ \mathcal L \sim \dot{\phi}^2 + (\nabla \phi)^2$. (Note that Lorentz invariance is broken here). These lead to gradient instabilities.


particle physics - Strong force between quarks that are out of causal contact


This is a rather artificial scenario, but it has been bugging me lately.


Background


Due to the confinement in QCD, quarks are bound in color-neutral configurations. Any attempt to separate a quark from this bound state costs so much energy that it's enough to pair-produce new quarks, hence the quark-jets in accelerator experiments.


Setup


I'm now considering the reversed (hypothetical) scenario. Assume that you initially have two quarks (up and anti-up, for instance) that are placed far away from each other. By far I here mean further than any other length scale in QCD. Now, let the two quarks approach each other, as in a scattering experiment.


Question



At what distance do the two quarks start to interact, and what happens? Since the strong force is confining, the interaction should be stronger the further away the quarks are, but they cannot interact outside of their causal cones, so how does this work at really long distances?


My thoughts


I'm imagining that the "free" quarks are in a metastable state and the true ground state is the one where several pairs of quarks have pair-produced to bind with the two initial quarks. Thus the closer the two initial quarks are, the smaller the energy barrier between the metastable and the true ground state becomes. Thus at some separation $r$ there is a characteristic time-scale before pair-production occurs.




electromagnetism - Is there a limitation on Gauss' law?



Recently I had a question to find the electric field at a distance $R$ from the origin, where all of space is filled with charge of density $\rho$. I did this by assuming a Gaussian surface of radius $R$. The charge outside won't affect the field, so I calculated the field as:



$$\left|\,\vec E\,\right| = \frac{\rho R}{3\varepsilon} \tag{1}$$


I was satisfied with my solution, until a thought struck me: as the space is infinite, for an infinitesimal charge producing a field $\vec {E_1}$ there will be another charge producing $-\vec {E_1}$, so the resultant field should be zero. This brings me to my first question: is Gauss' law always valid, or does it have some limitation?



Answer



Gauss's law is always fine. It is one of the tenets of electromagnetism, as one of Maxwell's equations, and as far as we can tell they always agree with experiment.


The problem you've uncovered is simply that "a uniform charge density of infinite extent" is not actually physically possible, and it turns out that (i) it is not possible to express it as the limit of a sequence of sensible physical situations, and (ii) it is not possible to provide a proper mathematical formalization for it. It's a bit of a bummer, because you can do this perfectly with infinite line and surface charges, but bulk charges just don't work like this.


This might seem a bit strange (and, really, it should), so let's take another look at what you mean when you say "space is filled" with charge of density $\rho$. Could you implement this in real life? Of course not! You can only fill up some finite volume $V$. Your hope then is that as $V$ gets bigger and bigger, the field inside it stabilizes to some sort of limit.


The problem is that for this procedure to make sense, you need the limiting procedure to be independent of the detailed shape of $V$ as you scale it up: surely, if you're in the centre of the volume and the field has mostly converged, the answer can't depend on the details of a boundary that's very far away.


For a line and a surface charge, this works perfectly. You can calculate the field for a finite line charge, and the limit doesn't depend on which end goes to infinity faster as long as they both do. You can also prove that the field of increasing patches of surface charge does not depend too much on the shape of the patches if they are big enough. For bulk charges, though, you've just proved that it doesn't work: if you displace $V$, you get a different answer. Hence, the problem for an infinite spread of bulk charge doesn't make sense, and it's not the limit of sensible physical systems that are "big enough".
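A minimal numerical sketch of that last point, using the standard result that the field inside a uniformly charged ball centred at $\mathbf c$ is $\rho(\mathbf r - \mathbf c)/3\varepsilon_0$ regardless of the ball's radius (units and numbers here are made up):

```python
# Sketch: the field at a fixed point depends on where the charged ball is
# centred. Inside a uniformly charged ball centred at c, the field is
# E(r) = rho * (r - c) / (3 * eps0), whatever the ball's radius, so two
# enormous balls that both contain the point give different answers.
rho, eps0 = 1.0, 1.0                   # made-up units
point = (0.0, 0.0, 0.0)                # where we evaluate the field

for centre in [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]:
    E = tuple(rho * (p - c) / (3 * eps0) for p, c in zip(point, centre))
    print(f"ball centred at {centre}: E at the point = {E}")
# (0, 0, 0) versus (-5/3, 0, 0): no position-independent limit exists.
```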


Another way of showing that the "big enough" property doesn't make sense is that there is nothing to compare the charge's size with. For line and surface charges, this is perfectly fine, and in fact all they are is models for a finite line / surface charge of length / radius $L$, whose field is tested at a point a distance $d$ from the charge. The distribution is "infinite" if $L/d\gg 1$, or in other words the models are good if the point is much nearer to the source than the source's size. For a bulk charge, there's no meaningful distance $d$, and hence no meaningful dimensionless parameter to take a limit over, and this in turn is what drives the meaninglessness of the situation.





Finally, let me put this a bit more mathematically, in a way that sort of makes it have an answer. Another way to phrase the problem "space is filled with a uniform bulk charge of density $\rho_0$" is as the simple differential equation $$\nabla\cdot \mathbf E=\rho_0/\epsilon_0.$$ This is a perfectly reasonable question to ask, except that you're missing boundary conditions, so the solution won't be (anywhere near) unique. However, boundary conditions don't make sense if your domain is all of space, so you need something else, and what turns out to do the job is to demand answers which share the symmetry properties of the charge - both the translation symmetries and all the point symmetries.


For line and surface charges, this actually works almost perfectly. The coupled symmetry and differential equation demands have, fortunately, unique solutions: the translation symmetries and the differential equation rule out everything except uniform fields, which are then ruled out by the point symmetries.


For a bulk charge, on the other hand, you get a fundamental linear dependence and a uniform field, which cannot be ruled out by the translational symmetry: $$\mathbf E=\frac{\rho_0}{3\epsilon_0}\mathbf r + \mathbf E_0=\frac{\rho_0}{3\epsilon_0}(\mathbf r-\mathbf r_0).$$ This form is sort of translation invariant, except that now you have to re-choose $\mathbf r_0$ every time you translate, which can't be quite right. And if you try to impose any point symmetries, you'll need to put $\mathbf r_0$ at every point with an inversion symmetry - and there you lose out, because it cannot be done.




To rephrase this last bit in your terms, the inversion symmetry requires that the field be zero at every point, but this is not consistent with the differential equation. You always have "infinitesimal" bits of charge at $\vec r$ and $-\vec r$ producing infinitesimal bits of field which cancel each other out, so the field should be zero at every point. This is indeed inconsistent with Gauss's law - but you can simply chalk it up to the fact that the problem is inconsistent.


electrostatics - Why is there no permittivity-type constant for gravitation?


When I look at electric or magnetic fields, each of them has a constant that defines how a field affects, or is affected by, a medium. For example, electric fields in vacuum have the permittivity constant $\epsilon_0$ embedded in the electric field expression of a point charge: $E = q/4\pi \epsilon_0 r^2$. However, if I put this point charge in some dielectric that has a different permittivity constant $\epsilon$, the value of the electric field changes. On a similar note, magnetic fields behave very similarly but have the permeability constant $\mu_0$ instead.
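A quick numerical illustration of that dependence (the charge, distance, and $\epsilon_r = 80$, roughly water, are made-up example values):

```python
import math

# Field of a point charge at a fixed distance, in vacuum and in a dielectric.
# The charge (1 nC), distance (1 cm) and eps_r = 80 (roughly water) are
# made-up illustrative values.
q, r = 1e-9, 0.01            # C, m
eps0 = 8.854e-12             # F/m
for eps_r in [1.0, 80.0]:
    E = q / (4 * math.pi * eps0 * eps_r * r ** 2)
    print(f"eps_r = {eps_r:5.1f}:  E = {E:.3e} V/m")
# Same charge, same distance: the field is 80 times weaker in the dielectric.
```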


From my understanding, this is not the case for gravitational fields, since the universal gravitational constant $G$ is considered to be a fundamental constant. So I am assuming that even though gravitational fields do operate in different types of media, this somehow doesn't affect the gravitational field value. My question is: why is this the case, that is, why isn't there a permittivity-type constant for gravitation?



Answer



Permittivity $\varepsilon$ is what characterizes the amount of polarization $\mathbf{P}$ which occurs when an external electric field $\mathbf{E}$ is applied to a certain dielectric medium. The relation between the quantities is

$$\mathbf{P}=\varepsilon_0\chi_e\mathbf{E}, \qquad \varepsilon=\varepsilon_0(1+\chi_e),$$

where the susceptibility $\chi_e$ (and hence the permittivity) can also be a (rank-two) tensor: this is the case in an anisotropic material.


But what does it mean for a medium to be polarized? It means that it contains electric dipoles: units of both negative and positive charge. But this already gives us the answer to the original question:


There are no opposite charges in gravitation, there is only one kind, namely mass, which can only be positive. Therefore there are no dipoles and no concept of polarizability. Thus, there is also no permittivity in gravitation.



velocity - Measuring more accurately the distance of remote galaxies


From what I read in Wikipedia, the velocity of a galaxy has two components: one is due to Hubble's law for cosmic expansion, and the other is the peculiar velocity of the galaxy.


Since the peculiar velocity of galaxies can be over 1,000 km/s in a random direction, this causes an error in evaluating their distance using Hubble's law (I am summarizing from Wikipedia).



A more accurate estimate can be made by taking the average velocity of a group of galaxies: the peculiar velocities, assumed to be essentially random, will cancel each other, leaving a much more accurate measurement.



I assume that "group of galaxies" actually means the cosmic structure by that name, rather than just any collection of galaxies that seem to be in "the same neighborhood", though the Wikipedia text does not explicitly reference that, as it usually would. But I will ignore that issue.


The major problem that I see is that the speed of celestial structures with respect to their "surroundings" seems to grow with their size: 30 km/s for the Earth, 200 km/s for the Sun, 600 km/s for the Milky Way, and generally up to 1,000 km/s and more for galaxies.


So I would expect this to go up again for even larger structures, such as groups or clusters of galaxies.


Hence, while averaging velocities may give some correction in the measurement, the major source of error should come from the group velocity itself, and would not be corrected by that procedure.
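A toy numerical illustration of this concern (every number here is invented for the sketch):

```python
import random
import statistics

# Toy model: recession velocities of N galaxies in one group. Each member
# gets the true Hubble velocity, a shared bulk motion of the whole group,
# and its own random peculiar velocity. All numbers are made up.
random.seed(1)
H0, d = 70.0, 100.0                    # km/s/Mpc and Mpc
v_hubble = H0 * d                      # 7000 km/s, the signal we want
bulk = 500.0                           # shared group motion, km/s
sigma = 600.0                          # random peculiar-velocity spread, km/s
N = 50

members = [v_hubble + bulk + random.gauss(0.0, sigma) for _ in range(N)]
print("true Hubble velocity   :", v_hubble)
print("group-averaged velocity:", round(statistics.mean(members)))
# Averaging beats the random scatter down to ~ sigma/sqrt(N) ~ 85 km/s,
# but the 500 km/s bulk-flow offset survives untouched in the average.
```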



This would significantly weaken the Wikipedia assertion that averaging produces a "much more accurate" measurement.


Am I right, or is there an error in my reasoning?




Tuesday, 25 June 2019

Frustrated by the light clock special relativity thought experiment



Here is this age old thought experiment being told by a professor on Sixty Symbols: https://youtu.be/Cxqjyl74iu4
This explanation using the light clock is extremely frustrating. How can one use a hypothetical example which is physically impossible and then say the "result" explains SR? The photon would never hit the top mirror directly above its source, because light does not take on the velocity of its source. Instead, the instant it leaves its source it goes straight up while the rocket moves forward, and it would strike the back of the rocket (or the top, somewhere to the left of the mirror). If the photon struck the mirror it would not move forward with the rocket, but again would go straight down while the rocket moves forward, because for the photon to move forward it would have to feel the friction of the mirror pushing it forward, which is again impossible.

The reason a wave such as sound would have the trajectory shown in this example is that the medium inside the rocket, air, is moving at the speed of the rocket, and the sound wave would take on that velocity as it left its source. Light does not use a medium to move. The reason a physical object such as a ball would have the trajectory shown is that particles take on the velocity of the source that is accelerating them. Again, light does not take that velocity on; instead it instantly has its standard speed ($c$) as it leaves its source.

So, the photon does the exact same thing leaving a moving source as it would a stationary source: it moves at the speed of light in the direction it's facing. Hence no length is added to its trajectory as stated in the example, and thus it does not prove the time dilation of SR. Anybody else hearing me here? Thoughts?




newtonian mechanics - Can the velocity of the center of mass of two spheres change after a collision?


I'm curious as to whether or not the velocity of the center of mass of a system comprised of two spheres can change after the two spheres collide. Looking at the equation for the velocity of the center of mass for a system of particles:


$$V_\text{CM} = \frac{m_1v_1 + m_2v_2 + \cdots + m_nv_n}{m_1 + m_2 + \cdots + m_n}$$


It looks like, if one of the particles changes direction after the collision, and the negative terms outweigh the positive terms in the numerator, the velocity of the center of mass would change direction (and possibly magnitude). However, I can't think of any examples where this would happen in elastic or inelastic collisions. I'm not even sure whether its speed can change. It would make sense that it would if kinetic energy (velocity) were lost in an inelastic collision; however, I can't come up with any conditions that make this happen.


I could really use some insight.



Answer



Why do we even use the centre of mass? In other words, why do we define it the way it is defined, and what use is it? Well, the centre of mass of a system is a point that behaves as though all the mass of the system, and all of its momentum, were concentrated at that point. For the two momenta to be equal, we require


$p_{com} = p_{system}$



or


$(\sum m_i) \dot x_{com} = \sum_i m_i \dot x_i$


which leads directly to the familiar definition of the c.o.m (up to a constant). Note this is just a simple rearrangement of your equation. From this you can see that if momentum is conserved on the R.H.S., it must be conserved on the L.H.S. (because they are identical, by construction).
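A quick numerical sanity check of this (the masses and velocities are made-up values; the collision formulas are the standard 1D elastic ones):

```python
# Sanity check: the centre-of-mass velocity before and after a 1D elastic
# collision. Masses and initial velocities are made-up numbers.
m1, m2 = 2.0, 5.0            # kg
v1, v2 = 3.0, -1.0           # m/s, before the collision

# Standard 1D elastic-collision results (conserve momentum and kinetic energy)
u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

v_cm_before = (m1 * v1 + m2 * v2) / (m1 + m2)
v_cm_after = (m1 * u1 + m2 * u2) / (m1 + m2)

print(v_cm_before, v_cm_after)   # identical, by momentum conservation
```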


As @Greg mentioned, momentum is always conserved, although I think he meant to say that it is more fundamental than conservation of kinetic energy, rather than just energy.


mathematical physics - Mathematically-oriented Treatment of General Relativity



Can someone suggest a textbook that treats general relativity from a rigorous mathematical perspective? Ideally, such a book would




  1. Prove all theorems used.




  2. Use modern "mathematical notation" as opposed to "physics notation", especially with respect to linear algebra and differential geometry.





  3. Have examples that illustrate both computational and theoretical aspects.




  4. Have a range of exercises with varying degrees of difficulty, with answers.




An ideal text would read a lot more like a math book than a physics book and would demand few prerequisites in physics. The bottom line is that I would like a book that provides an axiomatic development of general relativity and works out the details of the theory clearly and with mathematical precision.


Addendum (1): I did not intend to start a war over notation. As I said in one of the comments below, I think indicial notation together with the summation convention is very useful. The coordinate-free approach has its uses as well, and I see no reason why the two can't peacefully coexist. What I meant by "mathematics notation" vs. "physics notation" is the following: Consider, as an example, one of the leading texts on smooth manifolds, John Lee's Introduction to Smooth Manifolds. I am very accustomed to this notation, and it is very similar to the notation used by Tu's Introduction to Manifolds, for instance, and other popular texts on differential geometry. On the other hand, take Frankel's Geometry of Physics. Now, this is a nice book, but it is very difficult for me to follow because 1) it lacks proofs, and 2) the notation does not agree with the other math texts I'm accustomed to. Of course, there are commonalities, but enough is different that I find it really annoying to try to translate between the two...


Addendum (2): For the benefit of future readers, in addition to the suggestions below, I have found another text that also closely aligns with the criteria I stated above: Spacetime: Foundations of General Relativity and Differential Geometry by Marcus Kriele. The author begins by discussing affine geometry, analysis on manifolds, multilinear algebra and other underpinnings, and leads into general relativity at roughly the midpoint of the text. The notation is also fairly consistent with the books on differential geometry I mentioned above.




Answer



I agree with Ron Maimon that Large scale structure of space-time by Hawking and Ellis is actually fairly rigorous mathematically already. If you insist on somehow supplementing that:



  • For the purely differential/pseudo-Riemannian geometric aspects, I recommend Semi-Riemannian geometry by B. O'Neill.

  • For the analytic aspects, especially the initial value problem in general relativity, you can also consult The Cauchy problem in general relativity by Hans Ringström.

  • For a focus on singularities, I've heard some good things about Analysis of space-time singularities by C.J.S. Clarke, but I have not yet read that book in much detail myself.

  • For issues involved in the no-hair theorem, Markus Heusler's Black hole uniqueness theorems is fairly comprehensive and self-contained.

  • One other option is to look at Mme. Choquet-Bruhat's General relativity and Einstein's equations. The book is not really suitable as a textbook to learn from. But as a supplementary source book it is quite good.


If you are interested in learning about the mathematical tools used in modern classical GR and less on the actual theorems, the first dozen or so chapters of Exact solutions of Einstein's field equations (by Stephani et al) does a pretty good job.



resonance - Why do tuning forks have two prongs?


I believe the purpose of a tuning fork is to produce a single pure frequency of vibration. How do two coupled vibrating prongs isolate a single frequency? Is it possible to produce the same effect using only 1 prong? Can a single prong not generate a pure frequency? Does the addition of more prongs produce a "more pure" frequency?


The two-prong system supports only a single standing-wave mode; why is that?



Answer



I am by no means an expert in tuning fork design, but here are some physical considerations:



  • Different designs may have different "purities," but don't take this too far. It is certainly possible to tune to something that is not a pure tone; after all, orchestras usually tune to instruments, not tuning forks.

  • Whatever mode(s) you want to excite, you don't want to damp them with your hand. Imagine a single bar. If you struck it in free space, a good deal of the power would go into the lowest-frequency mode, which involves motion at both ends. However, clamping a resonator at an antinode is the best way to damp it - all the energy would go into your hand. A fork, on the other hand, has a natural bending mode that does not couple very well to a clamp in the middle.



biophysics - Is colour, as represented using primary colours, accurate only to humans?


Slightly biological, hopefully physical enough to be answered.


Suppose a magenta hue is represented by a mix of red and blue pigment. This is all very well for a creature with red and blue photoreceptors, but suppose it was seen by a creature which had a magenta-sensitive receptor, but no red or blue one. Would the colour appear the same (insomuch as a qualitative concept can appear the same; I suppose I mean "Would they interpret it as colour of the same wavelength?")?



The crux of my question is, do the particular bands in which photoreceptors are activated affect vision of additive, as opposed to pure, hues?


Finally, completely off-topic, but if there happens to be a biologist around, do animals on the whole have similar photoreceptors, or are they placed largely randomly?


Thanks, Wyatt



Answer



No, they will not appear the same. Humans have three color receptors, so any possible color for us is just three numbers in RGB space. However, the electromagnetic spectrum is continuous, and there is an infinite number of spectra that would produce the same RGB stimulus. That is why you perceive this page as white although it is in fact a combination of R, G, B tuned for a human eye. A creature with another set of photoreceptors would not see this page as white. Actually, white is also subjective: see how many settings for white balance your digital camera has. But suppose that we set it to 'daylight' and consider the continuous spectrum of the sun as white.
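To make the "infinite number of spectra" point concrete, here is a toy sketch with made-up receptor sensitivities and a four-bin spectrum:

```python
# Toy illustration (made-up receptor sensitivities, 4 wavelength bins):
# two physically different spectra can excite three receptors identically.
R = [1, 0, 0, 1]          # response of "red"   receptor per bin
G = [0, 1, 0, 1]          # response of "green" receptor per bin
B = [0, 0, 1, 1]          # response of "blue"  receptor per bin

def stimulus(spectrum):
    return tuple(sum(s * w for s, w in zip(sens, spectrum))
                 for sens in (R, G, B))

spec1 = [2, 2, 2, 2]
spec2 = [3, 3, 3, 1]      # differs from spec1 in every bin

print(stimulus(spec1), stimulus(spec2))   # identical: (4, 4, 4) both times
# A creature with different sensitivities could tell these spectra apart.
```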


Another interesting point: a magenta hue cannot be represented by any single wavelength; check out this famous horseshoe diagram:


CIE 1931 Color Space


So magenta is even more subjective than green or red.


Also, many animals don't perceive colors at all. Some have only two receptors, and others have four. The mantis shrimp has 16 different photoreceptors!


Monday, 24 June 2019

newtonian mechanics - Can magnitude be negative?


My teacher said that magnitude is the positive value of a quantity, or the modulus of that quantity.


He also said that vector quantities have both magnitude and direction, while scalar quantities have only magnitudes and hence are always positive.


However, gravitational potential energy is always negative, except for being 0 (at infinity).


But gravitational potential energy is also a scalar quantity.


So is its magnitude negative?


What I thought about it was that its magnitude is negative.


Let's take as an example any vector quantity, say velocity.


If a body is moving with a velocity of -5 m/s, that means it is moving with a speed of 5 m/s in the direction opposite to the positive direction. The body is covering 5 metres every second, even though its velocity is -5 m/s.


But if a body has potential energy -40 J, it does not mean that it has an actual potential energy of 40 J in the opposite direction; hence the magnitude should be negative.



Please tell me: will the magnitude be positive or negative?



Answer



This is a very common misconception among physics students, so let me see if I can provide some examples that will make the distinction clearer.


VECTORS are quantities that have a magnitude and a direction. The magnitude of the velocity is speed, which is always positive.



  • Examples: As you pointed out, one of the simplest examples of a vector quantity is velocity. Other good examples are forces and momenta.

  • For a vector $\vec{v}$, the magnitude of the vector, $|\vec{v}|$, is the length of the vector. This quantity is always positive! The magnitude of velocity, for example, is speed, which is always positive. (If a car is traveling 95 mph, a radar gun would register the speed of the car as 95 mph regardless of whether the car was going backwards, forwards, or sideways.) Similarly, the magnitude of a force is always a positive number, even if the force points down. If you have $7$ N forces pointing up, down, left and right, the magnitudes of those forces are all just $7$ N. Once again, the magnitude of a vector is its length, which is always positive.


SCALARS on the other hand work entirely differently. Scalar quantities have a numerical value and a sign.





  • Examples: Temperature is a nice simple example. Others include time, energy, age, and height.




  • For a scalar $s$, the absolute value of the scalar, $|s|$ is simply the same numerical value as before, with the negative sign (if it existed) chopped off. We do not (or at least we shouldn't!) talk about the "magnitude" of a scalar! Conceptually, I recommend thinking about the absolute value of a scalar, and the magnitude of a vector as completely different things. If it is $-3°F$ outside, it does not make sense to talk about the magnitude of the temperature. You could, however, compute the absolute value of the temperature to be $3°F$.




  • Note that some scalar quantities don't make sense as negative numbers: A person's age is a scalar quantity, and we don't really talk about negative age. Another example is temperatures measured on the kelvin scale.





So, to answer your question, energy is a scalar, so it does not have a magnitude. If a body has -40J of potential energy, then it simply has 40J less than your arbitrary 0 point. It does not make sense to talk about the magnitude of this scalar quantity. Please let me know if that helped or hurt your understanding!


statistical mechanics - Physical distinction between mixing and ergodicity


How can one distinguish, in a sharply contrasting manner, between the physical meaning of mixing dynamics and that of ergodic dynamics? More precisely, is one a stronger condition than the other? (Which raises questions such as: are ergodic systems also mixing, or vice versa?)


Usually, by "mixing" one means the quick decorrelation of system properties (more correctly, of averages of system properties) from the initial conditions. Rather similarly, by ergodicity one means the equality of time and ensemble averages, independently of initial conditions. Please feel free to use any examples you see fit, in order to shed clear light on the difference between these two concepts.



Answer



Mixing is a very physically intuitive concept: a set of particles whose initial conditions lie within a small spread of uncertainty follows paths that enter (nearly) any region, and in a relatively "uniform" way: after a sufficiently long period of time, the percentage of them found within that region, even within a short period, is proportional to the volume of the region.



Consider the following intuitive example. We consider a (small) amount of uncertainty in the initial conditions, which means we consider not a point but a small volume to begin with. Picture it as being a bit of black colour in the phase space, where the rest of the phase space is white. Now suppose the dynamics really is mixing: somebody is stirring the phase space as if it were a can of paint (into which we put this bit of colour). If the dynamics is truly "mixing", then after a while, the colour will be, as far as the eye can tell, more or less evenly spread throughout the phase space. This is the intuitive example of "mixing" dynamics. Another way of putting this is, even a small deviation in the initial condition eventually results in a large deviation in the state.
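A minimal computational sketch of this picture, using Arnold's cat map $(x,y)\mapsto(2x+y,\,x+y)\bmod 1$ as a standard example of a mixing map on the torus (the blob size, test region, and point count are arbitrary choices):

```python
import random

# Arnold's cat map (x, y) -> (2x + y, x + y) mod 1, a standard mixing map
# on the torus. A tight blob of points spreads until the fraction landing
# in a fixed test region approaches that region's area.
random.seed(0)
pts = [(0.30 + 0.01 * random.random(), 0.40 + 0.01 * random.random())
       for _ in range(20000)]

def in_region(p):                      # test region: a quarter of the torus,
    return p[0] < 0.5 and p[1] < 0.5   # so its area is 0.25

for step in range(12):
    frac = sum(in_region(p) for p in pts) / len(pts)
    print(f"step {step:2d}: fraction of blob in region = {frac:.3f}")
    pts = [((2 * x + y) % 1.0, (x + y) % 1.0) for x, y in pts]
# The fraction starts at 1.0 (the blob sits inside the region) and settles
# near 0.25, the area of the region: the colour has been stirred evenly.
```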


Ergodicity is more of a mathematical notion than a physical one. It means that the time averages are (nearly always) equal to the phase averages. This is the "dual" to the above notion. The ergodic theorem says that this holds as long as the phase space cannot be decomposed into two disjoint invariant subsets of positive measure ("metric transitivity"). Well, you could be generous and say that this is physical too.


A trivial example of ergodic dynamics is irrational rotation on the torus. Consider the two-dimensional surface of a three-dimensional doughnut. Or, what is the same, consider the unit square with its edges identified, so a particle that reaches the boundary immediately re-appears at the opposite edge. The dynamics is simple unaccelerated free motion. The initial condition consists of the initial position plus a velocity vector. If the slope of the velocity vector is a rational number, the particle will eventually return to its original position, and the motion is not ergodic since there are regions it will never reach. But such initial conditions constitute a set of measure zero. "Nearly all" initial conditions have a velocity vector with irrational slope. On such trajectories, the time average of an observable equals the phase average. So this dynamical system is ergodic.
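A quick numerical check of this example (the step size, run length, and observable are arbitrary choices):

```python
import math

# Free motion on the unit torus with irrational velocity components.
# The time average of an observable along one trajectory should approach
# its phase (area) average over the unit square.
x, y = 0.2, 0.7                        # arbitrary initial position
vx, vy = math.sqrt(2), math.sqrt(3)    # irrational slope vy/vx
dt, n_steps = 0.01, 200_000

def f(x, y):                           # an arbitrary smooth observable
    return math.cos(2 * math.pi * x) ** 2 * math.cos(2 * math.pi * y) ** 2

total = 0.0
for _ in range(n_steps):
    x = (x + vx * dt) % 1.0
    y = (y + vy * dt) % 1.0
    total += f(x, y)

print("time average :", total / n_steps)
print("phase average:", 0.25)          # integral of f over the unit square
```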


Sunday, 23 June 2019

homework and exercises - Hamiltonian equations: can I divide a solution of motion by a constant?


I'm solving an exercise about Hamilton's equations. I have followed the procedure below. The results given by the book differ from mine: the book's first result is half of mine (and its second result, which depends on the first, differs from mine too). I think that my procedure is correct, so I can't understand why...



Given these two Hamiltonian equations:



$$\tag{1} \dot p ~=~ - \alpha pq,$$ $$\tag{2} \dot q ~=~\frac{1}{2} \alpha q^2.$$


Find $q(t)$ and $p(t)$, considering initial conditions $p_0$ and $q_0$.



I have integrated the second equation and obtained:


$$\tag{3} q(t)~=~\frac{2q_0}{2-q_0 \alpha (t-t_0)}$$


Then I plugged this into the first canonical equation, eq. (1), and obtained:


$$\tag{4} p(t)~=~p_0(2-q_0 \alpha (t-t_0))^2.$$


The solutions given by the book are:


$$\tag{5} q(t)~=~\frac{q_0}{1- \frac{1}{2} \alpha q_0 (t-t_0)},$$ $$\tag{6} p(t)~=~p_0[1-\frac{1}{2} \alpha q_0 (t-t_0)]^2.$$


I can obtain the solutions given by the book if I divide the numerator and denominator of $q$ by 2... but can I do that?
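A quick symbolic check that the two forms of $q(t)$ are the same rational function (a minimal sketch using sympy; the symbol names are mine):

```python
import sympy as sp

t, t0, alpha, q0 = sp.symbols('t t0 alpha q0')

# My form (3) and the book's form (5) of q(t)
q_mine = 2*q0 / (2 - q0*alpha*(t - t0))
q_book = q0 / (1 - sp.Rational(1, 2)*alpha*q0*(t - t0))

print(sp.simplify(q_mine - q_book))   # prints 0: the two forms are identical
```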



Is my procedure correct?




general relativity - Can gravity accelerate an object past the speed of light?


Imagine we have something very heavy (e.g. a supermassive black hole) and some object that we can throw at 0.999999 of the speed of light (e.g. a proton). We throw our particle in the direction of the hole. The black hole is so heavy that we can assume that at some moment the gravitational acceleration would be, say, 0.0001 of the speed of light per second squared. So the question is: what will the speed of the proton be a few seconds later, assuming the distances are such that it will not hit the black hole before then?



Answer



This is the classic "hurling a stone into a black hole" problem. It's described in detail in sample problem 3 in chapter 3 of Exploring Black Holes by Edwin F. Taylor and John Archibald Wheeler. Incidentally, I strongly recommend this book if you're interested in learning about black holes. It does require some maths, so it's not a book for the general public, but the maths is fairly basic compared to the usual GR textbooks.


The answer to your question is that no-one observes the stone (proton in your example) to move faster than light, no matter how fast you throw it towards the black hole.


I've phrased this carefully because in GR it doesn't make sense to ask questions like "how fast is the stone moving" unless you specify what observer you're talking about. Generally we consider two different types of observer. The Schwarzschild observer sits at infinity (or far enough away to be effectively at infinity), and the shell observer sits at a fixed distance from the event horizon (firing the rockets of his spaceship to stay in place).



These two observers see very different things. For the Schwarzschild observer the stone initially accelerates, but then slows to a stop as it meets the horizon. The Schwarzschild observer will never see the stone cross the event horizon, at least not unless they're prepared to wait an infinite time.


The shell observer sees the stone fly past at a velocity less than the speed of light, and the nearer the shell observer gets to the event horizon the faster they see the stone pass. If the shell observer could sit at the event horizon (they can't without an infinitely powerful rocket) they'd see the stone pass at the speed of light.


To calculate the trajectory of a hurled stone you start by calculating the trajectory of a stone falling from rest at infinity. I'm not going to repeat all the details from the Taylor and Wheeler book since they're a bit involved and you can check the book. Instead I'll simply quote the result:


For the Schwarzschild observer:


$$ \frac{dr}{dt} = - \left( 1 - \frac{2M}{r} \right) \left( \frac{2M}{r} \right)^{1/2} $$


For the shell observer:


$$ \frac{dr_{shell}}{dt_{shell}} = - \left( \frac{2M}{r} \right)^{1/2} $$


These equations use geometric units so the speed of light is 1. If you put $r = 2M$ to find the velocities at the event horizon you'll find the Schwarzschild observer gets $v = 0$ and the (hypothetical) shell observer gets $v = 1$ (i.e. $c$).
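A quick numerical check of those two limits (geometric units with $c=1$, and we set $M=1$; the sample radii are arbitrary):

```python
import math

# Velocities for a stone dropped from rest at infinity, as measured by the
# shell observer and the Schwarzschild observer, at a few radii r/M.
M = 1.0
for r in [10.0, 4.0, 2.5, 2.0001]:
    v_shell = math.sqrt(2 * M / r)                 # shell observer
    v_schw = (1 - 2 * M / r) * v_shell             # Schwarzschild observer
    print(f"r/M = {r:8.4f}   shell: {v_shell:.4f}   Schwarzschild: {v_schw:.4f}")
# As r -> 2M the shell speed tends to 1 (i.e. c), while the Schwarzschild
# speed tends to 0: the faraway observer sees the stone freeze at the horizon.
```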


But this was for a stone that started at rest from infinity. Suppose we give the stone some extra energy by throwing it. This means it corresponds to an object that starts from infinity with a finite velocity $v_\infty$. We'll define $\gamma_\infty$ as the corresponding value of the Lorentz factor. Again I'm only going to give the result, which is:


For the Schwarzschild observer:



$$ \frac{dr}{dt} = - \left( 1 - \frac{2M}{r} \right) \left[ 1 - \frac{1}{\gamma_\infty^2}\left( 1 - \frac{2M}{r} \right) \right]^{1/2} $$


For the shell observer:


$$ \frac{dr_{shell}}{dt_{shell}} = - \left[ 1 - \frac{1}{\gamma_\infty^2}\left( 1 - \frac{2M}{r} \right) \right] ^{1/2} $$


Maybe it's not obvious from a quick glance at the equations that neither $dr/dt$ nor $dr_{shell}/dt_{shell}$ ever exceeds the speed of light, but if you increase your stone's initial velocity to near $c$, the value of $\gamma_\infty$ goes to $\infty$ and hence $1/\gamma_\infty^2$ goes to zero. In this limit it's easy to see that the velocity never exceeds $c$.
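And numerically, for a throw at the question's $0.999999c$ (a sketch; the radii are again arbitrary):

```python
import math

# Shell-observer speed for a stone hurled inward from infinity at
# v_inf = 0.999999 (the question's proton); M = 1, radii arbitrary.
M = 1.0
v_inf = 0.999999
inv_gamma2 = 1.0 - v_inf ** 2                      # this is 1 / gamma_inf^2
for r in [100.0, 10.0, 3.0, 2.000001]:
    v_shell = math.sqrt(1 - inv_gamma2 * (1 - 2 * M / r))
    print(f"r/M = {r:10.6f}   shell speed: {v_shell:.8f}")
# The speed creeps towards 1 but never reaches it outside the horizon.
```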


In his comments Jerry says several times that the velocity exceeds $c$ only after crossing the event horizon. While Jerry knows vaaaaastly more than me about GR I would take him to task for this. It certainly isn't true for the Schwarzschild observer, and you can't even in principle have a shell observer within the event horizon.


homework and exercises - How to find Hamiltonian from this simple Lagrangian? (tricky)


$$L~=~ \frac{1}{2} \dot{q} \sin^2{q} $$


Is the Hamiltonian zero, or is it not defined?
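For reference, the Legendre transform spelled out: the canonical momentum is

$$p ~=~ \frac{\partial L}{\partial \dot{q}} ~=~ \frac{1}{2} \sin^2{q},$$

which contains no $\dot{q}$ at all, so the velocity cannot be solved for in terms of $p$; instead $p - \frac{1}{2}\sin^2{q} \approx 0$ is a constraint and the Legendre transform is singular. On the constraint surface

$$H ~=~ p\dot{q} - L ~=~ \frac{1}{2}\sin^2{q}\,\dot{q} - \frac{1}{2}\dot{q}\sin^2{q} ~=~ 0,$$

so the canonical Hamiltonian vanishes, but only in the sense of a constrained (Dirac) system.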




Saturday, 22 June 2019

electromagnetism - Are magnetism and electricity the same thing?




I have read in a certain place that electricity and magnetism are the same thing, but in reality we see that both have different properties.




thermodynamics - Non-extensivity in a few body system


Thermodynamics finds application in many areas of physics, many of them sharing the feature of fluid-like or many-body like behavior.


However, small systems or few body systems have been studied too using thermodynamics. But generally it is accepted that these systems behave differently from macroscopic systems: they are non-extensive.


Non-extensivity is generally deduced from the fact that extensive properties in these systems don't add up the way they would in macroscopic ones. But I am struggling to find hard evidence of this, because merging two systems, at least from a thermodynamic point of view, implies that we should check only the equilibrium cases.


I mean, when I say that for two non-extensive systems with entropies $S_a$ and $S_b$ the relation $S_a + S_b = S_{a+b}$ does not hold, I should be checking that all these systems are under the same conditions regarding intensive properties, right? If not, then not even macroscopic systems would exhibit extensivity.


My question is: are there publications checking these aspects for certain small systems?


I am searching for papers where this is checked, because I have not seen it done and find that strange.




Answer



A recent work that uses nanothermodynamics and includes a computational investigation of the kind you are asking about for an Ising lattice:


R.V. Chamberlin, The Big World of Nanothermodynamics


Sec.5 of the following paper makes a reference to another paper that appears to have tested the limits of usual thermodynamics in single polymer stretching experiments:


J. M. Rubi, D. Bedeaux, and S. Kjelstrup, Thermodynamics for Single-Molecule Stretching Experiments, J. Phys. Chem. B 2006, 110, 12733-12737


You may also find some clues in discussion and refs from Secs. 2.3-2.5 of this paper:


T. Dauxois, S. Ruffo, E. Arimondo, M. Wilkens, Dynamics and Thermodynamics of Systems with Long-Range Interactions: An Introduction


It is actually the Intro to this volume: Dynamics and Thermodynamics of Systems with Long Range Interactions, Eds. T. Dauxois, S. Ruffo, E. Arimondo, M. Wilkens (Google Books)

