Tuesday, 31 March 2020

gravity - How can we recover the Newtonian gravitational potential from the metric of general relativity?


The Newtonian description of gravity can be formulated in terms of a potential function $\phi$ whose partial derivatives give the acceleration:



$$\frac{d^2\vec{x}}{dt^2}=\vec{g}=-\vec{\nabla}\phi(x)=-\left(\frac{\partial\phi}{\partial x}\hat{x}+\frac{\partial\phi}{\partial y}\hat{y}+\frac{\partial\phi}{\partial z}\hat{z}\right)$$


However, in general relativity, we describe gravity by means of the metric. This description is radically different from the Newtonian one, and I don't see how we can recover the latter from the former. Could someone explain how we can obtain the Newtonian potential from general relativity, starting from the metric $g_{\mu\nu}$?



Answer



Since general relativity is supposed to be a theory that supersedes Newtonian gravity, one certainly expects that it can reproduce the results of Newtonian gravity. However, it is only reasonable to expect such a thing to happen in an appropriate limit. Since general relativity is able to describe a large class of situations that Newtonian gravity cannot, it is not reasonable to expect to recover a Newtonian description for arbitrary spacetimes.


However, under suitable assumptions, one does recover the Newtonian description. This is called taking the Newtonian limit (for obvious reasons). In fact, it was used by Einstein himself to fix the constant that appears in the Einstein field equations (note that I will be setting $c\equiv 1$ throughout).


$$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\kappa T_{\mu\nu} $$


Requiring that general relativity reproduces Newtonian gravity in the appropriate limit uniquely fixes the constant $\kappa\equiv 8\pi G$. This procedure is described in most (introductory) books on general relativity, too. Now, let us see how to obtain the Newtonian potential from the metric.


Defining the Newtonian limit


We first need to establish in what situation we would expect to recover the Newtonian equation of motion for a particle. First of all, it is clear that we should require that the particle under consideration moves at velocities with magnitudes far below the speed of light. In equations, this is formalized by requiring


$$\frac{\mathrm{d}x^i}{\mathrm{d}\tau}\ll \frac{\mathrm{d}x^0}{\mathrm{d}\tau} \tag{1}$$



where the spacetime coordinates of the particle are $x^\mu=(x^0,x^i)$ and $\tau$ is the proper time. Secondly, we have to consider situations where the gravitational field is "not too crazy", which at the very least means that it should not be changing too quickly. We make this precise by requiring


$$\partial_0 g_{\mu\nu}=0\tag{2}$$


i.e. the metric is stationary. Furthermore we will require that the gravitational field is weak to ensure that we stay in the Newtonian regime. This means that the metric is "almost flat", that is: $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$ where $h_{\mu\nu}$ is a small perturbation, and $\eta_{\mu\nu}:=\operatorname{diag}(-1,1,1,1)$ is the Minkowski metric. The condition $g_{\mu\nu}g^{\nu\rho}=\delta^\rho_\mu$ implies that $g^{\mu\nu}=\eta^{\mu\nu}-h^{\mu\nu}$, to first order in $h$, where we have defined $h^{\mu\nu}:=\eta^{\mu\rho}\eta^{\nu\sigma}h_{\rho\sigma}$$^1$. This can be easily checked by "plug-and-chug".


Taking the Newtonian limit


Now, if we want to recover the equation of motion of a particle, we should look at the corresponding equation in general relativity. That is the geodesic equation


$$\frac{\mathrm{d}^2x^\mu}{\mathrm{d}\tau^2}+\Gamma^\mu_{\nu\rho}\frac{\mathrm{d}x^\nu}{\mathrm{d}\tau}\frac{\mathrm{d}x^\rho}{\mathrm{d}\tau}=0 $$


Now, all we need to do is use our assumptions. First, we use equation $(1)$ and see that only the $00$-component of the second term contributes. We obtain


$$\frac{\mathrm{d}^2x^\mu}{\mathrm{d}\tau^2}+\Gamma^\mu_{00}\frac{\mathrm{d}x^0}{\mathrm{d}\tau}\frac{\mathrm{d}x^0}{\mathrm{d}\tau}=0 $$


From the definition of the Christoffel symbols


$$\Gamma^{\mu}_{\nu\rho}:=\frac{1}{2}g^{\mu\sigma}(\partial_{\nu}g_{\rho\sigma}+\partial_\rho g_{\nu\sigma}-\partial_\sigma g_{\nu\rho}) $$



we see that, after we use equation $(2)$, the only relevant symbols are


$$\Gamma^{\mu}_{00}=-\frac{1}{2}g^{\mu\sigma}\partial_\sigma g_{00} \textrm{.}$$


Using the weak field assumption and keeping only terms to first order in $h$, we obtain from straightforward algebra that


$$\Gamma^{\mu}_{00}=-\frac{1}{2}\eta^{\mu\sigma}\partial_\sigma h_{00} $$


which leaves us with the simplified geodesic equation


$$\frac{\mathrm{d}^2 x^\mu}{\mathrm{d}\tau^2}=\frac{1}{2}\eta^{\mu\sigma}\partial_\sigma h_{00}\bigg(\frac{\mathrm{d}x^0}{\mathrm{d}\tau}\bigg)^2 $$


Once again using equation $(2)$ shows that the $0$-component of this equation just reads $\ddot{x}^0=0$ (where the dot denotes differentiation with respect to $\tau$), so $\mathrm{d}x^0/\mathrm{d}\tau$ is constant and, for a slow particle, approximately equal to $1$. We're left with the non-trivial, spatial components only:


$$\ddot{x}^i=\frac{1}{2}\partial_i h_{00} $$


which looks suspiciously much like the Newtonian equation of motion


$$\ddot{x}^i=-\partial_i\phi$$



After the natural identification $h_{00}=-2\phi$, we see that they are exactly the same. Thus, we obtain $g_{00}=-1-2\phi$, and have expressed the Newtonian potential in terms of the metric.
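As a quick numerical aside (my own illustration, not part of the original answer), restoring the factors of $c$ shows just how small $h_{00}$ is for everyday gravity:

```python
# Order-of-magnitude check of the weak-field assumption at the Earth's surface.
# Standard constants; h00 = -2*phi with phi measured in units of c^2.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s (restored, since the answer sets c = 1)
M_earth = 5.972e24  # kg
R_earth = 6.371e6   # m

phi = -G * M_earth / (R_earth * c**2)  # dimensionless potential phi/c^2
h00 = -2 * phi                         # perturbation in g_00 = -1 + h00

print(f"phi/c^2 ~ {phi:.2e}")
print(f"h00     ~ {h00:.2e}")  # ~1.4e-9, so dropping O(h^2) terms is excellent
```

With $h_{00}\sim 10^{-9}$ near the Earth, neglecting terms of order $h^2$ is an extremely good approximation.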





  1. For a quick 'n' dirty derivation, we assume an expansion of the form $g^{\mu\nu}=\eta^{\mu\nu}+\alpha \eta^{\mu\rho}\eta^{\nu\sigma}h_{\rho\sigma}+\mathcal{O}(h^2)$ (note that the multiplication by $\eta$'s is the only possible thing we can do without getting a second order term), and simply plugging it into the relationship given in the post:


$$(\eta_{\mu\nu}+h_{\mu\nu})\big(\eta^{\nu\rho}+\alpha \eta^{\nu\sigma}\eta^{\rho\tau}h_{\sigma\tau}+\mathcal{O}(h^2)\big)=\delta^\rho_\mu \Leftrightarrow \alpha=-1$$
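The footnote's "plug-and-chug" can also be carried out symbolically. A minimal sketch, assuming sympy is available (my own illustration, not part of the original post):

```python
# Symbolic check: to first order in h, the inverse of eta + h is eta - h.
import sympy as sp

eps = sp.symbols('eps')  # bookkeeping parameter marking "first order in h"
eta = sp.diag(-1, 1, 1, 1)

# a generic symmetric perturbation h_{mu nu}
h = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'h{min(i, j)}{max(i, j)}'))

g = eta + eps * h
g_inv = eta - eps * eta * h * eta  # raising both indices with eta

product = sp.expand(g * g_inv)
# truncate to first order in eps and compare with the identity matrix
truncated = product.applyfunc(lambda e: e.coeff(eps, 0) + eps * e.coeff(eps, 1))
print(truncated == sp.eye(4))  # True: the deviation is O(h^2)
```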


experimental physics - Testing General Relativity


Ever since Einstein published his theory of general relativity in 1916, there have been numerous experimental tests to confirm its correctness, and it has passed with flying colors.


NASA and Stanford have just announced that their Gravity Probe B activity has confirmed GR's predicted geodetic and frame-dragging effects. Are there any other facets of GR that need experimental verification?




Answer



Sure there are. The theory has been tested within only a teeny tiny part of the range of its predictions. For example it predicts gravitational redshift in the range of 0% (no redshift) to 100% (black hole), but experiments to date have shown a maximum gravitational redshift less than 0.01%. It matters less how many tests of GR are done than how extensively those tests cover the range of what GR predicts.
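For a rough sense of scale (my own numbers, not from the original answer), the weak-field redshift $z \approx GM/(rc^2)$ for light escaping a body's surface:

```python
# Weak-field gravitational redshift for light leaving a surface of radius r:
# z ~ G*M/(r*c^2), far below the 100% black-hole limit.
G = 6.674e-11  # m^3 kg^-1 s^-2
c = 2.998e8    # m/s

def grav_redshift(mass_kg, radius_m):
    """Weak-field fractional redshift for light escaping from radius_m."""
    return G * mass_kg / (radius_m * c**2)

z_earth = grav_redshift(5.972e24, 6.371e6)  # from the Earth's surface
z_sun = grav_redshift(1.989e30, 6.957e8)    # from the Sun's surface

print(f"Earth: z ~ {z_earth:.1e}")  # ~7e-10
print(f"Sun:   z ~ {z_sun:.1e}")    # ~2e-6
```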


While we have little experimental data to definitively show that GR is the correct theory of gravity, we do know that it leads to major problems for physics, like its breakdown at gravitational singularities, its incompatibility with quantum mechanics, and the black hole information loss paradox. A competing theory of gravity that is confirmed by all experimental tests of GR to date need not have any of those problems, indicating that a lot more testing of GR is warranted.


classical mechanics - What is the physical interpretation of the Poisson bracket



Apologies if this is a really basic question, but what is the physical interpretation of the Poisson bracket in classical mechanics? In particular, how should one interpret the relation between the canonical phase space coordinates? $$\lbrace q^{i}, p_{j} \rbrace_{PB}~=~\delta^{i}_{j} $$ I understand that there is a 1-to-1 correspondence between these and the commutation relations in quantum mechanics in the classical limit, but in classical mechanics all observables, such as position and momentum, commute, so I'm confused as to how to interpret the above relation.




quantum field theory - Divergent bare parameters/couplings: what is the physical meaning? Does this have any relation to Wilson's renormalization group approach?


I understand that the bare parameters in the Lagrangian are different from the physical ones that you measure in an experiment. I'm wondering if the fact that they are divergent has any physical meaning. If they weren't, the divergences that arise in loop calculations couldn't "be put" anywhere else. I'm OK with them being different from the physical ones, but why is it OK for them to be divergent?


EDIT: I have some difficulty understanding whether this has any connection with Wilson's renormalization group approach. The two seem quite different. In Wilson's case you start with an effective field theory valid up to a scale $\Lambda$ (sharp cutoff) and you "integrate out" the high-momentum part of the action to see how the theory behaves at low energies. In the other approach you want (after renormalization) to let the cutoff go to infinity and find finite results. That means the theory is no longer sensitive to the high-energy/short-distance behavior.



The running of the coupling in Wilson's approach has nothing to do with the bare parameters going to infinity when the cutoff is removed, right?


Is there any reference that tries to unify these two different approaches? I have read these two books in depth: Quantum and Statistical Field Theory by Le Bellac, and Field Theory, the Renormalization Group, and Critical Phenomena by Amit. Do you recommend any other books/articles?




newtonian mechanics - Newton's principle of determinacy


I am a mathematician. I have a (somewhat long term) goal of understanding some of the physical insights that have influenced my area of research. To this end I read Arnold's Mathematical methods in classical mechanics a while ago, but something I didn't understand has been bugging me ever since.


In the first chapter Arnold defines a motion of $n$ particles in $\mathbb{R}^3$ as a map $\mathbf{x}:\mathbb{R} \rightarrow \mathbb{R}^N$ for $N=3n$. The first chapter is then about what types of motion are allowed. In section 2D Arnold makes the following observation:



According to Newton's principle of determinacy all motions of a system are uniquely determined by their initial positions ($\mathbf{x}(t_0) \in \mathbb{R}^N$) and initial velocities ($\mathbf{\dot{x}}(t_0) \in \mathbb{R}^N$).




This seems important and I expect that this would have an impact on what types of functions $\mathbf{x}$ could be. He goes on:



In particular, the initial positions and velocities determine the acceleration. In other words, there is a function $\mathbf{F} : \mathbb{R}^N \times \mathbb{R}^N \times \mathbb{R} \rightarrow \mathbb{R}^N$ such that $$ \mathbf{\ddot{x}} = \mathbf{F}(\mathbf{x},\mathbf{\dot{x}},t). $$



So the implication of Newton's determinacy principle is that the acceleration obeys a second order differential equation. This seems completely vacuous to me. Any function $\mathbf{x}$ obeys a second order differential equation (as long as it is twice differentiable).


Could someone please explain to me what Arnold is saying here. I feel like I am missing something important.



Answer



That the trajectories are uniquely determined means that the existence and uniqueness theorem applies (so the differential equation has to be sufficiently regular).


Newton's principle states more: the system is fully determined by the positions and the velocities, that is, by $2n$ constants, where $n$ is the dimension of the space. As you have $n$ equations (one per spatial coordinate), they are completely determined if and only if they are of second order.



The statement is not that the function $x$ obeys some second order differential equation; it is that the dynamics are governed by a second order ODE.


Edit:


In other words, the key is that there is one set of ODEs for every possible initial condition. You can construct a first order ODE for a given trajectory, but it will be useless if you change the initial conditions.


homework and exercises - Rotational velocity of tethered shape after falling


Question


My solution to the above question involves equating the potential energy to the kinetic energy at the point at which the wire tightens: $$ \frac{1}{2}mv^2 = mgh $$ However, I am having trouble finding the initial rotational velocity of the object. I initially thought that it was, if $l$ is the length: $$ \omega = \frac{2}{l} v $$ However, seeing as this is a full (past) exam question, I think the solution is not as straightforward. My second thought was that perhaps the radius of rotation is the distance from the connection point to the centroid of the shape, with $v$ the velocity component perpendicular to this.
But I am not sure whether all of the velocity after falling is converted into rotational velocity at that instant, or whether the point of connection accelerates to the right as the string becomes taut.




Monday, 30 March 2020

optics - Photons and perfect mirror


A perfect mirror means that all the photons which collide with the mirror are reflected, with the same energy and with the same angle (up to sign). Will the mirror get an impulse from the photons?



Answer



Yes it will.


Assuming the light is incident normally, the change in the photon momentum is $2h\nu/c$, and consequently the momentum of the mirror will change by the same amount.
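To put numbers on this (my own example; the wavelength and beam power are invented for illustration):

```python
# Momentum kick per photon (2*h*nu/c at normal incidence) and the resulting
# force on a perfect mirror from a continuous beam.
h = 6.626e-34  # Planck constant, J s
c = 2.998e8    # speed of light, m/s

wavelength = 532e-9         # a green laser pointer, m (assumed example)
nu = c / wavelength         # photon frequency, Hz
dp_photon = 2 * h * nu / c  # momentum change per reflected photon, kg m/s

power = 1.0                             # beam power, W
force = (power / (h * nu)) * dp_photon  # photons/s times kick per photon = 2*P/c

print(f"momentum per photon: {dp_photon:.2e} kg m/s")
print(f"force from a 1 W beam: {force:.2e} N")  # ~6.7e-9 N: tiny but nonzero
```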


If the mirror is free to move it will be accelerated by the light and as a result the light will be slightly red shifted. There is more discussion of this in Can relativistic momentum (photons) be used as propulsion for 'free' after the initial generation? though the question is not an exact duplicate.



homework and exercises - Derivation of equations of motion in Nordstrom's theory of scalar gravity?


Nordstrom's theory of a particle moving in the presence of a scalar field $\varphi (x)$ is given by $$ S = -m\int e^{\varphi (x)}\sqrt{\eta_{\alpha \beta}\frac{dx^{\alpha}}{d \lambda}\frac{dx^{\beta}}{d \lambda}}d\lambda , $$ where $\lambda$ is the parametrization of the worldline of the particle, ignoring the free field part $\int \eta_{\alpha \beta}\partial^{\alpha}\varphi \partial^{\beta} \varphi d^{4}x$.


How does one derive the equations of motion in terms of the parameter $$d\tau = \sqrt{\eta_{\alpha \beta}\frac{dx^{\alpha}}{d \lambda}\frac{dx^{\beta}}{d \lambda}} d\lambda, $$ for which $u^{\alpha} = \frac{dx^{\alpha}}{d\tau}$ satisfies $u_{\alpha}u_{\beta}\eta^{\alpha \beta} = 1$?


My attempt:


$$ \delta S = 0 \Rightarrow \int \left( \frac{\partial (e^{\varphi } \sqrt{...})}{\partial x^{\alpha}}\delta x^{\alpha} +\frac{\partial (e^{\varphi } \sqrt{...})}{\partial \left( \frac{d x^{\alpha}}{d \lambda } \right)}\frac{d}{d\lambda} \delta x^{\alpha} \right)d\lambda = |d\tau = \sqrt{...}d\lambda | = $$ $$ = \int \left(\sqrt{...}e^{\varphi}\partial_{\alpha}\varphi - \frac{d}{d \lambda} \left(\frac{d x_{\alpha}}{d\tau}e^{\varphi}\right) \right)\delta x^{\alpha}d \lambda = $$ $$ =\int \left( \partial_{\alpha}\varphi - \frac{d^{2}x_{\alpha}}{d \tau^{2}} - \frac{dx_{\alpha}}{d\tau} \frac{d \varphi }{d\tau }\right) \delta x^{\alpha} e^{\varphi} \sqrt{...}d\lambda = $$ $$ =\int \left( \partial_{\alpha}\varphi - \frac{d u_{\alpha}}{d \tau} - u_{\alpha} u_{\beta} \partial^{\beta} \varphi \right)\delta x^{\alpha} e^{\varphi} \sqrt{...}d\lambda \Rightarrow $$ $$ \partial_{\alpha}\varphi - \frac{d u_{\alpha}}{d \tau} - u_{\alpha} u_{\beta} \partial^{\beta} \varphi = 0 \Rightarrow \partial_{\alpha} \varphi = e^{-\varphi}\frac{d }{d \tau}\left( e^{\varphi } u_{\alpha}\right). $$ Unfortunately, this equation doesn't look like the equation from Wikipedia, $$ \frac{d (\varphi u_{\alpha})}{d \tau} = -\partial_{\alpha } \varphi. $$ I can explain the part of differences by renaming the function, $e^{\varphi } \to \varphi $, in the expression for action (then my equation reduces to the form $ \partial_{\alpha} \varphi = \frac{d }{d \tau}\left( \varphi u_{\alpha}\right)$), but I can't explain why my equation has the wrong sign.




special relativity - Does the potential energy related to a particle determine its rest mass?


Would it be possible to determine the rest mass of a particle by computing the potential energy related to the presence (existence) of the particle, if this potential energy could be determined accurately enough?


I noticed from the answers to a recent question that I always assumed this to be true, without even thinking about it. However, it occurred to me that this concept was at least unfamiliar to the people who answered and commented on that question, and that it's even unclear whether this concept is true or meaningful at all.


Let me explain this concept for an idealized situation. Consider an idealized classical spherical particle with a charge $q$ and a radius $r$ at the origin. Assume that the particle generates an electrostatic field identical to the one of a point charge $q$ in the region outside of radius $r$ and vanishing inside the region of radius $r$. Now let's use a point charge $-q$ and move it to the origin in order to cancel this field in the region outside of radius $r$. Moving the point charge to the origin will generate a certain amount of energy, and that would be the energy which I mean by the potential energy related to the presence (existence) of this idealized classical spherical particle.


I'm well aware that really computing the potential energy related to the presence (existence) of any real particle is not practically feasible for a variety of reasons, but that never worried me with respect to this concept. What worries me now is whether this notion of potential energy is even well defined at all, and even if it is, whether it really accounts for the entire rest mass (not explained by other sources of kinetic, internal or potential energy) of a particle. After all, the rest mass of a particle might simply be greater than the mass explained by any sort of potential energy.




Answer



The answer is ultimately no, but this is a reasonable, if old, idea. It was floating around in the late 19th century: that the mass of the electron is due to the energy in the field around it.


The concept of potential energy is refined in field theories to field energy. The fields have energy, and this energy is identified with the potential energy of a mechanical system, so that if you lift a brick up, the potential energy of the brick is contained in the gravitational field of the brick and the Earth together.


This is important, because unlike kinetic energy, it is difficult to say where the potential energy is. If you lift a brick, is the potential energy in the brick? In the Earth? In Newton's mechanics, the question is meaningless both because things go instantaneously to different places, and also because energy is a global quantity with no way to measure the location. But in relativistic physics, the energy gravitates, and the gravitational field produced by energy requires that you know where this energy is located.


The upshot of all this is that potential energy is field energy, and you are asking if all the mass-energy of a particle is due to the fields around it.


This model has a problem if you think of it purely electromagnetically. Using a model where the electron is a ball of charge and all the mass is electromagnetic field energy, you would derive, along with Poincaré, Abraham, and others, that the total mass is equal to $4/3$ of $E/c^2$. The reason you don't get the right relativistic relation is the stresses you need to hold a ball of charge from exploding. The correct relation really needs relativity, and then you can't determine whether the mass is all field.


The process of renormalization in quantum field theory tells you that part of the mass of the electron is due to the mass of the field it carries, but there are two regimes now. There is a long-distance regime, much longer than the Compton wavelength of the electron, where you get a contribution to the mass from the electric field which blows up as the reciprocal of the electron radius, and then there is the region inside the Compton wavelength, where you get the QED mass correction from electrons fluctuating into positrons, which softens the blowup to a log. The Compton wavelength of the electron is 137 times bigger than the classical electron radius, so even with a Planck-scale cutoff, not all the mass of the electron is field, because the blow-up in field energy is so slow at high energy.
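The factor of 137 mentioned above can be checked directly with standard constants (my own illustration, using the reduced Compton wavelength $\hbar/mc$):

```python
import math

# Classical electron radius vs. reduced Compton wavelength: their ratio is
# the inverse fine-structure constant, ~137.
hbar = 1.0546e-34  # J s
c = 2.998e8        # m/s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m

r_classical = e**2 / (4 * math.pi * eps0 * m_e * c**2)  # ~2.8e-15 m
lam_compton = hbar / (m_e * c)                          # ~3.9e-13 m
ratio = lam_compton / r_classical

print(f"classical electron radius:  {r_classical:.3e} m")
print(f"reduced Compton wavelength: {lam_compton:.3e} m")
print(f"ratio: {ratio:.1f}")  # ~137, the inverse fine-structure constant
```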


So in quantum field theory, the answer is no--- the field energy is not the entire mass of the particle. But in another sense it is yes, because if you include the electron field too, then the total mass of the electron is the mass in the electron field plus the electromagnetic field.


Within string theory, you can formulate the question differently--- is there a measure of a field at infinity which will tell you the mass of the particle? In this case, it is the gravitational field, so that the far-away gravitational field tells you the mass.


But you probably want to know--- is the mass due to the combination of gravitational and electromagnetic field together? In this sense, since this is a classical question, it is best to think in classical GR.



If you have a charged black hole, there is a contribution to the mass of the black hole from the field outside, and a contribution from the black hole itself. As you increase the charge of the black hole, there comes a point where the charge is equal to the mass, where the entire energy of the system is due to the external fields (gravitational and electromagnetic together), and the black hole horizon becomes extremal. The extremal limit of black holes can be thought of as a realization of this idea, that all the mass is due to the fields.


Within string theory, the objects made out of strings and branes are extremal black holes in the classical limit. So within string theory, although it is highly quantum, you can say the idea that all the mass-energy is field energy is realized. This is not very great in giving you what the mass should be, because in the cases of interest, you are finding particles which are massless, so that all their energy is the energy in infinitely boosted fields. But you can take comfort in the fact that this is just a quantum regime of a system where the macroscopic classical limit of the particles are classical gravitational systems where your idea is correct.


visible light - Why are some Air Gap Sparks Orange?



While testing the ignition system on my car with a variable gap spark tester, I noticed that the spark was orange. I suspect that there may be a problem with the ignition coil, such that there is sufficient voltage to jump the air gap but not enough amperage to generate a nice blue spark.


While researching the color of air sparks, I came across this in a wiki on ionized air glow:



Rydberg atoms, generated by low-frequency lightnings, emit at red to orange color and can give the lightning a yellowish to greenish tint.



Might low amperage and Rydberg atoms be the reason for my sparks being orange?


I also ran across this Briggs and Stratton page which claims:



Orange and yellow come from particles of sodium in the air ionizing in the high energy of the spark gap




But the question would still remain why some air gap sparks are blue while others have this orange / yellow coloring.


If not, what might be another explanation for my orange sparks?




Sunday, 29 March 2020

quantum mechanics - Flaws of Broglie–Bohm pilot wave theory?


I recently learned about an oil drop experiment that showed how a classical object can produce quantum-like behavior because it is assisted by a pilot wave. How has this not gained more attention? What flaws does the de Broglie–Bohm pilot wave theory have in explaining particle behavior?




electromagnetism - What happens if you try to apply Maxwell's Equations to this quantum mechanical system?


In another post, we discussed the oscillating charge in a hydrogen atom, and the weight of opinion seemed to be that there is indeed an oscillating charge when you consider the superposition of the 1s and 2p states. One of the correspondents (freecharly) went a little further and said that Schroedinger believed this oscillating charge to be the source of radiation. I wonder if the actual calculation bears this out. Specifically, in the case of the hydrogen atom in this particular superposition, do you get the correct decay times for the superposition of states if you apply Maxwell's equations to the oscillating charge and assume that, as the system loses energy by radiation, the "probability" flows from the 2p to the 1s state in accordance with the amount of energy remaining in the system?


EDIT: Some people are objecting in different ways to the basic premise of the question, so let me make it a little more specific: I am not asking if hydrogen atoms ACTUALLY EXIST in a particular superposition of these states. (I may ask that in another question.) What I am asking here is IF you take (just to be specific) a 50-50 superposition of the 1s and 2p states, and apply Maxwell's equations to the oscillating charge, AND you assume that as the atom radiates the probability drains from the excited state to the ground state in such a way as to maintain conservation of energy...IF you do all those things, do you get a result that is consistent with standard QM?




astrophysics - How do astronomers know what wavelength the body will emit when it is at rest?




In the book The First Three Minutes by Weinberg, at page 21, he talks about how astronomers measure the speed of a luminous body along the line of sight using the Doppler effect: the fractional change in the wavelength of the incoming light is proportional to the ratio of the body's speed to $c$. But to use this technique, we need to know the original wavelength, i.e. the wavelength of the emitted light when the body is at rest; without it we cannot talk about any increase in the wavelength. So how do astronomers get this information prior to the measurement?



Answer



As an addition to the correct answer of @flippiefanus, consider the element sodium.


When excited at low pressure by an electric arc, sodium vapour emits a complex spectrum of discrete wavelengths, an atomic emission spectrum, dominated by two intense emission lines with slightly different wavelengths: one at $588.9950$ nanometres and the second at $589.5924$ nanometres. If you have ever tossed some salt (or salt water) into a Bunsen burner flame, or seen a low-pressure sodium street light, you've seen these two wavelengths.


In addition, if you pass a continuous spectrum through a cloud of unexcited sodium vapour, these same two wavelengths will be strongly absorbed in an atomic absorption spectrum.


The two wavelengths above have been very precisely measured for sodium atoms at rest in the lab framework. In addition, the ratio of the two wavelengths has been calculated: $1.00101427$.


If the sodium is moving towards or away from the observer at some unknown speed, the two emission lines will both be Doppler shifted by the same factor, but the ratio will stay the same!


So, if an astronomer takes a spectrum of a distant star and sees two very close, strong, emission or absorption lines, he/she will calculate the ratio of the two wavelengths. If the result is the same as the ratio above, then the original wavelengths are known, and the observed wavelengths, via the Doppler shift, will produce a velocity of recession or approach.
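A sketch of that procedure in code (my own illustration; the "observed" wavelengths are invented, corresponding to a shift factor of 1.0001):

```python
# Identify the sodium doublet by its wavelength ratio, then read off the
# non-relativistic Doppler velocity from the shift.
REST_D2 = 588.9950e-9  # m, rest wavelength of the first sodium line
REST_D1 = 589.5924e-9  # m, rest wavelength of the second sodium line
C = 2.998e8            # speed of light, m/s

# hypothetical lines measured in a stellar spectrum
obs_short, obs_long = 589.0539e-9, 589.6514e-9

ratio = obs_long / obs_short
assert abs(ratio - REST_D1 / REST_D2) < 1e-6  # ratio matches -> it's sodium

z = obs_short / REST_D2 - 1  # fractional wavelength shift
v = z * C                    # positive -> receding
print(f"recession velocity: {v / 1000:.1f} km/s")  # ~30 km/s
```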


semiconductor physics - Why doesn't current flow in reverse biased diode?


Consider this reverse biased diode :


[image: circuit diagram of the reverse biased diode]


I read that no or only a very small current flows in a reverse biased diode, as the depletion layer widens and offers a huge resistance, so no electrons can cross it. But why do the electrons or holes need to cross the depletion layer? In the diagram above, the positive charges (holes) are moving towards the left and the current due to electrons is also towards the left, so won't the circuit be completed?



Answer



The current flows shown in the diagram are only temporary and flow only when the battery is first connected.


When you first connect the battery holes flow to the left (in your diagram) and electrons flow to the right, and the resulting charge separation creates a potential difference across the depletion layer. The flow stops when the potential difference across the depletion layer becomes equal and opposite to the battery potential. At this point the net potential difference is zero so the charges stop flowing.


Saturday, 28 March 2020

gravity - How Does Dark Matter Form Lumps?


As far as we know, the particles of dark matter can interact with each other only by gravitation. No electromagnetics, no weak force, no strong force. So, let's suppose a local slight concentration of dark matter comes about by chance motions and begins to gravitate. The particles would fall "inward" towards the center of the concentration. However, with no interaction to dissipate angular momentum, they would just orbit the center of the concentration and fly right back out to the vicinity of where they started resulting in no increase in density. Random motions would eventually wipe out the slight local concentration and we are left with a uniform distribution again.




How does dark matter form lumps?





fluid dynamics - What is the velocity area method for estimating the flow of water?


Can anyone explain to me what the Velocity Area method for measuring river or water flow is?


My guess is that the product of the cross-sectional area and the velocity of water flowing in a pipe is always constant. If the cross-sectional area of the pipe increases at a particular point, then the velocity decreases so that the product $AV$ is a constant. Am I right?


If so, how can we extend this to pipes where the water is accelerating & does not have a constant velocity? For example, the system may be under the action of gravity & hence the acceleration of the water is $g$, the acceleration due to gravity?



Answer



What you refer to is conservation of mass under some assumptions:




  • Constant density

  • A steady state flow


I'll bring us back to your equation by starting with the fundamental mass accounting for a given fluid flow. To be comprehensive, we need to recognize that velocity isn't constant over the entire area, but we will assume that it is. Take the mass flow rate to be $\dot{m}$.


$$\dot{m} = \rho V A$$


Now, if we have a steady state flow along a single flow path, then this quantity will be constant over the entire path, $\dot{m}=const$. Water in the cases you are concerned about is sufficiently incompressible so $\rho = const$. This results in your conclusion that $VA$ is constant.
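A minimal sketch of the $VA = \text{const}$ bookkeeping (the pipe dimensions are invented for illustration):

```python
import math

def velocity_after(area1_m2, v1_ms, area2_m2):
    """Continuity for constant density: rho*V*A = const, so V2 = V1*A1/A2."""
    return v1_ms * area1_m2 / area2_m2

# a pipe narrowing from 10 cm diameter to 5 cm diameter
a1 = math.pi * 0.05**2   # m^2
a2 = math.pi * 0.025**2  # m^2
v2 = velocity_after(a1, 2.0, a2)

print(f"velocity in the narrow section: {v2:.1f} m/s")  # area quarters -> 8.0
```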


Gravity may or may not shift the balance from $V$ to $A$ or vice versa. It depends on whether there are rigid boundaries to the flow. If the flow falls freely in air or flows downward in a trench (like a river), then the boundary of the fluid may change freely. If you have a pipe with a given flow area, then the velocity is fully determined by that area. Anyway, there are laws that conserve other things, like energy. So in a rigid pipe flowing downward (absent friction), the pressure will increase as you go down in elevation, which results directly from gravity.


Does Newtonian $F=ma$ imply the least action principle in mechanics?


I've learned that Newtonian mechanics and Lagrangian mechanics are equivalent, and Newtonian mechanics can be deduced from the least action principle.


Could the least action principle $\min\int L(t,q,q')dt$ in mechanics be deduced from Newtonian $F=ma$?


Sorry if the question sounds beginnerish



Answer



You also need an expression for the Lagrangian, which in classical mechanics is $$ L = T - U$$


Where $T$ is the kinetic energy and $U$ is the potential energy.


Provided that you can associate a potential $U$ to the force $\vec{F}$ such that $\vec{F} = - \vec{\nabla} U$ (such a force is said to be conservative), the principle of least action and Newton's second law are equivalent.



The demonstration for a single particle in 1D ($T = m v_x^2 /2$, $F = -dU(x)/dx$) is actually a good exercise.
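The 1D demonstration can also be checked numerically. Below is a minimal sketch (pure Python, illustrative parameters) that discretizes the action $S=\int (T-U)\,dt$ for free fall under uniform gravity and verifies that the Newtonian trajectory has a smaller action than a perturbed path with the same endpoints:

```python
import math

def action(q, dt, m=1.0, g=9.8):
    """Discretized action S = sum (T - U) dt for a path q sampled at steps dt,
    with T = m v^2 / 2 and U = m g q."""
    S = 0.0
    for i in range(len(q) - 1):
        v = (q[i + 1] - q[i]) / dt
        qm = 0.5 * (q[i + 1] + q[i])          # midpoint position
        S += (0.5 * m * v**2 - m * g * qm) * dt
    return S

N, T = 200, 1.0
dt = T / N
t = [i * dt for i in range(N + 1)]

# True path with q(0) = q(T) = 0: q(t) = (g/2) t (T - t), which solves q'' = -g.
true_path = [0.5 * 9.8 * ti * (T - ti) for ti in t]

# Perturbed path: same endpoints, plus a sine bump of amplitude eps.
eps = 0.1
perturbed = [q + eps * math.sin(math.pi * ti / T) for q, ti in zip(true_path, t)]

print(action(true_path, dt) < action(perturbed, dt))  # True: the real path extremizes S
```

Since $U$ is linear in $q$ here, the extremum is in fact a strict minimum, so any perturbation that vanishes at the endpoints raises the action.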


newtonian mechanics - How is momentum conserved in this example?



Suppose a sticky substance is thrown at wall. The initial momentum of the wall and substance system is only due to velocity of the substance but the final momentum is 0. Why is momentum not conserved?



Answer



You should also consider what the wall is attached to, which is of course the Earth. If we assume the Earth's velocity is zero after the substance is thrown, then since a force slows down the substance at the moment of impact, there is also a reaction force on the Earth with the same magnitude and opposite direction. So the Earth will gain velocity, and the final momentum of the combined Earth-and-substance system will equal the initial momentum of the substance.


We can also look at the situation in a slightly different way. When we stand on the floor and throw the substance, a friction force appears between our feet and the floor, acting on us in the throw direction. The friction force on the Earth is therefore opposite to the throw direction, so the Earth picks up speed towards the substance, too. At any moment the Earth-plus-substance system has zero momentum: the substance and the Earth move towards each other, and after the impact their speeds are zero.
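To see why the Earth's recoil goes unnoticed, here is a back-of-the-envelope sketch; the substance's mass and speed are assumed purely for illustration:

```python
m_ball = 0.5         # kg, sticky substance (assumed)
v_ball = 20.0        # m/s, throw speed (assumed)
M_earth = 5.97e24    # kg (Earth, wall, building, ...)

# Momentum conservation: after the substance sticks, the combined
# (Earth + wall + substance) system carries the original momentum.
p_initial = m_ball * v_ball
V_final = p_initial / (M_earth + m_ball)

print(f"Earth recoil speed: {V_final:.2e} m/s")  # ~1.7e-24 m/s, utterly unmeasurable
```

Momentum is exactly conserved; the recoil velocity is simply about twenty-four orders of magnitude too small to observe.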


Would a generator in vacuum/space provide electricity endlessly?


At its simplest, electricity generation is achieved by a voltage induced by a changing magnetic field. In a vacuum, in the absence of friction, would the initial spin imparted to the rotor of a generator ever come to a halt?


i.e. Would a traditional generator in space generate electricity perpetually (notwithstanding component failure etc)?



Answer



No, because the process of extracting that voltage to power a usable device will slow the rotor. In other words, under perfectly idealized conditions (which are impossible) it might spin forever, but as soon as you try to use your generator to power a device, you'll slow it down.


Friday, 27 March 2020

quantum mechanics - Continuous spectrum of hydrogen atom



I wonder if there is a nice treatment of the continuous spectrum of hydrogen atom in the physics literature--showing how the spectrum decomposition looks and how to derive it.



Answer



The term to look for is Coulomb wave. These wavefunctions are well explained in the corresponding Wikipedia article.


Depending on your mathematical background, you should be ready for a bit of a formula jolt, as these wavefunctions rely very intimately on the confluent hypergeometric function. If you want the short of it, then I can tell you that the solutions $\psi_\mathbf k^{(\pm)}(\mathbf r)$ to the continuum hydrogenic Schrödinger equation $$ \left(-\frac12\nabla^2+\frac Zr\right)\psi_\mathbf k^{(\pm)}(\mathbf r)=\frac12 k^2\psi_\mathbf k^{(\pm)}(\mathbf r) $$ with asymptotic behaviour $$ \psi_\mathbf k^{(\pm)}(\mathbf r)\approx \frac{1}{(2\pi)^{3/2}}e^{i\mathbf k·\mathbf r} \quad\text{as }\mathbf k·\mathbf r\to\mp \infty $$ are $$ \psi_\mathbf k^{(\pm)}(\mathbf r) = \frac{1}{(2\pi)^{3/2}} \Gamma(1\pm iZ/k)e^{-\pi Z/2k} e^{i\mathbf k·\mathbf r} {}_1F_1(\mp iZ/k;1;\pm i kr-i\mathbf k·\mathbf r) .$$


You can also ask for solutions with definite angular momentum (which do exist for any $m$ and $l\geq|m|$); those are detailed in the partial wave expansion section of the Wikipedia article. If you want textbooks which develop these solutions, look at




L. D. Faddeev and O. A. Yakubovskii, Lectures on quantum mechanics for mathematics students. American Mathematical Society, 2009;



and



L. A. Takhtajan, Quantum mechanics for Mathematicians, American Mathematical Society, 2008.



Hat-tip to Anatoly Kochubei for providing these references in an answer to my MathOverflow question Is zero a hydrogen eigenvalue?


condensed matter - How is Meissner effect explained by BCS theory?


Someone says we can derive the GL equations from BCS theory, which can explain the Meissner effect, but I want a clearer physical picture of this phenomenon.



Answer



The bottom line is the spontaneous symmetry breakdown from global $U(1)$ to $\mathbb{Z}_2$ and the concomitant rigidity of the omnipresent coherent phase selected by that breaking. However, both the microscopic action and the BCS ground state (3) of a superconductor possess local $U(1)$ gauge symmetry.


By rigidity, I mean something reminiscent of the restoring force felt when one tries to bend or distort a solid stick, which fundamentally originates from the translational symmetry breaking in a typical crystalline solid. Note that this global $U(1)$ symmetry breakdown does not contradict Elitzur's theorem, which forbids spontaneous breakdown of local gauge symmetry. The value of this phase in a superconducting ground state is not observable (it has maximal uncertainty), since one has to integrate over $\phi\in[0,2\pi)$ so as to recover particle-number conservation. Nonetheless, its variation in spacetime, and hence its rigidity, is not only physically observable but also crucial, especially in the Meissner effect and the Josephson effect.


The form of the BCS ground state (3) (given at the end) says that all Cooper pairs of different momenta $\vec{k}$ share exactly the same in-pair relative phase $\phi\equiv \mathrm{arg}(v_{\vec{k}}/u_{\vec{k}})$; namely, the aforesaid omnipresent coherent phase has almost the same value throughout the whole superconductor. So does the gap function $\Delta$ in the mean-field BCS interaction term $\Delta c^{\dagger}\left(x\right)c^{\dagger}\left(x\right)+\textrm{h.c.}$. This also manifests the symmetry breaking down to $\mathbb{Z}_2$, since only a $\{0,\pi\}$ phase transformation of $\{c,c^\dagger\}$ leaves the term invariant. (See also this good answer; a discussion with its author helped me correct the inaccuracies in my own.)


In the superconducting phase, one can choose the unitary gauge to set the Goldstone field $\phi(x)=0$ everywhere and arrive at some new $A'_\mu$ (physically unnecessary, but with the merit of vanishing Goldstone modes and no Faddeev-Popov ghost). Or one can instead replace the massless gauge field $A_\mu$ by a newly defined non-gauge vector field $-\frac{e^*}{c}A'_\mu\equiv \hbar\partial_\mu\phi-\frac{e^*}{c}A_\mu$ while preserving the total three degrees of freedom. Either way, the new $A'_\mu$ is manifestly massive in the effective Lagrangian. It looks as if photons coupled to a superconductor undergo a kind of explicit gauge symmetry breaking and acquire mass via this Abelian Higgs mechanism, which restrains the long-range electromagnetic field to an exponentially decaying Yukawa-type potential. This is nothing but the Meissner effect, clarified below Eq. (1).


And consequently, a macroscopic coherent quantum state with phase rigidity, just as (3) does, is constructed. Equivalently speaking, this is a phenomenon where the quantum mechanical phase reaches macroscopic dimensions, which is somewhat natural for Bosons (e.g. Bose-Einstein condensation and Bosonic superfluidity of ${}^4\mathrm{He}$), and is astonishingly also achieved via the formation of Fermions' Cooper pairs.



If you need more calculations




  • Rigidity dictates Meissner effect. As is well known, in the presence of electromagnetic field (no matter whether it is a penetrating one or a conventional one), the wavefunction gains an Aharonov-Bohm $U(1)$ phase $\mathrm{e}^{\mathrm{i}e\chi(x)/\hbar}$, wherein nonintegrable phase $\chi=\int{A_\mu\mathrm{d}x^\mu}$ might, in general, be path-dependent. What if this system possesses some rigidity of this distribution of twisting angle $\chi(x)$? In analogy to the twisting or distortion in a solid body, a macroscopic term $\int{\frac{1}{2}\kappa(\nabla\chi)^2\mathrm{d}V}$ arises in the free energy of this system. More detailed analysis in BCS theory indeed gives you a free energy increase (c.f. last section of this answer) $$ \Delta G=e^2\frac{\rho}{2m}\int{A^2}\mathrm{d}V, $$ wherein electron density $\rho=\langle\psi^*(x)\psi(x)\rangle$ (spin degrees of freedom neglected for the nonce). And as a result of this, we get one of the London's equations $$ \vec{j}_d=-\frac{\delta\Delta G}{\delta\vec{A}}=-e^2\frac{\rho}{m}\vec{A},\tag{1} $$ which is the famous $j\propto A$ relation. Combined with Maxwell Eq. $\vec{j}=\nabla\times\vec{H}$ (provided $\vec{A}$ has no temporal dependence), you can easily obtain the exponential decay of $\vec{A}$ or $\vec{B}$ inside the superconductor, that is to say, Meissner effect is mandatorily required by this rigidity. In a nutshell, superconductivity serves as a mechanism that resists the generation of Aharonov-Bohm phase due to penetrated electromagnetic field.




  • Why does the diamagnetic current persist? In quantum mechanics, the electrical current is the sum of the paramagnetic and diamagnetic currents $$ \vec{j}=\vec{j}_p+\vec{j}_d\equiv [\frac{1}{2m}(\psi^*\hat{\vec{p}}\psi-\psi\hat{\vec{p}}\psi^*)]+[-\frac{q}{m}\vec{A}\psi^*\psi].\tag{2} $$



    1. In a normal state, the presence of $\vec{A}$ also increases the free energy, but in a relatively banal way: $\Delta G=\frac{1}{2}\chi\int{(\nabla\times\vec{A})^2\mathrm{d}V}$. Together with the Maxwell equations, only a small Landau diamagnetic current is retained, $\vec{j}=-\frac{\delta\Delta G}{\delta\vec{A}}=\nabla\times\vec{M}$, where $\vec{M}$ is the local magnetization. This is because $\vec{j}_p$ and $\vec{j}_d$ in Eq. (2) cancel each other, as is straightforward to check once you notice that $\psi$ contains the aforementioned Aharonov-Bohm phase $\mathrm{e}^{\mathrm{i}e\chi(x)/\hbar}$.

    2. On the other hand, there is no such cancellation in the superconducting phase. The paramagnetic current $\vec{j}_p$ obviously contains a spatial derivative of the phase in $\psi(x)$, i.e., a kind of strain of the wavefunction. Such twisting is not energetically favoured, because of the previously discussed rigidity. With hindsight, you might even think of it in a sloppy way: rigidity repels $\vec{A}$ out; no $\vec{A}$, no Aharonov-Bohm phase in $\psi$, hence $\vec{j}_p=0$. Anyway, the diamagnetic current $\vec{j}_d$ in (1) persists in the end (partly cancelled by a nonzero $\vec{j}_p$ when $0<T<T_c$).




  • BCS theory provides the microscopic mechanism that yields this rigidity.



    1. From a field-theoretic point of view of BCS theory, we can introduce auxiliary bosonic fields $\Delta\equiv |\Delta(x)|\mathrm{e}^{\mathrm{i}\theta(x)},\bar{\Delta},\varphi$ so as to perform a Hubbard-Stratonovich transformation on the BCS action $S$. Afterwards, part of the action reads $\frac{1}{2m}\sum_\sigma{\int{(\nabla\theta(x))^2\bar{\psi}_\sigma(x)\psi_\sigma(x)\mathrm{d}V}}$, wherein $\psi$ is the original fermionic field; this conspicuously manifests the rigidity of the phase $\theta$. Indeed, $\Delta$ corresponds to the superconducting gap, or the order parameter. Further, after tedious manipulations and approximations to construct an effective low-energy theory of $\varphi$ and $\theta$, we can directly calculate that the paramagnetic current density correlation function $\langle j_p^\alpha(x)j_p^\beta(0)\rangle$ vanishes when $T=0$, which endorses our previous discussion in section 2. You can even see that this is due to the existence of the gap $\Delta$.

    2. To connect the aforesaid phase $\theta$ of $\Delta$ with the phase of wavefunction, we might turn to the BCS ground state $$ \vert\Psi_{\mathrm{BCS}}(\phi)\rangle=\prod_{\vec{k}}{(|u_{\vec{k}}|+|v_{\vec{k}}|\mathrm{e}^{\mathrm{i}\phi}c_{\vec{k}\uparrow}^\dagger c_{-\vec{k}\downarrow}^\dagger)\vert 0\rangle}.\tag{3} $$ In either BCS's original variational calculation or Bogoliubov transformation approach, this relative phase $\phi\equiv \mathrm{arg}(v_{\vec{k}}/u_{\vec{k}})$ is always directly related to gap $\Delta$ because of $\Delta_\vec{k}^*v_\vec{k}/u_\vec{k}\in \mathbb{R}$. At this stage, we can once again say that the wave function becomes solid and no strain occurs, therefore $\vec{j}_p$ does not emerge.
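As a numerical illustration of the exponential decay implied by Eq. (1), the sketch below estimates the London penetration depth $\lambda=\sqrt{m/(\mu_0 n e^2)}$ and the field a few depths inside the material; the carrier density $n$ and the applied field are illustrative assumptions:

```python
import math

# Combining j = -(n e^2 / m) A  (London, Eq. 1) with Ampere's law gives
# B(x) = B0 * exp(-x / lam), with penetration depth lam = sqrt(m / (mu0 n e^2)).
m_e = 9.109e-31      # kg, electron mass
e = 1.602e-19        # C, elementary charge
mu0 = 4e-7 * math.pi # T*m/A, vacuum permeability
n = 1e28             # m^-3, ASSUMED superfluid electron density

lam = math.sqrt(m_e / (mu0 * n * e**2))
print(f"London penetration depth: {lam*1e9:.0f} nm")

B0 = 0.01  # T, applied field (assumed)
for x_nm in (0, 50, 150, 500):
    B = B0 * math.exp(-x_nm * 1e-9 / lam)
    print(f"B({x_nm:4d} nm) = {B:.2e} T")
# The field is negligible a few penetration depths in: the Meissner effect.
```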





semiconductor physics - Conductivity of a crystalline solid


In a crystalline solid each atomic level 'splits' into n levels (n = number of atoms in the system). When the number of atoms is large each level becomes replaced by a band of closely spaced levels.


In a semiconductor we have an empty "conduction band" and a fully occupied "valence band". Conductivity arises because electrons get excited to the conduction band.


Question: Why can't electrons in the valence band freely move around and therefore conduct electricity? My question also applies to metals, where the conduction band is already half-filled. What's special about this conduction band that allows electrons to move around freely?



Answer




Every energy level in a band has an associated momentum, and the total momentum of all the levels in a band is zero. Because the total momentum is zero there can be no net movement of electrons and hence no current. In effect, for every electron with momentum $p$ there is another electron with momentum $-p$ and they cancel each other out.


You can't change the momentum of any of the electrons in a filled band by applying an external field because all the energy levels are full. There are no empty levels for you to move your electron into. That means an external field cannot cause a net movement of electrons.


When you excite an electron into the conduction band it will go into a low momentum state, but there are available states above it with higher momenta. Apply an external field and you will move the electron up into a state with higher momentum that is lined up with the field. In this state the electron has a velocity aligned with the field so there is a net movement of electrons and therefore a current.


The same effect causes a current in metals. There are empty states for the electrons in the band to move into.


units - Why do universal constants have the values they do?


This is meant to be a generic question of the type that we get repeatedly on this site, in different versions:



Why do universal constants have the values they do? Can we predict their values theoretically? Do they change over time? How would the world be different if a particular constant had a different value?




Thursday, 26 March 2020

newtonian mechanics - Boundary conditions on wave equation



[image: wave equation question]


I am having trouble understanding the boundary conditions.


From the solutions, the first is that $D_1(0, t) = D_2(0, t)$ because the rope can't break at the junction.


The second is that $\dfrac{\partial D_1}{\partial x}(0, t) = \dfrac{\partial D_2}{\partial x}(0, t)$. How can I interpret this physically? I'm not quite sure how to think about $\partial D /\partial x$.



Answer



The second condition is saying that there is no discontinuity in the slope of the rope at the junction. In other words, there is no "kink" in the rope.


Imagine if this assumption were to fail in the following way: $$ \frac{\partial D_1}{\partial x}(0,t) = -1, \qquad \frac{\partial D_2}{\partial x}(0,t) = 1 $$ Then near the origin, the rope would look like the function $f(x) = |x|$ does at the origin; there would be a "triangular kink" in the rope facing upward.


Addendum. Why can't there be a kink? In response to Nathaniel, here's why there can't be a kink. We argue by way of contradiction.


Suppose that there were a kink, and consider a small mass element centered on the junction. In the presence of a kink, the tensions on either side of the junction would point in different directions, so there would be a net force on the small mass element. Now take the size of that mass element to zero: the net force remains finite while the mass vanishes, so the acceleration would have to diverge, contradicting Newton's second law.
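The contradiction can be made quantitative with a quick sketch; the tension, mass density, and slopes below are hypothetical numbers, and the small-slope approximation is used for the transverse force:

```python
# If the slopes differed across the junction, the net transverse force on a
# tiny element straddling it would stay finite while its mass vanished,
# forcing an unbounded acceleration.
T = 10.0    # N, tension (assumed)
mu = 0.1    # kg/m, linear mass density (assumed)
slope_left, slope_right = -1.0, 1.0   # the hypothetical "kink"

for eps in (1e-2, 1e-4, 1e-6):          # half-width of the element (m)
    F = T * (slope_right - slope_left)  # net transverse force (small-slope approx.)
    m = mu * 2 * eps                    # mass of the element
    print(f"eps={eps:.0e}  a = F/m = {F/m:.1e} m/s^2")
# The acceleration grows without bound as eps -> 0, so the kink is impossible.
```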


electromagnetism - Electric charge attraction, unlike magnet attraction, can be neutralized, right?


If I put a magnet in a box of nails, it attracts lots of nails. If I put a hydrogen atom missing its electron (a hydron) near a bunch of electrons, it attracts exactly one electron and then it stops being attractive to the other electrons. This means that the attractiveness of electric charges can be neutralized whereas the attractiveness of magnets cannot, right?


I’m asking this because I’m thinking about modeling electricity using magnets but I realized that a limit of that model is that you can’t have two “neutralized” magnets (one representing the proton in a hydrogen atom and one representing the electron) not be attractive to a third magnet (representing another electron) passing nearby.




quantum mechanics - Conservation of energy and wavefunctions


I'd appreciate a bit of clarification on how conservation of energy works in QM.


The infinite square well has a set of stationary states, each corresponding to one of the discrete energy levels of the well. A particle in a particular state can be represented as a sum of these stationary states. The expectation value of the Hamiltonian, $\langle H \rangle$, of this particle does not change with time. The state however (by virtue of being a sum of multiple stationary states) has a non-zero energy uncertainty.


Measuring the energy of the particle collapses its wavefunction into one of the stationary states, with the corresponding energy value.


This bothers me. While $\langle H \rangle$ is constant, the "actual" energy of the particle (upon measurement) could vary greatly. This seems like a violation of energy conservation. [And while I've posed this question in terms of the infinite well, any wavefunction composed of non-degenerate stationary states seems like a violation to me.]


So, is the particle entangled with some other particle (through whatever process put it in the well in the first place) such that the net energy in some larger system is constant? Or is energy only conserved in larger systems (which are comprised of so many particles as to approach $\langle H \rangle$)? Or what am I misunderstanding?



Answer




That's a great question! Indeed, in quantum mechanics, energy is only defined in expectation, and conservation of energy refers solely to expectation values. It's straightforward to show from the Schrodinger equation that this expectation value is constant in time.


However, measurement does not break conservation of energy as long as we keep track of the energy of the measurement apparatus. This has to be true because we can always consider the system plus the measurement apparatus as one joint quantum system, whose energy we know is conserved.


As a specific example, suppose we measure the position of a low-energy particle by firing a photon at it. If we measure the position very accurately, then the wavefunction collapses to a sharp spike, which has a very high energy which wasn't present in the original particle. This energy must have been absorbed from the incoming photon -- and indeed, by the de Broglie formula, $\lambda = h/p$, a high-precision position measurement can only be done using a high energy photon.
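A rough numerical version of that accounting, using $E = hc/\lambda$ with the photon wavelength taken to be of order the desired position resolution (the resolutions below are arbitrary choices):

```python
# Localizing a particle to within dx requires a probe photon of wavelength
# ~ dx, whose energy grows as dx shrinks.
h = 6.626e-34   # J*s, Planck constant
c = 2.998e8     # m/s, speed of light

for dx in (1e-6, 1e-9, 1e-12):   # desired position resolution (m)
    E = h * c / dx               # photon energy, E = hc/lambda with lambda ~ dx
    print(f"dx = {dx:.0e} m  ->  photon energy ~ {E/1.602e-19:.3g} eV")
# A picometre-scale measurement needs ~MeV photons: the energy of the
# sharply localized post-measurement state comes from the probe, not from nowhere.
```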




Some subtleties arise if the measurement device is macroscopic. For example, let's say that the photon in the previous example has a chance of missing, so the quantum state is a superposition $$|\text{low energy particle and high energy photon} \rangle + |\text{high energy particle and low energy photon} \rangle.$$ The (expectation value) of the energy is the same as before at this point. Now suppose we use a high-energy photon detector and read off the result in the lab, giving a superposition of $$|\text{low energy particle, you see '1' on detector} \rangle + |\text{high energy particle, you see '0' on detector} \rangle.$$ At this point energy is still perfectly conserved if you account for the energy of everything in each branch, but if you subscribe to the Copenhagen interpretation, you might say this state is unacceptable because macroscopic observers can't be in superpositions. In other words, you say the state must be $$|\text{low energy particle, you see '1' on detector} \rangle \text{ or } |\text{high energy particle, you see '0' on detector} \rangle$$ rather than a superposition.


This is also fine. The problem is that one is then tempted to remove oneself and the detector from the picture entirely, arriving at $$|\text{low energy particle} \rangle \text{ or } |\text{high energy particle} \rangle$$ which are two states with different energy. This thinking led the pioneers of quantum mechanics to propose that energy was conserved only on average (e.g. in BKS theory, see here).


This is how it's usually presented in introductory textbooks, but that is only done for simplicity. When you think about it, it's a profoundly unphysical picture -- it basically treats measurements as occurring magically, without any physical cause. Modern-day users of the Copenhagen interpretation don't use it in this unphysical way, and we have known for almost a century that energy non-conserving theories like BKS are completely wrong. Energy is exactly conserved in all genuine interpretations of quantum mechanics.


angular momentum - Torque in a non-inertial frame


How can we calculate torque in a non-inertial frame? Take for instance a bar in free fall with two masses, $M_1$ and $M_2$, one on either end, with $M_1\neq M_2$ so that the geometric center is not the center of mass. Taking the point of rotation to be that center, what is the proper way of analyzing the situation to come to the conclusion that there is no rotation?



Answer




Follow the rules of motion:



  1. Sum of forces equals mass times acceleration of the center of mass: $$ \sum_i \vec{F}_i = m \vec{a}_{cm} $$

  2. Sum of torques about the center of mass equals the change in angular momentum: $$ \sum_i (\vec{M}_i + \vec{r}_i \times \vec{F}_i) = I_{cm} \dot{\vec{\omega}} + \vec{\omega} \times I_{cm} \vec{\omega}$$ where $\vec{r}_i$ is the location of force $\vec{F}_i$ relative to the center of mass and $\vec{M}_i$ is any applied pure moment.


So for an accelerating rigid body that is not rotating ($\dot{\vec{\omega}} = \vec{\omega} = 0$), the right-hand side of the last equation must be zero.


See https://physics.stackexchange.com/a/80449/392 for a complete treatment of how you go from linear/angular momentum to the equations of motion.


Also see https://physics.stackexchange.com/a/82494/392 for a similar situation where a force is applied away from the center of mass.


The rules that come out of the above equations of motion are:





  1. If the net torque about the center of mass is zero then the body will purely translate

  2. If the sum of the forces on a body is zero (but not the net torque) then the body will purely rotate about its center of mass.
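Applying these rules to the free-falling bar from the question, here is a quick sketch; the masses and length are illustrative, and the bar itself is assumed massless:

```python
# Free-falling bar with unequal end masses: about the CENTER OF MASS the
# gravitational torque vanishes, so the bar does not start to rotate.
M1, M2 = 1.0, 3.0   # kg, end masses (assumed)
L = 2.0             # m, bar length (assumed, bar massless)
g = 9.8             # m/s^2

x1, x2 = 0.0, L     # positions of the masses along the horizontal bar
x_cm = (M1 * x1 + M2 * x2) / (M1 + M2)

# Each weight acts straight down, so its torque about a point is
# (horizontal lever arm) * (downward force):
tau_cm = (x1 - x_cm) * (-M1 * g) + (x2 - x_cm) * (-M2 * g)
print(f"net torque about COM: {tau_cm:.1e} N*m")   # zero (up to rounding)

# About the geometric center the torque is NOT zero ...
x_mid = L / 2
tau_mid = (x1 - x_mid) * (-M1 * g) + (x2 - x_mid) * (-M2 * g)
print(f"net torque about midpoint: {tau_mid:.1f} N*m")
# ... but the midpoint is an accelerating, non-COM point, so the naive
# tau = I * alpha does not apply there; the COM equation gives alpha = 0.
```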



feynman diagrams - What's the difference between t-channel and s-channel in particle physics


As the Feynman diagrams show, do the s-channel and t-channel stand for exactly the same reaction, or is there a big difference between them?


[image: s-channel and t-channel Feynman diagrams]




thermodynamics - Is it possible to "cook" pasta at room temperature with low enough pressure?


It is a known fact that the boiling point of water decreases as pressure decreases, so there is a pressure at which water boils at room temperature. Would it be possible to cook e.g. pasta at room temperature in a vacuum chamber at low enough pressure?


Or "magic" of cooking pasta is not in boiling and we would be able to cook pasta at 100°C without boiling water (at high pressure)?



Answer



No. Boiling itself doesn't mean that the water will cook anything. If you had boiling water at 30°C you could touch it (setting aside that it's at really low pressure) and nothing would happen. It is temperature that cooks, not boiling.


In fact, if you want to purify water at high altitudes, you need to boil water for a longer time because it will be at a lower temperature.
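A rough Clausius-Clapeyron estimate shows how low the pressure must be for water to boil at room temperature. This sketch assumes a constant latent heat of vaporization, which is only approximate (the measured vapor pressure at 25 °C is about 3.2 kPa):

```python
import math

# Clausius-Clapeyron with constant latent heat:
# P(T) = P0 * exp(-(L/R) * (1/T - 1/T0))
L_molar = 40.7e3            # J/mol, latent heat of vaporization (approx.)
R = 8.314                   # J/(mol*K), gas constant
T0, P0 = 373.15, 101325.0   # normal boiling point (K) and pressure (Pa)

def boiling_pressure(T):
    """Pressure at which water boils at temperature T (K)."""
    return P0 * math.exp(-(L_molar / R) * (1.0 / T - 1.0 / T0))

P_room = boiling_pressure(298.15)
print(f"water boils at 25 C near {P_room/1000:.1f} kPa "
      f"({P_room/101325*100:.1f}% of 1 atm)")
# Even though it boils, the water is still only at 25 C --
# far too cold to cook pasta.
```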


everyday life - Why does hot water clean better than cold water?


I had a left over coffee cup this morning, and I tried to wash it out. I realized I always instinctively use hot water to clean things, as it seems to work better.


A Google search showed that other people get similar results, but this Yahoo answer is a bit confusing in terms of hot water "exciting" dirt.


What is the physical interaction between hot water and oil or a material burnt onto another vs cold water interaction?




Answer



The other answers are correct, but I think that you might benefit from a more "microscopic" view of what is happening here.


Whenever one substance (a solute) dissolves in another (a solvent), what happens on the molecular scale is that the solute molecules are surrounded by the solvent molecules.


What causes that to happen? As @Chris described, there are two principles at work - thermodynamics, and kinetics.


In plain terms, you could think of thermodynamics as an answer to the question "how much will dissolve if I wait for an infinite amount of time," whereas kinetics answers the question "how long do I have to wait before X amount dissolves." Both questions are not usually easy to answer on the macroscopic scale (our world), but they are both governed by two very easy to understand principles on the microscopic scale (the world of molecules): potential and kinetic energy.



On the macroscopic scale, we typically only think about gravitational potential energy - the field responsible for the force of gravity. We are used to thinking about objects that are high above the earth's surface falling towards the earth when given the opportunity. If I show you a picture of a rock sitting on the surface of the earth:


Example of a potential energy surface


And then ask "Where is the rock going to go?" you have a pretty good idea: it's going to go to the lowest point (we are including friction here).


On the microscopic scale, gravitational fields are extremely weak, but in their place we have electrostatic potential energy fields. These are similar in the sense that things try to move to get from high potential energy to lower potential energies, but with one key difference: you can have negative and positive charges, and when charges have the opposite sign they attract each other, and when they have the same sign, they repel each other.



Now, the details of how each individual molecule gets to have a particular charge are fairly complicated, but we can get away with understanding just one thing:


All molecules have some attractive potential energy between them, but the magnitude of that potential energy varies by a lot. For example, the force between the hydrogen atom on one water molecule ($H_2O$) and the oxygen atom on another water molecule is roughly 100 times stronger than the force between two oxygen molecules ($O_2$). This is because the charge difference on water molecules is much greater (about 100 times) than the charge difference on oxygen molecules.


What this means is we can always think of the potential energy between two atoms as looking something like this:


[figure: potential energy curve seen by an atom approaching a stationary atom]


The "ghost" particle represents a stationary atom, and the line represents the potential energy "surface" that another atom would see. From this graph, hopefully you can see that the moving atom would tend to fall towards the stationary atom until it just touches it, at which point it would stop. Since all atoms have some attractive force between them, and only the magnitude varies, we can keep this picture in our minds and just change the depth of the potential energy "well" to make the forces stronger or weaker.



Let's modify the first potential energy surface just a little bit:


Kinetically trapped rock


Now if I ask "Where is the rock going to go?", it's a little bit tougher to answer. The reason is that you can tell the rock is "trapped" in the first little valley. Intuitively, you can probably see that if it had some velocity, or some kinetic energy, it could escape the first valley and would wind up in the second. Thinking about it this way, you can also see that even in the first picture, it would need a little bit of kinetic energy to get moving. You can also see that if either rock has a lot of kinetic energy, it will go past the deeper valley and wind up somewhere past the right side of the image.


What we can take away from this is that potential energy surfaces tell us where things want (I use the term very loosely) to go, while kinetic energy tells us whether they are able to get there.



Let's look at another microscopic picture:


[figure: two atoms resting at the bottom of their mutual potential energy well]


Now the atoms from before are at their lowest potential energy. In order for them to come apart, you will need to give them some kinetic energy.


How do we give atoms kinetic energy? By increasing the temperature. Temperature is directly related to kinetic energy - as the temperature goes up, so does the average kinetic energy of every atom and molecule in a system.


By now you might be able to guess how increasing the temperature of water helps it to clean more effectively, but let's look at some details to be sure.



We can take the microscopic picture of potential and kinetic energies and extract two important guidelines from it:



  1. All atoms are "sticky," although some are stickier than others

  2. Higher temperatures mean that atoms have larger kinetic energies



Going back to the coffee cup question, all we need to do now is think about how these will play out with the particular molecules you are looking at.


Coffee is a mixture of lots of different stuff - oils, water-soluble compounds, burnt hydrocarbons (for an old coffee cup), etc. Each of these things has a different "stickiness." Oils are not very sticky at all - the attractive forces between them are fairly weak. Water-soluble compounds are very "sticky" - they attract each other strongly because they have large charges. Since water molecules also have large charges, this is what makes water-soluble compounds water-soluble - they stick to water easily. Burnt hydrocarbons are not very sticky, sort of like oils.


Since molecules with large charges tend to stick to water molecules, we call them hydrophilic - meaning that they "love" water. Molecules that don't have large charges are called hydrophobic - they "fear" water. Although the name suggests they are repelled by water, it's important to know that there aren't actually any repelling forces between water and hydrophobic compounds - it's just that water likes itself so much, the hydrophobic compounds are excluded and wind up sticking to each other.


Going back to the dirty coffee cup, when we add water and start scrubbing, a bunch of stuff happens:


Hydrophilic Compounds


Hydrophilic compounds dissolve quickly in water because they stick to water pretty well compared to how well they stick to each other and to the cup. In the case where they stick to each other or the cup better than water, the difference isn't huge, so it doesn't take much kinetic energy to get them into the water. So, warm water makes them dissolve more easily.


Hydrophobic Compounds


Hydrophobic compounds (oils, burnt stuff, most stains) don't stick to the water. They stick to each other a little bit (remember that these forces are much weaker than water's, since the charges are very small), but water sticks to itself so well that the oils don't have a chance to get between the water molecules. We can scrub them, which provides enough energy to knock them loose and allow the water to carry them away. If we also increase the kinetic energy by raising the water temperature, we overcome the weaker forces holding the hydrophobic compounds together, while simultaneously giving the water molecules more mobility, so they can move apart and let the hydrophobic compounds in. And so, warmer water makes it easier to wash away hydrophobic compounds as well.


Macroscopic View



We can tie this back to the original thermodynamics vs. kinetics discussion. If you increase the temperature of the water, the answer to the question "How much will dissolve" is "more." (That was the thermodynamics part). The answer to "How long will it take" is "not as long" (kinetics).


And as @anna said, there are other things you can do to make it even easier. Soap for example, is made of long chain molecules with one charged end and one uncharged end. This means one end is hydrophilic, while the other end is hydrophobic. When you add soap to the picture, the hydrophilic end goes into the water while the hydrophobic end tries to surround the oils and burnt stuff. The net result is little "bubbles" (called micelles) made up of soap molecules surrounding hydrophobic molecules that are in turn surrounded by water.


momentum - Hammer vs large mass on nail


Why is a hammer more effective in driving a nail than a large mass resting over the nail ?


I know this has to do with momentum, but I can't figure it out.




Wednesday, 25 March 2020

electromagnetism - Can low-frequency electromagnetic radiation be ionizing?


I've read from several sources that electromagnetic radiation begins to have an "ionizing" effect right around the time the frequency passes the uv spectrum and into x-ray/gamma ray spectrum. [1] [2] [3]


The reasoning given for this is that the higher frequency waves contain more energy, enough to tear apart molecular bonds.


When I compare this to sound waves it makes sense because high pitched sounds are more damaging to human ears than low pitched sounds are. [4]


However just because a high pitched sound may cause you to go deaf more easily, this doesn't mean my ears would enjoy standing 3 feet away from a 12,000-watt sub-woofer playing a low pitched sound.


In other words, I understand high-frequency waves contain more energy by nature, but if you ramp up the amplitude of the low-frequency waves they can start to do harm too.


So with electromagnetic radiation, is there a point at which I could, say, produce infrared waves that would also be ionizing? Or is there something inherently different about high-frequency EM waves that causes the ionizing effect?



Answer



Background:


Einstein's photoelectric effect theory won him the Nobel prize, and it relates very closely to this. Although it's different from photoionisation, it relies on similar ideas.



His proposition: each atom will absorb the energy of one photon, and the energy of a photon is given by $$E=h\nu$$ where $h$ is Planck's constant, a really, really tiny number ($6.62607004 \times 10^{-34}\ \mathrm{m^2\,kg/s}$), and $\nu$ is the frequency of the light. Higher-intensity light, which is analogous to wave amplitude, contains more photons, but the energy of each photon is the same for a given frequency.


If I shine low-frequency high-intensity light on a surface, there's plenty of energy, but each atom, upon absorbing one photon, won't be able to lose an electron. However, if I shine high-frequency light, even if the intensity is low, each atom which absorbs a photon will be able to lose an electron, and we see ionization.
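To put numbers on this, here is a minimal sketch (not part of the original answer; the frequencies and the 13.6 eV hydrogen ionization energy are illustrative choices):

```python
# Compare single-photon energies E = h*nu with a typical ionization
# energy (13.6 eV for hydrogen). Frequencies below are illustrative.
H = 6.62607004e-34    # Planck's constant, J*s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(freq_hz):
    """Energy of one photon of the given frequency, in eV (E = h*nu)."""
    return H * freq_hz / EV

for label, nu in [("infrared, 3e13 Hz", 3e13),
                  ("visible, 5e14 Hz", 5e14),
                  ("extreme UV, 3.3e15 Hz", 3.3e15)]:
    e = photon_energy_ev(nu)
    print(f"{label}: {e:.2f} eV, can ionize hydrogen: {e >= 13.6}")
```

No matter how intense the infrared beam, each individual photon still carries only about a tenth of an electron-volt, which is exactly the single-photon point made above.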


But when the intensity is high enough, even low frequencies will cause ionization.


This phenomenon, called multi-photon ionization, occurs when the atom absorbs more than one photon. It's usually pretty rare, because an atom typically re-emits the energy before it has absorbed enough photons in total, but at high enough intensities it becomes appreciable.


Sound works differently in air: we generally don't say it's quantized in the same way, although if you examine it more minutely, you'll see that it can be quantized as phonons, which aren't evident in gases. But that's not relevant to your question, it's just something to keep in mind if you want to generalize a bit more by discussing sound in condensed matter.


Oddly enough, the parallel to sound and hearing loss is wrong! See this Biology SE question... high-frequency sounds are dangerous not particularly because they have more energy (which they do, see the equation for sound energy in a container), but because of the nature of the human ear and the alignment of hairs in it.


newtonian mechanics - How to model a rising helium balloon?


I'm trying to model the ascent of a helium-filled weather balloon from 0 km to 25 km altitude. The plan is to eventually use a python script to calculate the time taken to reach 25 km. However, I don't really know where to start.


I have worked out an expression for acceleration in terms of the balloon's volume and the density of the surrounding air. I now need to find a way of calculating the volume at a given altitude so that I can model the acceleration throughout the ascent.


So if anyone could help me with this I would greatly appreciate it.




Answer



Both temperature and pressure variation with altitude are given here. You can use the ideal gas law to get the volume as a function of altitude from these.
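A minimal sketch of that recipe (the ISA-style two-layer atmosphere model, the function names, and the 400 mol helium fill are assumptions of this example, not taken from the answer):

```python
# Volume vs. altitude for a fixed amount of helium via the ideal gas law
# V = nRT/P, assuming the envelope keeps balloon pressure ~ ambient.
# Atmosphere: linear lapse rate to 11 km, isothermal above (ISA-style).
import math

R = 8.314462618            # gas constant, J/(mol*K)
G, M = 9.80665, 0.0289644  # gravity (m/s^2), molar mass of air (kg/mol)
P0, T0, LAPSE = 101325.0, 288.15, 0.0065  # sea-level values, lapse K/m

def atmosphere(h):
    """Return (pressure in Pa, temperature in K) at altitude h metres."""
    if h <= 11000:
        T = T0 - LAPSE * h
        P = P0 * (T / T0) ** (G * M / (R * LAPSE))
    else:  # isothermal layer above the tropopause
        T = T0 - LAPSE * 11000
        P11 = P0 * (T / T0) ** (G * M / (R * LAPSE))
        P = P11 * math.exp(-G * M * (h - 11000) / (R * T))
    return P, T

def balloon_volume(h, n_helium):
    """Balloon volume in m^3 at altitude h for n_helium moles of gas."""
    P, T = atmosphere(h)
    return n_helium * R * T / P

n = 400.0  # moles of helium; an arbitrary example fill
print(balloon_volume(0, n), balloon_volume(20000, n))
```

The volume grows by more than an order of magnitude between sea level and 20 km, which is why the expansion term dominates the acceleration model.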


homework and exercises - What is the distance between two objects in space as a function of time, considering only the force of gravity?



What is the distance between two objects in space as a function of time, considering only the force of gravity? To be specific, there are no other objects to be considered and the objects in question are not rotating.


For instance, say you have two objects that are 6 million miles apart. One is 50,000 kg and the other is 200 kg. Say I want to know how much time has passed when they are 3 million miles apart. How would I go about doing that?


EDIT: Looking at the other question, I am having trouble following David Z's steps in his answer. Intermediate steps would be helpful. In particular, I don't see how the integration step works. I also don't understand why the initial value $r_i$ remains a variable after its derivative has been set to 0; wouldn't the integral of that derivative (i.e. the function $r_i$) be $0 + C$? I also don't see how you wind up with a term that includes 2 under a square root sign.



I can not ask for the intermediate steps on the question itself because I do not have the reputation points.


I think it probably answers my question or will once I understand it, but I am not sure.


EDIT: I can sort of understand the integration step. But it seems like he is integrating with respect to two different variables on the two sides: $r$ on the left and the derivative of $r$ on the right. There must be something I'm missing here.




gravity - In what limit does string theory reproduce general relativity?




In quantum mechanical systems which have classical counterparts, we can typically recover classical mechanics by letting $\hbar \rightarrow 0$. Is recovering Einstein's field equations (conceptually) that simple in string theory?



Answer



To recover Einstein's equations (sourceless) in string theory, start with the following world sheet theory (Polchinski vol 1 eq 3.7.2): $$ S = \frac{1}{4\pi \alpha'} \int_M d^2\sigma\, g^{1/2} g^{ab}G_{\mu\nu}(X) \partial_aX^\mu \partial_bX^\nu $$ where $g$ is the worldsheet metric, $G$ is the spacetime metric, and $X$ are the string embedding coordinates. This is an action for strings moving in a curved spacetime. This theory is classically scale-invariant, but after quantization there is a Weyl anomaly measured by the non-vanishing of the beta functional. In fact, one can show that to order $\alpha'$, one has $$ \beta^G_{\mu\nu} = \alpha' R^G_{\mu\nu} $$ where $R^G$ is the spacetime Ricci tensor. Notice that now, if we enforce scale-invariance at the quantum level, then the beta function must vanish, and we reproduce the vacuum Einstein equations: $$ R_{\mu\nu} = 0 $$ So in summary, the Einstein equations can be recovered in string theory by enforcing scale-invariance of a worldsheet theory at the quantum level!


particle physics - status of +4/3 scalar as explanation of $t\bar t$ asymmetry


One of the early proposals for the Tevatron asymmetry in $t \bar t$ was a "fundamental diquark" with charge (and hypercharge) +4/3, in either a colour triplet or a colour sextet. I am interested in the current status of this proposal:


Generically, has some particular model been favoured in subsequent research?



For starters, you can see the short report in JHEP 1109:097,2011, at the end of section 2, points 5 and 6. I only became aware of the survival of this hypothesis after seeing http://arxiv.org/abs/1111.0477 last week.




Tuesday, 24 March 2020

thermodynamics - In a Monte Carlo $NVT$ simulation how do I determine equilibration?


I'm running an NVT (constant number of particles, volume and temperature) Monte Carlo simulation (Metropolis algorithm) of particles in two dimensions interacting via the Lennard-Jones potential ($U = 4(\frac{1}{r^{12}} - \frac{1}{r^6})$, in reduced units). Boundary conditions are periodic.


From this simulation I'm calculating the instantaneous pressure and potential energy. In the first steps the system is not in equilibrium, so I need to start averaging only after the system has reached equilibrium.


I'm starting my simulation from a random configuration.


My question: even after the system has reached equilibrium, it fluctuates around that equilibrium, and these fluctuations may be large at high temperatures. So how do I know that I have reached equilibrium?


Here are some examples of the curves (warmer color is higher density):


[Figure: energy vs. simulation step at high temperature]


[Figure: energy vs. simulation step at low temperature]


[Figure: energy vs. simulation step at high temperature, only for low densities; in this graph it's harder to tell whether equilibrium has been reached]
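One common practical heuristic (a sketch, not part of the original post; the function and its thresholds are assumptions) is to split the time series into blocks and take the run as equilibrated from the first block whose mean agrees with the final block's mean to within the fluctuation scale:

```python
import statistics

def equilibration_start(series, n_blocks=10, tol=1.0):
    """Index of the first block whose mean matches the last block's mean
    to within `tol` standard deviations of the last block.
    A rough heuristic, not a rigorous equilibration test."""
    size = len(series) // n_blocks
    blocks = [series[i * size:(i + 1) * size] for i in range(n_blocks)]
    ref_mean = statistics.fmean(blocks[-1])
    ref_sd = statistics.stdev(blocks[-1])
    for i, block in enumerate(blocks):
        if abs(statistics.fmean(block) - ref_mean) <= tol * ref_sd:
            return i * size
    return len(series)  # never equilibrated by this criterion

# Toy series: exponential relaxation followed by a fluctuating plateau
data = [0.8 ** k for k in range(40)] + [0.01, -0.01] * 80
print(equilibration_start(data))
```

More careful approaches compare statistically independent block averages using the autocorrelation time, but the idea is the same: discard data until the running average stops drifting relative to the fluctuations.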




newtonian mechanics - What is the maximal height for a water rocket's flight?


A water rocket works like this: there is a circular slot of area $A_1$ at the bottom centre of a cylinder of cross-sectional area $A_0$ and height $L$ that is filled with water to an initial height $h_0$. This slot will fall away during launch. The water has pushed all of the air that was originally in the cylinder into the top $L-h_0$ of the cylinder (I believe this is an isothermal compression, assuming it was done slowly), so the air is at the higher pressure $\frac{L}{L-h_0}P_0$, where $P_0$ is atmospheric pressure.




To launch, the slot is instantaneously removed (leaving a hole of area $A_1$ in the bottom of the cylinder), and water is pushed downwards, as the air pressure is higher inside the cylinder than outside, at a speed $u(t)$. There is no sloshing of the water in the cylinder: the body of water remains cylindrical. Thus the air in the cylinder now takes up more volume (has expanded in an adiabatic expansion), but because of the upwards impulse imparted to the cylinder by the leaving water, the cylinder is now moving upwards with speed $v(t)$. The rocket will reach a maximum height $H_{max}(h_0)$, where $h_0$ is the original height of the water. What $h_0$ will give the maximum value of $H_{max}(h_0)$ for fixed $A_0, A_1, L$?


Partial solution.


In the adiabatic expansion, let $V(t)$ be the volume of the air in the rocket and $P(t)$ the pressure. As the air is mostly diatomic (Nitrogen and Oxygen are),


$$P(t)V(t)^{\frac{1+\frac{5}{2}}{\frac{5}{2}}}=k$$ $$P(t)V(t)^{7/5}=\frac{P_0 L}{L-h_0}\left(A_0(L-h_0)\right)^{7/5}$$ (the constant $k$ being the initial air pressure times the initial air volume $A_0(L-h_0)$ raised to the power $7/5$).


$$ \frac{dV}{dt}= A_1 u(t)$$



The change of momentum per unit time of the water being spewed out the bottom is


$$\rho A_1 u(t)\left(v(t)+u(t)\right)$$ (taking the volume expelled per unit time to be $A_1 u(t)$).




cosmology - Is dark matter around the Milky Way spread in a spiral shape (or, in a different shape)?


Dark matter doesn't interact with electromagnetic radiation, but it, at least, participates in gravitational interactions as known from the discovery of dark matter. But does dark matter exist in a spiral shape around our galaxy?



Answer



In current cosmological models, the Milky Way resides in a 'halo' of dark matter. Halo is a technical term - in this case, it means a roughly spherically symmetric collection of dark matter. Since dark matter is at most weakly self-interacting and interacts with ordinary matter essentially only through gravity, it doesn't experience collisions or friction, and therefore never flattens out into a disk the way normal (baryonic) matter does. So, dark matter does not trace out a disk and does not follow the spiral arms.
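For reference (standard material, not part of the original answer): halos in simulations are often described by the Navarro-Frenk-White (NFW) profile,

$$\rho(r)=\frac{\rho_0}{\dfrac{r}{r_s}\left(1+\dfrac{r}{r_s}\right)^{2}},$$

where $r_s$ is a scale radius and $\rho_0$ a characteristic density. The profile depends only on $r$, which is exactly the spherical symmetry the answer describes, as opposed to a disk or spiral shape.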


general relativity - Would a very long massive rod exhibit a large deviation from Newtonian gravity (specifically a deficit angle rather than 1/r force)?


In General Relativity the metric corresponding to an infinitely long massive rod is flat but with a deficit angle. It exhibits a very large deviation from Newtonian gravity in all regions of space in that the effective gravitational potential is flat rather than logarithmic.


My question is: is there still a large deviation from Newtonian gravity for finite rods? Specifically, if I were near a very long rod, would I experience any gravity and/or would I see a deficit angle? Generally physicists are comfortable approximating long rods as infinite, but I know that Tipler cylinders only give closed timelike curves if they are actually infinitely long, so there are cases where approximating long rods as infinite fails entirely.


Also, if long massive rods don't attract very strongly, would you be able to test for deviations from standard GR (such as $f(R)$ gravity) by measuring the gravitational force near a long rod?




What stops us from creating a nuclear fusion reactor as we already have the hydrogen bomb working on the same principle of fusion?


I have been out of physics for some time now since my childhood, so please bear with me if the question below feels too novice.


I grew up with the understanding that the nuclear fusion reaction is still a dream of many people as it's a source of clean energy without the side effects of nuclear waste as we observe in nuclear fission.


Now recently I was just checking the principle on which the hydrogen bomb works, and I was shocked that it uses nuclear fusion to generate all that energy. This contradicted my understanding: nuclear fusion is not just a dream, it is actually a reality.


So if we already achieved nuclear fusion, why can't we create a nuclear fusion reactor out of it to generate all the power we need? Also, why can't we start a small-scale fusion reaction on Jupiter (as mentioned in my other question) that could help us take over the outer planets of the solar system?



Also I just wanted to know if we can continue this fusion reaction to generate precious heavy metals – is it possible?




mathematical physics - Good Fiber Bundles and Differential Geometry references for Physicists



I'm a student of Physics and I have interest on the theory of Fiber Bundles because of the applications they have in Physics (gauge theory for example). What are good books to learn the theory of fiber bundles and connections that are rigorous but at the same time gives what we need to apply in Physics?



Answer



I think a good book for that may be C. J. Isham's Modern Differential Geometry for Physicists. I haven't gotten to the chapter on fiber bundles, but what I've read seems to be quite rigorous. And as it is written for physicists, I think it could meet your needs.


Monday, 23 March 2020

homework and exercises - Radially symmetric charge distribution (dipole moment)




a) There's a radially symmetric charge density $\rho(r)$ centered around the origin. Determine the dipole moment of that charge density.


b) Let $\rho(r)$ now be an arbitrary charge density. Under what circumstances does the dipole moment of the displaced charge density $\rho '(\vec{r}) = \rho (\vec{r}-\vec{b})$ differ from that of the undisplaced one?



Here were my ideas so far:


a) Just thinking about the situation it has to be zero, right? I mean, since there's no real dipole. But how do I show that mathematically?



I was thinking of just going like this (it may be wrong):


Let the charge density be $\rho (r)=kr$, then we can get the charge q by integrating:


$$q=4\pi \int_0^R kr\cdot r^2dr=\pi k R^4$$


I'm looking at the charge distribution as a spherical electron cloud with radius $R$.


Then, since $p=qd$ and $d$ is zero because there are no two separated charges, the dipole moment is zero. Is that sufficient as an answer?
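One way to make the symmetry argument precise (a sketch, not part of the original question): write the dipole moment as an integral,

$$\vec{p}=\int \vec{r}\,\rho(|\vec{r}|)\,d^3r = 0,$$

which vanishes because $\rho(|\vec{r}|)$ is even under $\vec{r}\to-\vec{r}$ while $\vec{r}$ is odd, so the contributions from $\vec{r}$ and $-\vec{r}$ cancel pairwise. For part b), substituting $\vec{s}=\vec{r}-\vec{b}$ gives

$$\vec{p}\,'=\int \vec{r}\,\rho(\vec{r}-\vec{b})\,d^3r=\int (\vec{s}+\vec{b})\,\rho(\vec{s})\,d^3s=\vec{p}+q\,\vec{b},$$

so the dipole moment is unchanged by the displacement exactly when the total charge $q$ vanishes.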


b) I don't know how to approach this one. My guess is that its dipole moment is also zero because we're only looking at a displacement here.


Anyone got any idea? I would appreciate any advice on this.




special relativity - What if a light clock travels perpendicular to mirrors that make up the clock?


I'm guessing you're all familiar with the classic intuitive way of explaining time dilation: a light clock traveling at velocity $v$ in a direction parallel to the mirrors that make up the clock.


Now, what happens if we have a light clock that goes upwards at v? That is to say, what if its velocity is in a direction perpendicular to the mirrors' orientations? As far as I can tell, this situation wouldn't present any time dilation. Thoughts?



Answer



In that case, time dilation still occurs, of course. In order to show this using $t=d/v$, you'd have to take into account the length contraction in the direction of motion. Mathematically, if $d$ is the height of the clock, then the time taken for a photon at the bottom to reach the top of the clock isn't $\frac{d+vt}{c}$ but $\frac{d/\gamma+vt}{c}$. When you calculate the time taken for that photon to get back to the bottom of the clock and add it to the time previously calculated, you get the exact same time dilation as for the clock moving parallel to the mirrors.
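Spelling out the round trip (a sketch expanding the answer's argument): with the contracted mirror separation $d/\gamma$, solving $ct_1 = d/\gamma + vt_1$ for the upward leg and $ct_2 = d/\gamma - vt_2$ for the return gives

$$t_1+t_2=\frac{d/\gamma}{c-v}+\frac{d/\gamma}{c+v}=\frac{d}{\gamma}\cdot\frac{2c}{c^2-v^2}=\gamma\,\frac{2d}{c},$$

which is exactly $\gamma$ times the rest-frame period $2d/c$: the same dilation factor as for the transverse clock.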


thermodynamics - What is the reason for the shift in balance between neutrons and protons in the early universe?


In the book of The First Three Minutes by Weinberg, on pages 106-107, it is stated that



SECOND FRAME. The temperature of the universe is 30,000 million degrees Kelvin [...] The nuclear particle balance has consequently shifted to 38 per cent neutrons and 62 per cent protons.


[...]


THIRD FRAME. The temperature of the universe is 10,000 million degrees Kelvin. [...] The decreasing temperature has now allowed the proton-neutron balance to shift to 24 per cent neutrons and 76 per cent protons.




What is the reason for this balance shift between neutrons and protons? And what determines the rate of change of the neutron/proton ratio?



Answer



There are two very relevant facts that inform this answer: (1) The rest mass energy of a neutron is 1.29 MeV higher than that of a proton. $(m_n - m_p)c^2 = 1.29$ MeV. (2) The total number of neutrons plus protons (essentially the only baryons present) is a constant.


Neutrons and protons can transform into one another via reactions moderated by the weak nuclear force. e.g. $$ n + e^{+}\rightarrow p + \bar{\nu_e}$$ $$ p + e \rightarrow n + \nu_e$$


Because of the rest mass energy difference, the first of these reactions requires no energy input and the products have kinetic energy even if the neutron were at rest. The second does require energy (at least 1.29 MeV) to proceed, in the form of reactant kinetic energy.


In the first second of the universe, at temperatures $kT > 10$ MeV ($10^{11}$ K), these reactions are rapid and in balance (occur with almost equal likelihood), and the $n/p$ ratio is 1, i.e. equal numbers of neutrons and protons.


As the universe expands and cools to less than a few MeV (a few $10^{10}$ K) two things happen. The density of reactants and the reaction rates fall; and the first reaction starts to dominate over the second, since there are fewer reactants with enough kinetic energy (recall that the kinetic energies of the particles are proportional to the temperature) to supply the rest mass energy difference between a neutron and proton. As a result, more protons are produced than neutrons and the $n/p$ ratio begins to fall.


The $n/p$ ratio varies smoothly as the universe expands. If there is thermal equilibrium between all the particles in the gas then the $n/p$ ratio is given approximately by $$\frac{n}{p} \simeq \exp\left[-\frac{(m_n-m_p)c^2}{kT}\right],$$ where the exponential term is the Boltzmann factor and $(m_n - m_p)c^2 = 1.29$ Mev is the aforementioned rest-mass energy difference between a neutron and a proton. The rate at which $n/p$ changes is simply determined by how the temperature varies with time, which in a radiation-dominated universe is derived from the Friedmann equations as $T \propto t^{-1/2}$ (since the temperature is inversely related to the scale factor through Wien's law).


In practice, the $n/p$ ratio does not quite vary like that because you cannot assume a thermal equilibrium once the reaction rates fall sufficiently that the time between reactions is comparable with the age of the universe. This in turn depends on the density of all the reactants and in particular the density of neutrinos, electrons and positrons, which fall as $T^3$ (and hence as $t^{-3/2}$). At a temperature of $kT \sim 1$ MeV, the average time for a neutron to turn into a proton is about 1.7s, which is roughly the age of the universe at that point, but this timescale grows much faster than $t$.


When the temperature reaches $kT = 0.7$ MeV ($8\times 10^9$K) after about 3 seconds, the reaction rates become so slow (compared with the age of the universe) that the $n/p$ ratio is essentially fixed (though see below$^{*}$) at that point. The final ratio is determined by the Boltzmann factor $\sim \exp(-1.29/0.7)= 1/6.3$. i.e. There are six times as many protons as neutrons about three seconds after the big bang.



$^{*}$ Over the next few minutes (i.e. after the epoch talked about in our question) there is a further small adjustment as free neutrons decay into protons, $$ n \rightarrow p + e + \bar{\nu_e}$$ in the window available to them before they are mopped up to form deuterium and then helium. During this window, the temporal behaviour is $$ \frac{n}{p} \simeq \frac{1}{6} \exp(-t/t_n),$$ where $t_n$ is the decay time for neutrons of 880s. Since the formation of deuterium occurs after about $t \sim 200$s this final readjustment gives a final n/p ratio of about 1/7.
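Plugging in the numbers quoted above (a short sketch reproducing the answer's arithmetic; the constants are those stated in the text):

```python
import math

DELTA_MC2 = 1.29    # neutron-proton rest-mass energy difference, MeV
KT_FREEZE = 0.7     # freeze-out temperature, MeV
T_DEUT = 200.0      # approximate time when deuterium forms, s
TAU_N = 880.0       # neutron decay time, s

# Boltzmann factor at freeze-out: n/p ~ 1/6.3
np_freeze = math.exp(-DELTA_MC2 / KT_FREEZE)

# Free-neutron decay until deuterium formation: n/p ~ 1/7.5
np_final = (1 / 6) * math.exp(-T_DEUT / TAU_N)

print(f"n/p at freeze-out: 1/{1 / np_freeze:.1f}")
print(f"n/p when deuterium forms: 1/{1 / np_final:.1f}")
```

The second number comes out near 1/7.5, consistent with the "about 1/7" quoted in the answer given the roughness of the 200 s estimate.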


statistical mechanics - Grand canonical ensemble and chemical potential $mu=0$


In the grand canonical ensemble a system can exchange particles with a reservoir so its number of particles is not fixed. So what does it mean that $\mu=0$ implies that the number $N$ of particles is not conserved, given that $N$ is always not conserved in the grand canonical ensemble?


I've read some posts about chemical potential (in particular when $\mu=0$) but I haven't found the answer to this question.



Answer



In the grand canonical ensemble, the number of particles is not fixed. Particles are continuously exchanged with a reservoir. The number of particles is not conserved but fluctuates whatever the value of the chemical potential $\mu$. The latter can be interpreted as the energy cost when a new particle is introduced in the system.


In relativistic quantum theories, new particles can appear in the system without coming from the reservoir and without any energy cost. For example, the number of particles may change due to the spontaneous production of a particle/anti-particle pair from a photon. In a photon gas, a photon can be absorbed by an atom and two photons may be emitted with the same total energy. In all these cases, the energy cost is zero, so the chemical potential $\mu$ is necessarily equal to zero.
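To spell out the photon-gas case (standard textbook material, not part of the original answer): setting $\mu=0$ in the Bose-Einstein distribution gives the Planck occupation number,

$$\langle n_\omega\rangle=\left.\frac{1}{e^{(\hbar\omega-\mu)/kT}-1}\right|_{\mu=0}=\frac{1}{e^{\hbar\omega/kT}-1},$$

which is why blackbody radiation carries no chemical-potential term: since photon number is free to adjust, equilibrium requires $\left(\partial F/\partial N\right)_{T,V}=\mu=0$.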



thermodynamics - Are information conservation and energy conservation related?


As evident from the title: are conservation of energy and conservation of information two sides of the same coin?


Is there something more to the hypothesis of Hawking radiation than the fact that information cannot be lost? Or can I say energy cannot be lost?



Answer



First of all I do not think that conservation of information is an established statement. It seems to be an open problem still as far as black holes go.


Even if true, it is a different type of conservation, analogous to the unitarity requirements of a system of functions or phase space considerations.


From the conclusions of a paper by Hawking :




In this paper, I have argued that quantum gravity is unitary and information is preserved in black hole formation and evaporation. I assume the evolution is given by a Euclidean path integral over metrics of all topologies. The integral over topologically trivial metrics can be done by dividing the time interval into thin slices and using a linear interpolation to the metric in each slice. The integral over each slice will be unitary and so the whole path integral will be unitary. On the other hand, the path integral over topologically non trivial metrics will lose information and will be asymptotically independent of its initial conditions. Thus the total path integral will be unitary and quantum mechanics is safe.


How does information get out of a black hole? My work with Hartle[8] showed the radiation could be thought of as tunnelling out from inside the black hole. It was therefore not unreasonable to suppose that it could carry information out of the black hole. This explains how a black hole can form and then give out the information about what is inside it while remaining topologically trivial. There is no baby universe branching off, as I once thought. The information remains firmly in our universe. I’m sorry to disappoint science fiction fans, but if information is preserved, there is no possibility of using black holes to travel to other universes. If you jump into a black hole, your mass energy will be returned to our universe but in a mangled form which contains the information about what you were like but in a state where it can not be easily recognized. It is like burning an encyclopedia. Information is not lost, if one keeps the smoke and the ashes. But it is difficult to read. In practice, it would be too difficult to re-build a macroscopic object like an encyclopedia that fell inside a black hole from information in the radiation, but the information preserving result is important for microscopic processes involving virtual black holes. If these had not been unitary, there would have been observable effects, like the decay of baryons.



Energy is a conserved quantity because of Noether's theorem: wherever time-translation invariance holds, energy is conserved. In extreme General Relativity scenarios energy itself loses its meaning, whereas phase space and unitarity may hold and, if Hawking is correct, information is conserved.


So energy conservation and possible conservation of information are two unconnected effects.


Understanding Stagnation point in pitot fluid

What is a stagnation point in fluid mechanics? At the open end of the pitot tube the velocity of the fluid becomes zero. But that should result...