Sunday 30 April 2017

newtonian mechanics - Is an explanation of 'effect of Earth's rotation on $g$' possible from an inertial reference frame?


Edit: Added a picture for better understanding of my query. All the texts I have studied use a non-inertial frame to explain the phenomenon. But every time I see something explained with pseudo-forces, I try to re-derive it using real forces.


But in this case I tried to explain it in a frame which is centred on the Earth's centre and not rotating. I couldn't explain what happens to the component of the centripetal acceleration tangential to the Earth's surface (at the point where $g$ is to be measured) in this scenario. It doesn't cancel out.


But then I thought the tangential component is so low that practically it would have no effect.


Is my explanation wrong?



Answer




Here is a diagram to show the force on a point mass $m$ on the surface of an ideal (spherical, uniform density, etc.) Earth of mass $M$, radius $R$ and angular speed $\omega$.


The force acting on the mass $m$ is $\dfrac{GMm}{R^2}$ at all positions on the surface of the Earth.


[Diagram: force and acceleration vectors on a point mass at the poles, the Equator and a general latitude $\lambda$.]


Except at the poles the gravitational force of attraction can be thought of as providing two accelerations on the point mass.


One is the centripetal acceleration $r \omega^2 = \dfrac {v^2}{r}$ where $r$ is the radius of the "orbit" and $v$ is the tangential speed of the mass.


At the poles $m g_{\rm p} = \dfrac{GMm}{R^2}$ where $g_{\rm p}$ is the acceleration of free fall at the poles and $m g_{\rm p}$ is the reading on a spring balance at the poles.


At the Equator $m (g_{\rm e} + R \omega^2) = \dfrac{GMm}{R^2}$ where $ R \omega^2 = \dfrac{v^2}{R}$ is the centripetal acceleration of the mass and $g_{\rm e}$ is the acceleration of free fall at the Equator, which is less than at the Poles or anywhere else on the Earth.


At a general position with latitude $\lambda$ one has to include the directions of the force and the accelerations as they are not collinear.
The vector triangle is shown on the diagram.
In this case the centripetal acceleration is $R \omega^2 \cos \lambda$ and the acceleration of free fall $g$ lies between the value at the Poles and the value at the Equator.
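As a rough numerical check of the sizes involved (a sketch, assuming a perfectly spherical Earth and the approximate values $GM/R^2 \approx 9.82\ \mathrm{m\,s^{-2}}$, $R \approx 6.37\times 10^{6}\ \mathrm{m}$ and $\omega \approx 7.29\times10^{-5}\ \mathrm{rad\,s^{-1}}$):

```python
import numpy as np

GM_over_R2 = 9.82     # m/s^2, gravitational attraction at the surface (assumed value)
R = 6.37e6            # m, Earth's radius (assumed value)
omega = 7.292e-5      # rad/s, Earth's angular speed

for lat_deg in (0, 45, 90):
    lam = np.radians(lat_deg)
    a_c = R * omega**2 * np.cos(lam)       # centripetal acceleration, directed toward the rotation axis
    a_radial = a_c * np.cos(lam)           # component along the local vertical
    a_tangent = a_c * np.sin(lam)          # component along the local horizontal (toward the Equator)
    g_eff = GM_over_R2 - a_radial          # approximate measured free-fall acceleration
    print(f"lat {lat_deg:2d} deg: centripetal {a_c:.4f}, radial {a_radial:.4f}, "
          f"tangential {a_tangent:.4f}, g_eff {g_eff:.4f}  (all m/s^2)")
```

The tangential part peaks at about $0.017\ \mathrm{m\,s^{-2}}$ at $45^\circ$ latitude, which is why a plumb line (and the measured direction of $g$) deviates only very slightly from the true radial direction.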



Transition from quantum to classical mechanics


As I understand it, if $S \gg h$ then we are in the classical realm, whereas if $S \leq h$ we are in the quantum realm. My question is what happens somewhere in between those 2 limits? Are we quantum and classical at the same time?




bernoulli equation - What do I need for a good flow of water from a faucet: velocity of water or pressure?


I can't understand one thing. Does a good flow of water from a bath faucet require a good velocity of water or a good pressure? For example, I have a faucet whose pipe has a diameter of 1 inch. The better the flow of water, the higher the velocity of the water. But the higher the velocity of the water, the lower the pressure (Bernoulli's principle). So it seems that the flow is good, but the pressure is low.





general relativity - Does an object create gravitational waves when only accelerating in one direction?


I know from reading about the gravitational waves detected by LIGO that when an object has angular acceleration, it produces gravitational waves.


I'm wondering, however, whether an object creates gravitational waves when accelerating in only one direction.


I'm also curious as to how the nature of the waves would differ in this case.



Answer



Any object with mass that accelerates (be it linear or angular acceleration) produces gravitational waves, though in most cases these will be much too small to be detected. As @CuriousOne pointed out, the same happens with electromagnetic waves and accelerating charges. The gravitational waves that can be detected usually come from very massive objects (such as black holes, neutron stars, etc.) undergoing rapid accelerations. The situations encountered in nature where these really massive objects are accelerated tend to involve binary star or black hole systems orbiting each other, or single stars spinning rapidly about their own axis with a noticeable irregularity, like a mountain, on their surface. I'd say the reason why you don't hear much about GWs produced by linearly accelerating black holes/stars is that this scenario is quite unlikely to occur in nature.



You can find information about the other possible sources of GW here.


Saturday 29 April 2017

homework and exercises - A pearl that moves in a smooth vertical hoop (Circular motion)



I couldn't understand something about the situation of a pearl that moves along a smooth vertical hoop in circular motion. When the normal force equals 0, the pearl did not leave the hoop, even though the pearl had a velocity at that point. Why didn't the pearl leave the hoop?



Answer



It had a tangential velocity and its weight was enough to provide the centripetal force for motion.
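A minimal force balance makes this concrete (a sketch, written for the top of the hoop and taking "toward the centre" as positive):

$$N + mg = \frac{mv^2}{R} \qquad\Rightarrow\qquad N = \frac{mv^2}{R} - mg.$$

At the instant when $v^2 = gR$ the normal force is exactly zero: gravity alone supplies the whole centripetal force, so the pearl continues on its circular path without needing any push from the hoop.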


rotation - Earth is rotating




Possible Duplicate:
Why does the atmosphere rotate along with the earth?



If I take off from land in a helicopter straight above the Earth's surface to a certain height and stay there for a few minutes/hours and then come down, why do I come down to the same place where I took off? If the Earth is rotating, I should land in a different place, right? Because I have not moved, I am just coming straight down. I moved only vertically.


I got this thought because I was wondering why we spend so many hours on flights to reach a country in the west if we started from an eastern country.



Maybe there are a lot of scientific reasons behind this which I am not aware of. Excuse me if it sounds silly; I thought this would be the best place to ask.



Answer



The helicopter in your example would have some velocity given to it by the Earth. I believe atmospheric drag would play a significant role in this, but let's ignore that for now.


You may have heard the process of an orbit described as continuous free fall, where you fall "towards" the other body just as fast as you move along the orbit. If this hypothetical helicopter lifted off, it would just be orbiting the planet!
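As a rough order-of-magnitude sketch of the shared velocity (assuming a take-off at the equator and ignoring drag and the small change in radius with height):

```python
import math

R = 6.371e6              # m, Earth's radius (assumed value)
T_sidereal = 86164.0     # s, one rotation of the Earth
omega = 2 * math.pi / T_sidereal
v_surface = omega * R    # eastward speed shared by the ground, the air and the helicopter
print(f"surface speed at the equator ~ {v_surface:.0f} m/s")   # about 465 m/s
```

Because the helicopter keeps essentially this same eastward velocity while it hovers, the ground underneath it is not left behind.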


Friday 28 April 2017

particle physics - Pair production of neutrinos


I learned that neutrinos have a much lower mass (rest energy) than electrons. Pair production of electrons occurs when the photon energy is above twice the rest energy of an electron. So I am wondering: wouldn't pair production of neutrinos be even more common and occur at much lower energies?



Answer



Pair production of an electron/positron happens in the electric field of the atoms to satisfy the conservation laws, and the same is true for an off-mass-shell Z0 going into a neutrino-antineutrino pair, interacting with the weak field of the atoms.


The weak-interaction equivalent of the photon is the Z0, and neutrino-antineutrino pairs can be formed that way. The weak interaction is orders of magnitude weaker than the electromagnetic one, and thus the probability of producing neutrino-antineutrino pairs is high only in special situations, such as the Big Bang or a supernova explosion, where the density of matter is high and there is energy available.


The weak couplings lower the probability of interaction drastically so the advantage of a smaller neutrino mass with respect to the electrons is lost for experiments possible in the laboratory.



quantum mechanics - Momentum $k$-space Brillouin zone for non-quadratic and interacting systems?


Usually, we define the momentum $k$-space Brillouin zone (by Fourier transforming from the real space $x$ with a wavefunction $\psi(x)$ to the momentum $k$-space) for:


(1) quadratic non-interacting (free) systems (such as those that can be written in terms of a BdG equation)


and


(2) translationally invariant systems (so one can define the conjugate momentum $k$ as a good quantum number).



Question: Could we define the momentum $k$-space Brillouin zone for


non-quadratic and interacting systems


but translationally invariant systems? (Namely, can we relax (1) to interacting systems, but keep (2)?)






Thursday 27 April 2017

newtonian mechanics - How to find a condition that ensures that the rocket immediately takes off?


For the rocket in a constant gravitational field, how to find a condition that ensures that the rocket immediately takes off?


Update: I apologize. Let's try again. So my question: For the rocket in a constant gravitational field, find a condition (involving the constants $m_0,μ,g,v_r$) that ensures that the rocket immediately takes off.


$m_0$ is the mass of the rocket at $t=0$;


$μ$ is the mass per unit time of expelled gas;



$v_r$ is the velocity of the expelled gas relative to the rocket.


Afterwards, I want to show that, in this case, the velocity of the rocket always grows (until the fuel is used up).



Answer



If $m_0$ is the initial mass of the rocket, $\mu$ the mass of gas ejected per unit time ($dm \over dt$), $g$ the acceleration due to gravity and $v_r$ the speed of the fuel leaving the rocket motor, then....


$$Force ~down = m_0g$$


$$Force ~up = v_r {d m \over dt} = v_r \mu $$


and condition for lift off is


$$m_0g \lt v_r {d m \over dt} $$


or


$$m_0g \lt v_r \mu $$



(may need a minus sign on the right hand side if we consider $dm/dt$ to be negative.)


When you think about times after lift off then you need to look at net force up and how the mass of fuel is lost from the rocket.


Note that force can be calculated from the rate of change of momentum. So $F=ma$ is the same as $F=m {dv\over dt}$ and $F={d mv \over dt}$ or $F={dp \over dt}$.


Here, if we differentiate the momentum ($p$ or $mv$) with respect to time, the $v$ is constant but the $m$ changes with time, so ${d mv \over dt} = v{d m \over dt}$.


Putting this all together we get $$F=ma=m {dv\over dt}={d mv \over dt}$$ if the mass is constant.


But in this case $$F={d mv \over dt}=v{d m \over dt}$$
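As a rough numerical sketch of the liftoff condition and of the claim that the speed keeps growing until burnout (all the numbers below are made up purely for illustration):

```python
g   = 9.81      # m/s^2
m0  = 1000.0    # kg, initial mass (illustrative value)
mu  = 10.0      # kg/s, mass of gas expelled per unit time (illustrative value)
v_r = 2000.0    # m/s, exhaust speed relative to the rocket (illustrative value)
m_fuel = 800.0  # kg of fuel on board (illustrative value)

assert v_r * mu > m0 * g, "liftoff condition v_r * mu > m0 * g is not met"

dt, t, v = 0.01, 0.0, 0.0
while mu * t < m_fuel:
    m = m0 - mu * t
    a = v_r * mu / m - g       # net upward acceleration
    v += a * dt
    t += dt
print(f"burnout at t = {t:.1f} s, speed = {v:.0f} m/s")
```

Since $m(t)$ decreases while the thrust $v_r\mu$ stays fixed, the net upward acceleration $v_r\mu/m - g$ only grows once it starts out positive, which is the second part of the question.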


How can super massive black holes have a lower density than water?


I heard on a podcast recently that the supermassive black holes at the centres of some galaxies could have densities less than water, so in theory they could float on the substance they were gobbling up... Can someone explain how something with such mass could float?


Please see the below link for the podcast in question:


http://www.universetoday.com/83204/podcast-supermassive-black-holes/



Answer




Well, it can't (float), since a Black Hole is not a solid object that has any kind of surface.


When someone says that a supermassive black hole has less density than water, one probably means the following: the density goes like $\frac{M}{R^3}$, where $M$ is the mass and $R$ is the typical size of the object, and for a black hole the typical size is the Schwarzschild radius, which is $2M$ in geometric units. This gives for the density the result


$$\rho\propto M^{-2}$$


You can see from that, that for very massive black holes you can get very small densities (all these are in units where the mass is also expressed in meters). But that doesn’t mean anything, since the Black Hole doesn’t have a surface at the Schwarzschild radius. It is just curved empty space.
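As a quick numerical illustration (a sketch that simply divides the mass by the volume of a sphere of Schwarzschild radius $R_s = 2GM/c^2$, which, as stressed above, is not a physical surface):

```python
import math

G, c = 6.674e-11, 2.998e8          # SI units
M_sun = 1.989e30                   # kg

def schwarzschild_density(M):
    """Mass divided by the volume of a sphere of Schwarzschild radius."""
    R_s = 2 * G * M / c**2
    return M / (4 / 3 * math.pi * R_s**3)

for n_suns in (1, 1e6, 4e9):       # a stellar-mass, a Sgr A*-like and a very large black hole
    rho = schwarzschild_density(n_suns * M_sun)
    print(f"M = {n_suns:.0e} solar masses -> nominal density = {rho:.2e} kg/m^3")
```

The $\rho\propto M^{-2}$ scaling is visible directly: somewhere above roughly $10^8$ solar masses this nominal density drops below that of water.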


general relativity - A Cosmological horizon at the Hubble radius?


I have calculated that if one extends a rigid ruler into space by a fixed proper distance $D$ then a clock at the end of the ruler, running on proper time $\tau$, will run more slowly than time $t$ at the origin by a time dilation factor:


$$\frac{dt}{d\tau} = \frac{1}{\sqrt{1 - H^2 D^2 / c^2}}$$


where $H$ is the Hubble parameter.


If one substitutes in Hubble's law, $v = H D$ (the theoretical law which is exact), one finds the following satisfying result that



$$\frac{dt}{d\tau} = \frac{1}{\sqrt{1 - v^2 / c^2}}.$$


Although this looks like a result from special relativity, I derived it by combining the FRW metric line element and an equation for the path of the end of the ruler, $\chi = D / R(t)$, where $\chi$ is the radial comoving coordinate of the end of the ruler, $D$ is a fixed proper distance and $R(t)$ is the scale factor.


Does this prove that there is a cosmological horizon at the Hubble radius $D=c/H$ where the proper time $\tau$ slows down to a stop compared to our time $t$?


This seems to be a general result which is true regardless of cosmological model.


Details of Calculation


The general FRW metric is given by:


$$ds^2 = -c^2 dt^2 + R(t)^2\left[d\chi^2+S^2_k(\chi)d\psi^2 \right]$$


where $d\psi^2 = d\theta^2 + \sin^2 \theta d\phi^2$ and $S_k(\chi)=\sin \chi$, $\chi$, or $\sinh \chi$ for closed ($k=+1$), flat ($k=0$) or open ($k=-1$) universes respectively. The scale factor $R(t)$ has units of length.


Consider a ruler of fixed proper length $D$ extending out radially from our position at the origin. The path of the far end of the ruler in comoving coordinates is


$$\chi = \frac{D}{R(t)}$$



Differentiating this equation by proper time $\tau$ gives us:


$$\frac{d\chi}{d\tau} = - \frac{D}{R^2} \frac{dR}{dt} \frac{dt}{d\tau}.$$


Using the FRW metric we can find a differential equation for the path of the far end of the ruler. We substitute in $ds^2=-c^2d\tau^2$ (end of ruler has a time-like path), $d\psi=0$ (the ruler is radial) and divide through by $d\tau^2$ to obtain:


$$c^2\left(\frac{dt}{d\tau}\right)^2 - R(t)^2 \left(\frac{d\chi}{d\tau}\right)^2 = c^2.$$


Substituting the expression for $d\chi/d\tau$ into the above equation we find:


$$c^2\left(\frac{dt}{d\tau}\right)^2 - D^2 \left(\frac{\dot R}{R}\right)^2 \left(\frac{dt}{d\tau}\right)^2 = c^2.$$


Using the definition of the Hubble parameter $H=\dot{R}/R$ we finally obtain:


$$\frac{dt}{d\tau} = \frac{1}{\sqrt{1 - H^2 D^2 / c^2}}.$$




quantum mechanics - Why can't electrons be found inside the nucleus if there are infinite number of orbitals?


If there are an infinite number of orbitals, we can assume that the electron can be present at any point in space. If that is correct, why do we not find electrons in the nucleus?



I study in high school. Correct me if I'm wrong.



Answer



Let's suppose the electron we are considering is in an orbital described by the wavefunction $\psi$. If we look in some small volume element $dV$ then the probability of finding the electron in that volume element is:


$$ P = \psi^*\psi \, dV $$


To calculate the probability of finding the electron inside the nucleus we'll use polar coordinates, and as our volume element $dV$ we'll take the volume of a spherical shell of radius $r$ and width $dr$. The volume of this element is:


$$ dV = 4\pi r^2 dr $$


so the probability is:


$$ P = \psi^*\psi \, 4\pi r^2 dr $$


If the radius of the nucleus is $R$, then we get the probability of finding the electron in the nucleus simply by integrating from $r = 0$ to $r = R$:


$$ P = \int_0^R \psi^*\psi \, 4\pi r^2 dr $$



And this integral generally has a non-zero magnitude i.e. the probability of finding the electron inside the nucleus is non-zero.


We know the electron has a non-zero probability of being inside the nucleus because in some cases it can react with a proton in a process called electron capture or inverse beta decay.
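As a rough numerical illustration of the integral above (a sketch that assumes the hydrogen 1s wavefunction $\psi = e^{-r/a_0}/\sqrt{\pi a_0^3}$ and a nuclear radius of about 1 fm, both chosen only for illustration):

```python
import numpy as np
from scipy.integrate import quad

a0 = 5.29e-11        # m, Bohr radius
R_nuc = 1.0e-15      # m, rough proton radius (illustrative value)

def integrand(r):
    psi2 = np.exp(-2 * r / a0) / (np.pi * a0**3)   # |psi|^2 for the hydrogen 1s state
    return psi2 * 4 * np.pi * r**2

P, _ = quad(integrand, 0.0, R_nuc)
print(f"P(electron inside the nucleus) = {P:.1e}")   # of order 1e-14: tiny but non-zero
```

The probability comes out of order $10^{-14}$: tiny, but not zero, which is consistent with electron capture being possible.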


general relativity - What is the stress energy tensor?


I'm trying to understand the Einstein field equation equipped only with training in Riemannian geometry. My question is very simple, although I can't extract the answer from the Wikipedia page:


Is the "stress-energy" something that can be derived from the pseudo-Riemannian metric (like the Ricci tensor, scalar curvature, and obviously the metric coefficients that appear in the equation) or is it some empirical physics thing like the "constants of nature" that appear in the equation? Or do you need some extra mathematical gadget to specify it? Thanks and apologies in advance if this is utterly nonsensical. Also, as a non-physicists I'm not sure how to tag this either so sorry for that as well.




energy - What is wrong with my argument to derive the Hamiltonian in relativity?


In General Relativity (and special too) the Lagrangian for a particle of mass $m$ in the absence of forces other than gravity is


$$L=m\sqrt{g_{\mu\nu}U^\mu U^\nu}$$


where $U^\mu$ is the four-velocity. In that case we can derive the momentum $p_\mu$ by


$$p_\mu=\dfrac{\partial L}{\partial U^\mu}=\dfrac{\partial}{\partial U^\mu}m\sqrt{g_{\alpha\beta}U^\alpha U^\beta}$$


$$p_\mu=\dfrac{mg_{\alpha\beta}}{2\sqrt{g_{\alpha\beta}U^\alpha U^\beta}}\left(\delta^\alpha_\mu U^\beta+\delta^\beta_\mu U^\alpha\right)=\dfrac{mg_{\mu \alpha}U^\alpha}{\sqrt{g_{\alpha\beta} U^\alpha U^\beta}}$$


If we parametrize the worldline by proper time $\tau$ then $L(\gamma(\tau),\gamma'(\tau))=m$ and we get rid of the square root in the denominator, which is just $1$. Then


$$p_\mu= m g_{\mu\alpha}U^\alpha,$$


and these are the components of a covector. This directly leads to the momentum four-vector


$$p^\mu= m U^\mu.$$



Everything works here. Now I want to compute the energy. Well the Hamiltonian as always should be


$$H=p_\mu U^\mu-m\sqrt{g_{\mu\nu}U^\mu U^\nu}=m g_{\mu\nu}U^\mu U^\nu-m\sqrt{g_{\mu \nu}U^\mu U^\nu}.$$


But if things are parametrized by propertime, when we compute $H$ on the path, that is $H(\gamma(\tau),\gamma'(\tau))$ we get zero!


What I expected was to get $H = p^0$.


What am I doing wrong here? Why am I getting zero?



Answer





  1. The problem is that the Legendre transformation from 4-velocity to 4-momentum is singular: The 4 components of the 4-momentum $p_{\mu}$ are constrained to live on the mass-shell $$p_{\mu}g^{\mu\nu}p_{\nu}~=~\pm m^2. \tag{A}$$ Here the $\pm$ refers to the choice of Minkowski sign convention $(\pm,\mp,\mp,\mp)$. Therefore it is a constrained system. The 4-momentum has only 3 independent components.





  2. How to perform the singular Legendre transformation for a relativistic point particle is explained in e.g. this Phys.SE post.




  3. It turns out that the appearance of the constraint (A) and the vanishing energy/Hamiltonian reflect the worldline reparametrization invariance of the model. See also e.g. this Phys.SE post.




thermodynamics - What is enthalpy?




After intense research, reading documents and Q&A, I am still very confused about the concept of enthalpy.


Etymology says: "to warm in". [ἐνθάλπω (enthálpō, “to warm in”)]
I interpret: "to put warmth (same as heat) into something." (i. e. warmth transfer = heat?)


[1] Some will say: "Enthalpy is a measure of heat transfer".
I will wonder: "Being heat[-flow] already a concept involving transference, what's the point of defining a concept that refers to a transfer-transference?"


[2] Some will say: "Enthalpy is a state function to correct the fact that heat is not a state function".
I will wonder: "Okay, but how and why?".


[3] Some will say: "Enthalpy is Internal energy plus Pressure times Volume".
I will wonder: "Looks just like a brother-concept to what I learned as heat. Still I see no point in it.".



I would like to understand not only what enthalpy is, but where can I visualize enthalpy in for example a cup of coffee that is chilling on a balcony, in contact with the atmosphere.



Answer



One of the main advantages of enthalpy is that it allows you to work out the compression and expansion work done during constant-pressure thermodynamic processes more easily.


Rather than think of the energy content of a system, we could include the work done to make room for the system in the first place.


You need to do work to create space for a system, and this work can be estimated using PV, which is the volume occupied by the system multiplied by the pressure of the environment in which the system is to be created.


So taking $U$ as the total internal energy of a system, we can define $H = U + PV$.


So H is the total energy required to construct the system and also make room for it.


To reverse the idea, if you completely destroyed a system, you would recover not only the internal energy of the system but also the work done by the environment (atmosphere) as it rushes in to fill the vacuum.


There are only two causes for an increase in the enthalpy of any particular system, either the system expands and it does work on the environment to create space for this expansion, or the internal energy of the system increases.


We can say $\Delta H = Q + W_0$, where $W_0$ is any other type of work and $Q$ is the amount of heat added to the system.




Where can I visualize enthalpy in for example a cup of coffee that is chilling on a balcony, in contact with the atmosphere



Enthalpy is a measure of the work done to move air out of the way to give space to your coffee cup (PV) plus the internal energy (U) of the hot coffee inside the cup.


To sum up, the enthalpy change is produced solely by work, in various forms, and heat. So this eliminates the separate calculation of compression/expansion work in constant-pressure thermodynamic processes.


$\Delta H$ is a direct indication of the heat added to the system, as long as no other types of work are being done.
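A small numerical sketch of the coffee-cup example, using illustrative numbers (0.25 kg of water cooling by 10 K at atmospheric pressure and a rough thermal-expansion coefficient), to show that the $P\,dV$ part of the enthalpy change is negligible next to the internal-energy part:

```python
# Cooling 0.25 kg of coffee (treated as water) by 10 K at constant atmospheric pressure.
m    = 0.25        # kg (illustrative value)
c_p  = 4186.0      # J/(kg K), specific heat of water
dT   = -10.0       # K, temperature drop
P    = 101325.0    # Pa, atmospheric pressure
beta = 2.1e-4      # 1/K, volumetric thermal expansion coefficient of water (rough value)
rho  = 1000.0      # kg/m^3

V     = m / rho
dV    = beta * V * dT            # volume change on cooling
W_PdV = P * dV                   # work exchanged with the atmosphere
Q     = m * c_p * dT             # heat released (negative: it leaves the coffee)
print(f"heat term Q   = {Q:9.1f} J")
print(f"P dV term     = {W_PdV:9.4f} J   (negligible here)")
```

For liquids and solids the $P\,dV$ term is almost always negligible, which is why $\Delta H$ and $Q$ can be used almost interchangeably for the cooling coffee.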


Wednesday 26 April 2017

thermodynamics - Why is pressure an intensive property?



My teacher explained to me that volume is an extensive property because it is additive in nature. But he also told us that pressure is an intensive property. Now, according to the gas law equation $PV=nRT$, pressure depends on volume. Increasing pressure should increase volume. So shouldn't pressure be extensive as well?



Answer



From the ideal gas equation,


$$P=\frac{nRT}{V}$$


Now, assuming the gas is uniformly distributed over space (has constant density at a given temperature), halving the number of moles will divide the volume by the same amount. Essentially, if we divide the number of moles by any number, we end up dividing the volume by the same number at the same temperature. So it doesn't matter how many moles of gas you take at a given temperature: you always end up with the same pressure. You could also look at it this way: the ratio of two extensive quantities always gives an intensive quantity.
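A trivial numerical check of that scaling (with made-up numbers):

```python
R, T = 8.314, 300.0                  # J/(mol K), K

def pressure(n_mol, V_m3):
    return n_mol * R * T / V_m3      # ideal gas law, P = nRT/V

print(pressure(2.0, 0.05))           # the whole sample
print(pressure(1.0, 0.025))          # half the moles in half the volume: same pressure
```

Halving both $n$ and $V$ leaves $P$ unchanged, which is exactly what "intensive" means.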


newtonian mechanics - The sum of all external forces acting on all the particles is equal to the total external force applied to the system of particles. Why?



As per the book:


$$\sum_{j=1}^N f_{j}^{ext} + \sum_{j=1}^N f_{j}^{int} = \sum_{j=1}^N \frac{dp_{j}}{dt}$$


The term $\sum_{j=1}^N f_{j}^{int}$ is the sum of all internal forces acting on all the particles.


$\sum_{j=1}^N f_{j}^{int}=0$ (because, according to Newton's third law, every internal force is paired with an equal and opposite reaction force within the system; this part I understood)


The term $\sum_{j=1}^N f_{j}^{ext}$ is the sum of all external forces acting on all the particles. It is the total external force $F_{ext}$ acting on the system:



$$\sum_{j=1}^N f_{j}^{ext} \equiv F_{ext}$$


However, I didn't really get how we can say that this term is equal to the total external force. It would be appreciated if it could be explained with the help of a graph or figure, by choosing a small number of particles and showing that the sum of the forces on these particles is indeed the total external force applied to the system.



Answer



Since you seem happy about the internal forces, let's ignore them and just set them all to zero, so we only consider external forces.


The total momentum $P$ is just the sum of all the individual momenta:


$$ P = \sum p_i $$


and we can differentiate both sides of this to get:


$$ \frac{dP}{dt} = \sum \frac{dp_i}{dt} $$


For any object, simple or composite, force is the rate of change of momentum - that is just Newton's second law. Now, the left side of the equation above is the rate of change of total momentum so that's the total force:


$$ \frac{dP}{dt} = F_\text{tot} $$



The right side is the rate of change of momenta of the individual particles so that's the force on the individual particles:


$$ \frac{dp_i}{dt} = f_i $$


So we end up with:


$$ F_\text{tot} = \sum f_i $$
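As the asker requested, here is a small concrete sketch with three particles and made-up constant external forces (internal forces set to zero), checking numerically that the rate of change of the total momentum equals the sum of the individual external forces:

```python
import numpy as np

m = np.array([1.0, 2.0, 3.0])                              # kg
f_ext = np.array([[1.0, 0.0], [0.0, -2.0], [3.0, 1.0]])    # N, one 2D external force per particle

v = np.zeros((3, 2))                                       # all particles start at rest
dt, steps = 1e-3, 1000

P0 = (m[:, None] * v).sum(axis=0)                          # initial total momentum
for _ in range(steps):
    v += (f_ext / m[:, None]) * dt                         # each particle obeys f_i = dp_i/dt
P1 = (m[:, None] * v).sum(axis=0)                          # final total momentum

print((P1 - P0) / (steps * dt))                            # dP/dt
print(f_ext.sum(axis=0))                                   # sum of external forces: identical
```

Both lines print $(4, -1)\ \mathrm{N}$, illustrating $F_\text{tot} = \sum f_i$ for this three-particle system.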


acoustics - How quickly should a fluid come to hydrostatic equilibrium?



Let's say I'm holding a one-liter water bottle, full of water, which I then drop.


Before dropping the water bottle, the equilibrium is for there to be a pressure gradient in the water canceling the gravitational force on the water. While the bottle is in free fall, the new equilibrium is constant pressure everywhere. Should I expect the water to come to this new equilibrium in the few tenths of a second it takes the water bottle to fall?


I expect the answer is basically yes, because density changes (and therefore pressure changes) should propagate at around the speed of sound, and p-waves might bounce around a few times while exponentially dying away (depending on boundary conditions created by the material of the bottle?), at the end of which we have equilibrium. So for a 30-cm bottle with sound speed 1500 m/s, I might guess the time is a few times $2\times10^{-4}$ s, which is much shorter than the ~0.5 s it takes for the bottle to fall from my hand to the ground.


Does this sort of reasoning make any sense? How can I justify it in a less handwavy manner?
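A back-of-the-envelope sketch of the two timescales being compared (bottle height 0.3 m, sound speed in water 1500 m/s, and an assumed drop height of about 1.2 m):

```python
import math

L, c_sound = 0.30, 1500.0        # m, m/s
h_drop, g = 1.2, 9.81            # m (assumed drop height), m/s^2

t_cross = L / c_sound            # one acoustic crossing of the bottle
t_fall  = math.sqrt(2 * h_drop / g)
print(f"acoustic crossing = {t_cross * 1e3:.2f} ms, fall time = {t_fall:.2f} s")
print(f"number of crossings during the fall = {t_fall / t_cross:.0f}")
```

Even if the pressure waves have to bounce back and forth dozens of times while dying away, there are thousands of crossing times available during the fall, which supports the "basically yes" expectation.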




classical mechanics - Are water waves (i.e. on the surface of the ocean) longitudinal or transverse?


I'm convinced that water waves for example:


[Photo: ocean surface wave]


are a combination of longitudinal and transverse. Any references or proofs of this or otherwise?



Answer



[Animation: circular particle trajectories under a travelling surface wave]


Each point is moving according to:

$$x(t) = x_0 + a e^{-y_0/l} \cos(k x_0+\omega t)$$
$$y(t) = y_0 + a e^{-y_0/l} \sin(k x_0+\omega t)$$


where $x_0, y_0$ is the "motion centre" for each particle, $a$ is the amplitude, and $l$ is the decay length with depth.


So you have an exact "circular" superposition of longitudinal and transverse waves.
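A short numerical sketch of those parametric equations (arbitrary illustrative values of $a$, $k$, $\omega$ and $l$, with $y_0$ taken as the depth of the particle's motion centre), showing that each particle traces a circle whose radius shrinks with depth:

```python
import numpy as np

a, k, omega, l = 0.5, 1.0, 2.0, 3.0            # illustrative wave parameters
t = np.linspace(0, 2 * np.pi / omega, 200)     # one wave period

for y0 in (0.0, 2.0, 6.0):                     # depth of the motion centre
    x0 = 0.0
    x = x0 + a * np.exp(-y0 / l) * np.cos(k * x0 + omega * t)
    y = y0 + a * np.exp(-y0 / l) * np.sin(k * x0 + omega * t)
    r = np.hypot(x - x0, y - y0)
    print(f"y0 = {y0:3.1f}: orbit radius = {r.mean():.3f} (constant to {r.std():.1e})")
```

The horizontal (longitudinal) and vertical (transverse) displacements have equal amplitude and are 90 degrees out of phase, which is exactly the circular superposition described above.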


Tuesday 25 April 2017

general relativity - Do singularities have a "real" as opposed to mathematical or idealized existence?


I was thinking of, for example, a Schwarzschild metric at $r=0$, i.e. the gravitational singularity, a point of infinite density. I realise that there are different types of singularities--timelike, spacelike, co-ordinate singularities etc. In a short discussion with Lubos, I was a bit surprised when I assumed they are idealized and I believe he feels they exist. I am not a string theorist, so am not familiar with how singularities are dealt with in it. In GR, I know the Penrose-Hawking singularity theorems, but I also know that Hawking has introduced his no-boundary, imaginary time model for the Big Bang, eliminating the need for that singularity. Are cosmic strings and other topological defects singularities or approximations of them (if they exist)? In what sense does a singularity exist in our universe? --as a real entity, as a mathematical or asymptotic idealization, as a pathology in equations to be renormalized or otherwise ignored, as not real as in LQG, or as real in Max Tegmark's over-the-top "all mathematical structures are real"?



Answer



Dear Gordon, I hope that other QG people will write their answers, but let me write mine, anyway.



Indeed, you need to distinguish the types of singularities because their character and fate is very different, depending on the type. You rightfully mentioned timelike, spacelike, and coordinate singularities. I will divide the text accordingly.


Coordinate singularities


Coordinate singularities depend on the choice of coordinates and they go away if one uses more well-behaved coordinates. So for example, there seems to be a singularity on the event horizon in the Schwarzschild coordinates - because $g_{00}$ goes to zero, and so on. However, this singularity is fake. It's just the artifact of using coordinates that differ from the "natural ones" - where the solution is smooth - by a singular coordinate transformation.


As long as the diffeomorphism symmetry is preserved, one is always allowed to perform any coordinate transformation. For a singular one, any configuration may start to look singular. This was case in classical general relativity and it is the case for any theory that respects the symmetry structure of general relativity.


The conclusion is that coordinate singularities can never go away. One is always free to choose or end up with coordinate systems where these fake singularities appear. And some of these coordinate systems are useful - and will remain useful: for example, the Schwarzschild coordinates are great because they make it manifest that the black hole solution is static. Physics will never stop using such singularities. What about the other types of the singularities?


Spacelike singularities


Most famously, these include the singularity inside the Schwarzschild black hole and the initial Big Bang singularity.


Despite lots of efforts by quantum cosmologists (meaning string theorists working on cosmology), especially since 1999 or so, the spacelike singularities remain badly understood. It's mainly because they inevitably break all supersymmetry. The existence of supersymmetry implies the existence of time-translational symmetry - generated by a Hamiltonian, the anticommutator of two supercharges. However, this symmetry is brutally broken by a spacelike singularity.


So physics as of 2011 doesn't really know what's happening near the very singular center of the Schwarzschild black hole; and near the initial Big Bang singularity. We don't even know whether these questions may be sharply defined - and many people guess that the answer is No. The latter problem - the initial Big Bang singularity - is almost certainly linked to the important topics of the vacuum selection. The eternal inflation answers that nothing special is happening near the initial point. A new Universe may emerge out of its parent; one should quickly skip the initial point because nothing interesting is going on at this singular place, and try to evolve the Universe. The inflationary era will make the initial conditions "largely" irrelevant, anyway. However, no well-defined framework to calculate in what state (the probabilities...) the new Universe is created is available at this moment.


You mentioned the no-boundary initial conditions. I am a big fan of it but it is not a part of the mainstream description of the initial singularity as of 2011 - which is eternal inflation. In eternal inflation, the initial point is indeed as singular as it can get - surely the curvatures can get Planckian and maybe arbitrarily higher - however, it's believed by the eternal inflationary cosmologists that the Universe cannot really start at this point, so they think it's incorrect to imagine that the boundary conditions are smooth near this point in any sense, especially in the Hartle-Hawking sense.



The Schwarzschild singularity is different - because it is the "final" spacelike singularity, not an initial condition - and it's why no one has been talking about smooth boundary conditions over there. Well, there's a paper about the "black hole final state" but even this paper has to assume that the final state is extremely convoluted, otherwise one would macroscopically violate the predictions of general relativity and the arrow of time near the singularity.


While the spacelike singularities remain badly understood, there exists no solid evidence that they are completely avoided in Nature. What quantum gravity really has to do is to preserve the consistency and predictivity of the physical theory. But it is not true that a "visible" suppression of the singularities is the only possible way to do so - even though this is what people used to believe in the naive times (and people unfamiliar with theoretical physics of the last 20 years still believe so).


Timelike singularities


The timelike singularities are the best understood ones because they may be viewed as "classical static objects" and many of them are compatible with supersymmetry which allowed the physicists to study them very accurately, using the protection that supersymmetry offers.


And again, it's true that most of them, at least in the limit of unbroken supersymmetry and from the viewpoint of various probes, remained very real. The most accurate description of their geometry is singular - the spacetime fails to be a manifold, i.e. diffeomorphic to an open set near these singularities. However, this fact doesn't lead to any loss of predictivity or any inconsistency.


The simplest examples are orbifold singularities. Locally, the space looks like $R^d/\Gamma$ where $\Gamma$ is a discrete group. It's clear by now that such loci in spacetime are not only allowed in string theory but they're omnipresent and very important in the scheme of things. The very "vacuum configuration" typically makes spacetime literally equal to the $R^d/\Gamma$ (locally) and there are no corrections to the shape, not even close to the orbifold point. Again, this fact leads to no physical problems, divergences, or inconsistencies.


Some of the string vacua compactified on spaces with orbifold singularities are equivalent - dual - to other string/M-theory vacua on smooth manifolds. For example, type IIA string theory or M-theory on a singular K3 manifold is equivalent to heterotic strings on tori with Wilson lines added. The latter is non-singular throughout the moduli space - and this fact proves that the K3 compactifications are also non-singular from a physics viewpoint - they're equivalent to another well-defined theory - even at places of the moduli spaces where the spacetime becomes geometrically singular.


The same discussion applies to the conifold singularities; in fact, orbifold points are a simple special example of cones. Conifolds are singular manifolds that include points whose vicinity is geometrically a cone, usually something like a cone whose base is $S^2\times S^3$. Many components of the Riemann curvature tensor diverge. Nevertheless, physics near this point on the moduli space that exhibits a singular spacetime manifold - and physics near the singularity on the "manifold" itself - remains totally well-defined.


This fact is most strikingly seen using mirror symmetry. Mirror symmetry transforms one Calabi-Yau manifold into another. Type IIA string theory on the first is equivalent to type IIB string theory on the second. One of them may have a conifold singularity but the other one is smooth. The two vacua are totally equivalent, proving that there is absolutely nothing physically wrong about the geometrically singular compactification. We may be living on one. The equivalence of the singular compactifications and non-singular compactifications may be interpreted as a generalized type of a "coordinate singularity" except that we have to use new coordinates on the whole "configuration space" of the physical theory (those related by the duality) and not just new spacetime coordinates.


It's very clear by now that some singularities will certainly stay with us and that the old notion that all singularities have to be "disappeared" from physics was just naive and wrong. Singularities as a concept will survive and singular points at various moduli spaces of possibilities will remain there and will remain important. Physics has many ways to keep itself consistent than to ban all points that look singular. That's surely one of the lessons physics has learned in the duality revolution started in the mid 1990s. Whenever physics near/of a singularity is understood, we may interpret the singularity type as a generalization of the coordinate singularities.



At this point, one should discuss lots of exciting physics that was found near singularities - especially new massless particles and extended objects (that help to make singularities innocent while preserving their singular geometry) or world sheet instantons wrapped on singularities (that usually modify them and make them smooth). All these insights - that are cute and very important - contradict the belief that there's no "valid physics near singularities because singularities don't exist". Spacetime manifolds with singularities do exist in the configuration space of quantum gravity, they are important, and they lead to new, interesting, and internally consistent phenomena and alternative dual descriptions of other compactifications that may be geometrically non-singular.


quantum mechanics - Time-energy uncertainty relation


In the book on QM by D.J. Griffiths, the time-energy uncertainty relation is in general proved for any observable $Q$ whose operator is not a function of time.




  • Now it has been derived that if the uncertainty in energy is $\Delta E$ and the time required for an observable's expectation value to change by its standard deviation is $\Delta T$, then $$\Delta E \Delta T\geq h/4\pi$$




  • Now, my question is: for the time evolution of a free-particle wavefunction, where the standard deviation itself is a function of time, how can we interpret this result? Because then we cannot define a time in which the expectation value changed by one standard deviation, as the standard deviation is also a function of time.






Answer



Quoting Griffiths,



$\Delta t$ represents the amount of time it takes the expectation value of $Q$ to change by one standard deviation.



Your question stems from the fact that Griffiths did not spell out the times at which each quantity is evaluated in that phrase.


First, the standard deviation always depends on time in a demonstration such as this one, since $\sigma_Q^2 = \langle Q^2 \rangle - \langle Q \rangle^2$ and $\langle Q \rangle$ depends on time, and so does $\langle Q^2 \rangle$, a priori, and there is no reason to postulate that those dependencies cancel.


Then in the phrase I quoted above, you have to read:




  • the change of the expectation value from time $t$ to time $t+\Delta t$; and

  • the standard deviation at time $t$.


It is quite clear if we look at the formula this quote comments:


$$\sigma_Q = \left|\frac{d\langle Q\rangle}{dt}\right|\Delta t$$


From the time $t$ to the time $t+\Delta t$, $\langle Q\rangle$ changes by $\left|\frac{d\langle Q\rangle}{dt}\right|\Delta t$ if $\Delta t$ is small enough: this is a classic linear approximation. And then we compare that to $\sigma_Q$ at time $t$, since this formula is correct only if $\sigma_Q$ is evaluated at time $t$, as should be clear from the demonstration in Griffiths, or from this answer to the question quoted by @Qmechanic in a comment to your question.


quantum mechanics - Deriving the Angular Momentum Commutator Relations by using $epsilon_{ijk}$ Identities


I've been trying to derive the relation



$$[\hat L_i,\hat L_j] = i\hbar\epsilon_{ijk} \hat L_k $$


without doing each permutation of $\{x,y,z\}$ individually, but I'm not really getting anywhere. Can someone help me out, please?


I've tried expanding the $\hat L_i = \epsilon_{nmi} \hat x_n \hat p_m$ and using some identities for the $\epsilon_{ijk} \epsilon_{nmi}$ which gives me the LHS as something like $-\hbar^2\delta_{ij}$ but I've got no further than this.



Answer



Since $L_i = \epsilon_{ijk} x_jp_k$ (operators) one has


$$ [L_i,L_j] = \epsilon_{iab}\epsilon_{jcd}[x_ap_b,x_cp_d] = \epsilon_{iab}\epsilon_{jcd}(x_a[p_b,x_c]p_d + x_c[x_a,p_d]p_b) $$


This first step relies on the commutator identity $[AB,CD] = A[B,CD] + [A,CD]B$, and then on performing the same kind of expansion again ($[A,CD] = C[A,D] + [A,C]D$). The only terms that 'survive' are those involving the canonically conjugate variables; terms like $[x_a,x_c] = 0$ drop out. So,


$$ [L_i,L_j] = \epsilon_{iab}\epsilon_{jcd}(x_a\underbrace{[p_b,x_c]}_{-i\hbar \delta_{b,c}}p_d + x_c\underbrace{[x_a,p_d]}_{i\hbar \delta_{ad}}p_b) = i\hbar \epsilon_{iab}\epsilon_{jcd}(-x_ap_d \delta_{bc} + x_cp_b\delta_{ad}) $$


Because of the definition of the Levi-Civita tensor, you can absorb a minus sign by just permuting any two neighbouring indices. Furthermore, after carrying out the deltas, I like to rename $x_cp_b$ to $x_ap_d$ in the second term. This leads to


$$ [L_i,L_j] =i\hbar(\epsilon_{iab}\epsilon_{bjd} + \epsilon_{dib}\epsilon_{bja})x_ap_d$$



Keep in mind that any index apart from $i$ and $j$ is summed over: $$[L_i,L_j] = i\hbar(\delta_{ij}\delta_{ad} - \delta_{id}\delta_{aj} + \delta_{dj}\delta_{ia} - \delta_{da}\delta_{ij})x_ap_d = i\hbar(x_ip_j - x_jp_i) = i\hbar \epsilon_{ijk}L_k $$


I suggest you work out the missing parts to understand how this Levi-Civita business works.
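As a quick numerical cross-check of the contraction identity used above, $\epsilon_{iab}\epsilon_{bjd} = \delta_{ij}\delta_{ad} - \delta_{id}\delta_{aj}$ (a small brute-force sketch):

```python
import numpy as np

# Levi-Civita symbol in three dimensions.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

delta = np.eye(3)
lhs = np.einsum('iab,bjd->ijad', eps, eps)                  # epsilon_{iab} epsilon_{bjd}
rhs = (np.einsum('ij,ad->ijad', delta, delta)
       - np.einsum('id,aj->ijad', delta, delta))            # delta_ij delta_ad - delta_id delta_aj
print(np.allclose(lhs, rhs))                                # True
```

The same brute-force approach also lets you check the final contraction of the two epsilon products against the four Kronecker deltas.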


Monday 24 April 2017

waves - Why does water appear still near the shore



Often when there's a light wind, I notice this behaviour on lakes: there appears to be a very distinct line between the water with waves and the calm water. I don't know how well it comes through in the picture, but the line really seems to follow the shoreline. Does anyone have any explanation for this phenomenon?


[Photo: a calm band of water along the shoreline, separated from the rippled water by a distinct line]


EDIT


Thanks for all the input. The theories so far and some comments regarding them


1 It's an optical illusion. (@Cheeku in a comment) That's what I thought at first, but it most definitely is not. I walked back and forth along the shore and the line stayed at a constant distance.


2 The water near the shore is protected from the wind (@David Rose in comment) I really don't think this is the case since the wind was blowing towards the red houses and that part of the lake has the "line" furthest out in the water.


3 The calmness is due to the water being shallow near the shore (@David Rose and @dmckee in comments). Seemed reasonable (sort of) at first, and also the lake is really shallow near the red houses where the "line" is really far out. But another observation that I failed to mention earlier (and of which I couldn't get a picture) is that on the other side of the lake from the red buildings the lake is also really shallow, but there the waves went all the way to the shore. Also, @dmckee's link busted this theory because the waves were really small (on the order of mm), and I know for a fact that the water is a couple of meters deep in the foreground of this picture, and even at the shore next to the red buildings the water is about a meter deep a couple of meters from the beach.


4 There's a thin oily film on the surface that is pushed by the wind towards the shore, where it "piles up" and alters the surface-tension properties of the liquid (@David Rose in a comment and @docscience in an answer). By far the most reasonable theory in my opinion, since it is a rather small lake with a muddy bottom that contains a lot of decomposing matter, and behind me there are lots of roads and other things that could leak pollutants into the lake. Also, with the added observation I mentioned in response to theory 3 (that the waves spread all the way to the shoreline on the side of the lake "closest" to the wind), this makes for a really nice theory. Sadly I can't test it since I can't really do the water analysis, but if there is no more plausible theory in a couple of days I will accept @docscience's answer.



Answer



This question can't be answered from the picture alone without additional observations/data, but @David Rose has given a good list of hypotheses. The size of the waves appears to be within the regime of 'capillary waves'. Capillary waves are surface boundary waves with wavelengths on the order of millimeters up to a centimeter or two, for which the restoring force is dominantly surface tension rather than gravity. Given that, the hypothesis of surface tension is the more arguable cause.



I would suppose that there are oils on the water surface, perhaps from run-off, that have coalesced and that were driven to that side of the lake by wind. The presence of oil reduces the traction forces of the wind. The action of oil in calming water has long been known, since the ancients, who used olive oil to still the water around ocean-going vessels.


To test that hypothesis would require analysis of surface fluid samples from each location.


homework and exercises - Most general Lagrangian in CFT in 0+1D


My question is about $CFT_1$. Page 18 of this says that $$L={\frac{\overset{.}{Q}^2}{2} - \frac{g}{2Q^2}}\tag{1.11}$$ is the most general Lagrangian that preserves time translation and scale invariance in $CFT_1$. How does one prove that?



Answer



Classically a theory is invariant under a transformation if its action is invariant (up to boundary terms). In our case a conformal transformation is given by $$t'=\lambda t\\ Q'=\lambda ^{-\Delta}Q $$ where $\Delta$ is the scaling dimension of Q, which is just its energy dimension classically.


For now let's assume a Lagrangian with only the kinetic term and infer the dimension. To this end plug the transformed variables into the action \begin{align} S' &=\int \mathrm{d}t' \frac{1}{2}\left(\frac{\mathrm{d}Q'}{\mathrm{d}t'} \right)^2 \\ &=\lambda^{-2\Delta-1}S \end{align} We can read off that $\Delta = -\frac{1}{2}$.



We can now try to add more terms to our Lagrangian but we would not want to include additional kinetic terms so we restrict them to have the form $g_n Q^n$. Plugging these terms into the action we see that they transform as $$\mathrm{d}t'\ Q'^n = \lambda^{1+\frac{n}{2}} \mathrm{d}t \ Q^n $$ which implies $n=-2$ if we require conformal invariance. We thus showed that the most general Lagrangian is given by $$\mathcal{L} = \frac{1}{2}\dot{Q}^2 - \frac{g}{2Q^2} $$
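A small numerical cross-check of that invariance (a sketch: take an arbitrary smooth trial path $Q(t)$, arbitrary illustrative values of $g$ and $\lambda$, and compare the action of the original path on $[a,b]$ with that of the rescaled path $Q'(t) = \lambda^{1/2}Q(t/\lambda)$ on $[\lambda a,\lambda b]$):

```python
import numpy as np
from scipy.integrate import quad

g, lam = 0.7, 2.3                        # illustrative coupling and scale factor
a, b = 1.0, 2.0                          # integration interval (kept away from Q = 0)

Q  = lambda t: 2.0 + np.sin(t)           # arbitrary smooth trial path
dQ = lambda t: np.cos(t)
L  = lambda q, qdot: 0.5 * qdot**2 - g / (2 * q**2)

S_orig, _   = quad(lambda t: L(Q(t), dQ(t)), a, b)
# Rescaled path Q'(t) = lam**0.5 * Q(t/lam), so dQ'/dt = lam**-0.5 * dQ(t/lam).
S_scaled, _ = quad(lambda t: L(lam**0.5 * Q(t / lam), lam**-0.5 * dQ(t / lam)),
                   lam * a, lam * b)
print(S_orig, S_scaled)                  # equal: the action is scale invariant
```

Replacing the $Q^{-2}$ potential by, say, $Q^{-3}$ makes the two numbers disagree, which is the numerical counterpart of $n=-2$ being singled out.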


quantum mechanics - Can we write the wave function of the living things? If yes then how?




In quantum mechanics we learned that everything has a wave function associated with it. My question is: can we write down the wave functions of things? How can we write down the wave functions of things like animals, the human eye, the motion of a snake, etc.?



Answer



There are 37.2 trillion cells in a typical human body (probably a good few more in mine ;), then in each cell there are 20 trillion atoms, and then you have to obtain the wave function for each of the electrons...


Actually, it may well be that you cannot describe a wavefunction for a macroscopic object, like a human body. In the study of quantum mechanics, we are usually presented with the exercise of writing a wave equation for a single microscopic particle, an electron, proton and so on.


But a macroscopic object is "joined" to its surroundings by entanglement, unlike the single-electron wavefunctions we are used to dealing with, which do not need to take account of this.


If two (or more) systems are entangled, such as the parts of our body and their surroundings, as in this case, then we cannot describe the wave function directly as a product of separate wavefunctions, as I implied incorrectly in my first line.


However, by the use of reduced density matrices, as pointed out by Mitchell Porter below, we can describe entangled states. With the number of wave functions involved, this would be theoretically possible but, in practice, not a feasible option.


Incidentally, this may be one reason why the STAR TREK, "beam me aboard" transporter system may be rather difficult to achieve, but that is probably covered elsewhere on this site.


particle physics - Does science show that matter and the universe were created out of nothing?


I recently got into an argument with an atheist regarding the origin of the universe. I told him that it is an unsolved problem in physics, and in cosmogony in particular. But he kept saying that it has already been confirmed by scientific evidence that "matter and the universe were created out of nothing by random fluctuations", citing these two statements below:



Inflation is today a part of the Standard Model of the Universe supported by the cosmic microwave background (CMB) and large scale structure (LSS) datasets. Inflation solves the horizon and flatness problems and naturally generates density fluctuations that seed LSS and CMB anisotropies, and tensor perturbations (primordial gravitational waves).



http://www.worldscientific.com/doi/abs/10.1142/S0217751X09044553



The inflation theory is a period of extremely rapid (exponential) expansion of the universe prior to the more gradual Big Bang expansion, during which time the energy density of the universe was dominated by a cosmological constant-type of vacuum energy that later decayed to produce the matter and radiation that fill the universe today.




http://wmap.gsfc.nasa.gov/universe/bb_cosmo_infl.html


I'm not really familiar with scientific jargon, but I'm dubious whether these statements, especially the bolded part, actually translate to or mean "matter was created out of nothing by random fluctuations." Can anyone translate these statements into layman's terms?




electromagnetism - Can we think of the EM tensor as an infinitesimal generator of Lorentz transformations?


I'm asking this question because I'm feeling a bit confused about how Lorentz transformations relate to the electromagnetic tensor, and hope someone can help me clear out my possible misunderstandings. Please excuse me if the answer is obvious.


In special relativity, the EM field is described by the tensor



$$F^{\mu\nu} = \begin{pmatrix}0 & -E_{x} & -E_{y} & -E_{z}\\ E_{x} & 0 & -B_{z} & B_{y}\\ E_{y} & B_{z} & 0 & -B_{x}\\ E_{z} & -B_{y} & B_{x} & 0 \end{pmatrix}$$


which is an anti-symmetric matrix. Then, recalling the one-to-one correspondence between skew-symmetric matrices and orthogonal matrices established by Cayley's transformation, one could view this tensor as an infinitesimal rotation matrix, that is, a generator of 4-dim pseudo-rotations. This seems at first natural: given that space-time 4-velocities and 4-momenta for a fixed mass particle have fixed 4-vector norms, all forces (including EM) and accelerations on the particle will be Lorentz transformations. However, this page is the only reference I've found which states such a relationship (and I don't fully understand the discussion which follows, which I find somewhat disconcerting).



  • Is this line of reasoning correct?


On the other hand, according to Wikipedia, a general Lorentz transformation can be written as an exponential,


$$\mathbf \Lambda(\mathbf ζ,\mathbf θ) = e^{-\mathbf ζ \cdot \mathbf K + \mathbf θ \cdot \mathbf J}$$


where (I'm quoting) $\mathbf J$ are the rotation generators which correspond to angular momentum, $\mathbf K$ are the boost generators which correspond to the motion of the system in spacetime, and the axis-angle vector $\mathbf θ$ and rapidity vector $\mathbf ζ$ are altogether six continuous variables which make up the group parameters in this particular representation (here the group is the Lie group $SO^+(3,1)$). Then, the generator for a general Lorentz transformation can be written as $$-\mathbf ζ \cdot \mathbf K + \mathbf θ \cdot \mathbf J = -ζ_xK_x - ζ_yK_y - ζ_zK_z + θ_xJ_x + θ_yJ_y +θ_zJ_z = \begin{pmatrix}0&-\zeta_x&-\zeta_y&-\zeta_z\\ \zeta_x&0&-\theta_z&\theta_y\\ \zeta_y&\theta_z&0&-\theta_x\\ \zeta_z&-\theta_y&\theta_x&0\end{pmatrix}.$$



  • How does this matrix relate with the EM tensor? By comparison between the two matrices, it would appear that the components of the electric and magnetic field ($\mathbf E$ and $\mathbf B$) should be linked, respectively, with $\mathbf ζ$ and $\mathbf θ$. I'm missing what the physical interpretation of this would be.




Answer



Physically, the only thing that the electromagnetic field tensor and a Lorentz transformation generator have in common is that they both happen to be antisymmetric rank 2 tensors. The link doesn't go any farther than that.


However, this coincidence does lead to a few analogies. For example, if you know about Lorentz transformations, then you know that an antisymmetric rank 2 tensor contains two three-vectors inside it, namely $\boldsymbol{\zeta}$ and $\boldsymbol{\theta}$. Then if somebody tells you the electromagnetic field is the same kind of tensor, you'll automatically know that it can be broken down into two three-vectors, namely the electric and magnetic fields. But this is a purely mathematical analogy.


A more physical result comes from the equation of motion $$\frac{d u_\mu}{d\tau} = (q/m) F_{\mu\nu} u^\nu.$$ where $u^\mu$ is the four-velocity; you can expand this in components to verify it's just the Lorentz force law. Now, comparing this with an infinitesimal (active) Lorentz transformation $$\Delta u_\mu = \Lambda_{\mu\nu} u^\nu$$ we see that the Lorentz force is equivalent to an active Lorentz transformation acting on the four-velocity, with generator $(q/m) F_{\mu\nu}$.




We can do some quick sanity checks:



  • Magnetic fields cause rotations. If we start with a nonzero three-velocity and apply a magnetic field, the velocity spins around.

  • Electric fields cause boosts. If we apply an electric field, the three-velocity grows in the direction of the field, just like it does in the direction of a boost.
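These two checks can also be done numerically. A minimal sketch (assuming units with $c=1$, metric signature $(+,-,-,-)$, the $F^{\mu\nu}$ matrix written in the question, and arbitrary illustrative values of $q/m$, $\mathbf E$ and $\mathbf B$): for a constant field the four-velocity evolves as $u(\tau) = \exp[(q/m)F^{\mu}{}_{\nu}\,\tau]\,u(0)$, a one-parameter family of active Lorentz transformations.

```python
import numpy as np
from scipy.linalg import expm

eta = np.diag([1.0, -1.0, -1.0, -1.0])        # metric, signature (+,-,-,-)

def F_upper(E, B):
    """Contravariant F^{mu nu}, as written in the question."""
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    return np.array([[0., -Ex, -Ey, -Ez],
                     [Ex,  0., -Bz,  By],
                     [Ey,  Bz,  0., -Bx],
                     [Ez, -By,  Bx,  0.]])

def evolve(E, B, u0, q_over_m, tau):
    G = F_upper(E, B) @ eta                   # mixed tensor F^mu_nu, the generator
    return expm(q_over_m * tau * G) @ u0

v = 0.5
u0 = np.array([1, v, 0, 0]) / np.sqrt(1 - v**2)      # four-velocity with v_x = 0.5

uB = evolve(E=[0, 0, 0], B=[0, 0, 1.0], u0=u0, q_over_m=1.0, tau=0.7)   # pure B_z: rotation
uE = evolve(E=[1.0, 0, 0], B=[0, 0, 0], u0=u0, q_over_m=1.0, tau=0.7)   # pure E_x: boost

for label, u in (("B field", uB), ("E field", uE)):
    print(label, u, " u.u =", u @ eta @ u)    # the norm stays +1: a Lorentz transformation
```

In the magnetic case the spatial part of $u$ rotates in the $x$-$y$ plane with constant length, while in the electric case $u^0$ and $u^1$ grow like $\cosh$ and $\sinh$; in both cases $u_\mu u^\mu$ is preserved, which is the defining property of a Lorentz transformation.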



Two caveats to this result:



  • As stated in the link you gave, this result doesn't allow us to think of electromagnetism as a geometric phenomenon, because different particles have different values of $q/m$ and hence are acted on by different Lorentz transformations. It's just a nice heuristic.

  • Be careful to distinguish between active and passive Lorentz transformations. Most of the ones you'll run into are passive (i.e. used to switch between coordinate systems), but as ACM points out, such transformations are described by matrices, not tensors. Above, I'm considering active rotations and boosts, and everything is taking place in a single coordinate system.


Sunday 23 April 2017

quantum mechanics - Why do wave packets spread out over time?


Why do wave functions spread out over time? Where in the math does quantum mechanics state this? As far as I've seen, the waves are not required to spread, and what does this mean if they do?




special relativity - Mass-Energy relation


Einstein's mass-energy relation states $E=mc^2$. It means that if the energy of a particle increases then the mass also increases, and vice versa. My question is: what is the actual meaning of the statement "mass increases"? Is the mass of the particle really increasing, or what?



Answer



The rest mass of an object is, by definition, independent of the energy. But all other forms of mass are indeed increasing with the energy, as $E=mc^2$. With the relativistic interpretation of the kinetic energy, the total mass is $$ m = \frac{m_0}{\sqrt{1-v^2/c^2}}$$ Here, $m_0$ is the mass measured at rest, i.e. the rest mass. The corrected, total mass goes to infinity if $v\to c$ and it holds for the following interpretations of the mass:




  • the inertial mass, i.e. the resistance towards the acceleration, increases. For example, the protons at the LHC have mass about 4,000 times larger than the rest mass (the energy is 4 TeV), and that's the reason why it's so hard to accelerate them above their speed of 99.9999% of the speed of light and e.g. surpass the speed of light. It's impossible to surpass it because the object is increasingly heavy, as measured by the inertial mass





  • the conserved mass. If you believe that the total mass of all things is conserved, it's true but only if you interpret the "total mass" as the "total energy over $c^2$". In this conserved quantity, the fast objects near the speed of light indeed contribute much more than their rest mass. If you considered the total rest mass of objects, it wouldn't be conserved




  • the gravitational mass that enters Newton's force $Gm_1m_2/r^2$. If an object is moving back and forth, at a speed close to the speed of light, it produces a stronger gravitational field than the same object at rest. For example, if you fill a mirrored box with lots of photons that carry some huge energy and therefore "total mass" $m=E/c^2$, they will increase the gravitational field of the box even though their rest mass is zero. Be careful: in general relativity, the pressure from the photons (or anything else) creates a gravitational field (it curves spacetime in its own, independent way), too.




Despite this Yes Yes Yes answer to the question whether the total mass indeed increases, Crazy Buddy is totally right that especially particle physicists tend to reserve the term "mass" for the "rest mass" and they always prefer the word "energy" for the "total mass" times $c^2$.


general relativity - Using the gravitational pseudotensor on a finite space


I get that the gravitational pseudotensor is generally only used for asymptotically flat spaces (aka quasi-Minkowski). In these cases, conserved total energy momenta can often be found for some system (and the various pseudotensors out there tend to agree with one another).


I recently came across Weinberg (Gravitation and Cosmology p.167) who states the reason for choosing quasi-Minkowski coordinates is to ensure convergence of the integral for energy and momenta. If this is the sole reason, then can't I utilize the gravitational pseudotensor in a finite space? (The Einstein universe being the simplest example I can think of)



Answer



Even in a compact or finite space there is no well-defined, definitive pseudotensor that is free of problems. This is known as the quasi-local mass issue, and from anything I've seen it is not settled, except that in special cases a number of proposed pseudotensors give the same answer. But not in all cases, and not in general.


See a relatively recent review of the status and different versions of that at http://link.springer.com/article/10.12942/lrr-2009-4 by Szabados. But it doesn't give any easy general conclusions.


The different mass definitions (some people call it energy, but in papers the quantity that should be invariant is more accurately labelled mass, sometimes distinguished from the energy that should be part of a 4-vector), like the ADM and Bondi masses, work fine in asymptotically flat spacetimes, but they always give the global mass, not something local or pseudo-local, nor a density. They are conserved, and one can use them to compute masses and energies where the mass-energy distribution is isolated (does not extend to infinity) and when you are far enough away. So this works well for the gravitational radiation from black holes, but not at all well for the mass (or mass-energy) in an expanding universe, even for local regions (the redshift causes energy loss, the cosmological energy causes gains). The quasi-local masses like that of Hawking and others also have their problems.



And as @Rankin correctly stated in his comment, Weinberg was not talking about finite spaces, but rather about using Minkowski-like coordinates at infinity; if you instead use spherical coordinates you get infinities, so there's nothing covariant about these quantities even in the known cases.


thermodynamics - Is there a mathematical relationship between time and entropy?



If there is a relation between time and entropy, what is it?
Are there limitations for this equation?



Or if there is no relationship between them, what is the current state of research?




quantum mechanics - How can non-locality of entanglement be explained only in terms of correlation?


I'd like to ask a very specific question about the entanglement nonlocality. I know that it is not possible to send a faster than light signal using this phenomenon, so that's not what I'm asking about. Still, the entanglement - let's think in particular about the GHZ experiment and about the example in this answer - seems to involve a sort of signal exchanged between the entangled spins. After reading the other posts and comments, the main explanation I've found is based on the following points:



  1. we can dismiss realism in favor of locality => that would imply there are no hidden variables and the results of the measurements cannot be predefined

  2. the results can be explained in terms of classical correlation, without any causal effect (on those spacelike separated measurement processes)


While point 1 is clear enough to me (after the answer, comments and links in reply to my previous question), I fail to see - in math equations - where point 2 comes from. Actually, the classical concept of correlation is defined for two separate probability distributions, which by definition cannot include any reference to each other. In quantum mechanics the mathematical part boils down to either the collapse of a wavefunction (=> nonlocal, not a correlation) or a state vector that includes both indices (i.e. refers to both of the spacelike separated components), and - afaik - that translates back to a classical concept of causal effect, not to what is meant by correlation.


Difference between correlation and dependence



Classically, correlation does not imply dependence, and it is not expressed - by definition - as a relation of dependence. You simply have two separate series of events and their probability distributions, and then you compute their correlations. There are several articles showing that entanglement could violate local causality; for example, look at the Experimental test of nonlocal causality.



Local causality is the combination of what we call causal parameter independence— there is no direct causal influence from the measurement setting Y (X) to the other party’s outcome A (B) — and causal outcome independence, stating that there is no direct causal influence from one outcome to the other.



more specifically, they write



Local causality captures the idea that there should be no causal influence from one side of the experiment to the spacelike separated other side. Formally, this is a constraint on the conditional probability distributions: p(a|b,x,y,λ) = p(a|x,λ) and p(b|a,x,y,λ) = p(b|y,λ). We would like to stress that local causality is not equivalent to signal locality, which follows from special relativity and imposes constraints on the observable probabilities only: p(a|x, y) = p(a|x) and p(b|x, y) = p(b|y). The natural generalization of signal locality to include the hidden variable is typically referred to as parameter independence or locality: p(a|x,y,λ) = p(a|x,λ) and p(b|x,y,λ) = p(b|y,λ) (36). Parameter independence, together with what is often referred to as outcome independence p(a|b,x,y,λ) = p(a|x,y,λ) and p(b|a,x,y,λ) = p(b|x,y,λ), then implies local causality.



From the same article, the conclusion is




quantum mechanics allows for correlations that violate this inequality, therefore witnessing its incompatibility with causal models that satisfy local causality and measurement independence.



In other words, the point is that Quantum Mechanics satisfies setting independence but violates outcome independence (and - afaik - there's no clear explanation why), so it also violates local causality (implied by both locality and outcome independence).



Answer



The main point to consider is that the non-signalling theorem cannot be reduced to an absolute statement that



no influence can exist which is faster than the speed of light.



The non-signalling theorem does not prohibit the existence of instantaneous influences in the formation of nonlocal correlations: such influences are indeed superluminal.


In other words, outcome independence is not what matters for locality; the non-signalling theorem was introduced precisely for this purpose, as a theorem for avoiding conflict with special relativity:




no superluminal influence can exist that can be controlled for signalling purposes.



A more in-depth read on the subject can be found in this IOP article.
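As an aside (not from the cited papers), here is a minimal Python sketch of the standard CHSH version of a Bell-type inequality, using the textbook singlet-state correlation $E(a,b)=-\cos(a-b)$. It simply shows that quantum correlations can exceed the local-causal bound of 2, which is the kind of violation referred to above.

```python
# Minimal sketch: CHSH value for the singlet state, E(a, b) = -cos(a - b).
# Any local-causal (local hidden variable) model obeys |S| <= 2;
# quantum mechanics reaches 2*sqrt(2) at the angles below.
import math

def E(a, b):
    """Quantum correlation of spin measurements along angles a and b (singlet)."""
    return -math.cos(a - b)

# Standard optimal measurement angles (radians)
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S), 2 * math.sqrt(2))  # both are about 2.828 > 2
```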


Particle pattern of double slit experiment?


I hope this is not a stupid question, but has the particle pattern of a double slit experiment actually been observed or is it just in theory?


It seems there are many results/pictures of the wave pattern, yet all the results/pictures of any particle pattern are either animations or impressions of what it looks like.


I'm curious to see this particle pattern when the photons are being observed.




homework and exercises - RMS Free Path vs Mean Free Path


I am trying to determine the mathematical difference between mean free path and root-mean-square free path. For an ideal gas, the relaxation time is $$\tau=\frac{1}{\sqrt2 \pi nd^2 \bar v}$$ and the mean free path is $$\Lambda=\tau \bar v $$ so the velocities cancel. When I am calculating the RMS free path, I am assuming I would use $$\Lambda_{rms}=v_{rms}\tau_{rms}$$ and I am assuming $$\tau_{rms}=\frac{1}{\sqrt2 n \pi d^2 v_{rms}}$$


This would again cause $v_{rms}$ to cancel, leaving the RMS free path with the same value as the mean free path, which seems odd to me. Should I just use the regular $\tau$ instead of $\tau_{rms}$?



Answer



You're running into a tricky property of statistical variables: what is true for an individual particle is not necessarily true when averaged across a distribution. In particular, you can say that the distance one particle travels between collisions, its free path length $\ell$, is equal to that one particle's speed times its free path time $t$:


$$\ell = vt$$


but that does not necessarily mean that the mean free path length will be equal to the mean (or RMS) speed times the mean free time. So one needs to be careful when trying to write relations like $\Lambda = \bar{v}\tau$.




Consider an event in which a particle traveling at a speed $v$ bounces off one particle and then off another particle a time $t$ later, traversing a distance $\ell$ between bounces. As I mentioned before, $\ell = vt$ for this one event.


Now, in a gas there are many of these events happening. The times $t$, the speeds $v$, and the lengths $\ell$ all have probability distributions: respectively,


$$\begin{align} &p_t(t) & &p_v(v) & &p_L(\ell) \end{align}$$


This means, for example, the probability that a randomly chosen event's path length is between $a$ and $b$ is


$$\int_{a}^{b}p_L(\ell)\mathrm{d}\ell$$


The mean and RMS free paths are then defined as


$$\begin{align} \Lambda &= \int_0^\infty \ell p_L(\ell)\mathrm{d}\ell & \Lambda_\text{rms} &= \sqrt{\int_0^\infty \ell^2 p_L(\ell)\mathrm{d}\ell} \end{align}$$


respectively, and similarly for mean ($\bar{v}$) and RMS ($v_\text{rms}$) speed and time ($\tau$ and $\tau_\text{rms}$).


Now, these probability distributions are uncorrelated except that we know $\ell = vt$ for each event. So we can write



$$p_L(\ell) = \int_0^\infty\int_0^\infty p_v(v) p_t(t)\delta(\ell - vt)\mathrm{d}v\,\mathrm{d}t$$


This is essentially stating that for each length $\ell$, an event with that length can occur for any speed $v$ (with probability $p_v(v)$) and any time $t$ (with probability $p_t(t)$) such that $vt = \ell$. Then you integrate over all possible $v$'s and $t$'s.


Plugging this into the definition of the mean free path,


$$\begin{align} \Lambda &= \int_0^\infty \ell p_L(\ell)\mathrm{d}\ell \\ &= \int_0^\infty \int_0^\infty\int_0^\infty \ell p_v(v) p_t(t)\delta(\ell - vt)\mathrm{d}v\,\mathrm{d}t\,\mathrm{d}\ell \\ &= \int_0^\infty \int_0^\infty vt p_v(v) p_t(t)\mathrm{d}v\,\mathrm{d}t \\ &= \int_0^\infty vp_v(v)\,\mathrm{d}v \int_0^\infty t p_t(t)\,\mathrm{d}t \\ &= \bar{v}\tau \end{align}$$


You can do the same thing for the RMS free path:


$$\begin{align} \Lambda_\text{rms}^2 &= \int_0^\infty \ell^2 p_L(\ell)\mathrm{d}\ell \\ &= \int_0^\infty \int_0^\infty\int_0^\infty \ell^2 p_v(v) p_t(t)\delta(\ell - vt)\mathrm{d}v\,\mathrm{d}t\,\mathrm{d}\ell \\ &= \int_0^\infty \int_0^\infty v^2t^2 p_v(v) p_t(t)\mathrm{d}v\,\mathrm{d}t \\ &= \int_0^\infty v^2p_v(v)\,\mathrm{d}v \int_0^\infty t^2 p_t(t)\,\mathrm{d}t \\ &= v_\text{rms}^2\tau_\text{rms}^2 \end{align}$$


or


$$\Lambda_\text{rms} = v_\text{rms}\tau_\text{rms}$$


thus confirming that the RMS free path is the product of RMS speed and RMS free time, and the mean free path is the product of mean speed and mean free time.
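To make this concrete, here is a minimal Monte Carlo sketch (my own check, not part of the original answer): it draws independent speeds and free times, forms $\ell = vt$, and confirms that $\Lambda = \bar{v}\tau$ and $\Lambda_\text{rms} = v_\text{rms}\tau_\text{rms}$, while mixing the two kinds of average does not work. The particular distributions below are chosen purely for illustration.

```python
# Minimal Monte Carlo sketch: if l = v*t with v and t drawn independently,
# then mean(l) = mean(v)*mean(t) and rms(l) = rms(v)*rms(t).
# The chosen distributions are illustrative, not physically derived.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

v = rng.gamma(shape=3.0, scale=1.0, size=n)   # stand-in speed distribution
t = rng.exponential(scale=2.0, size=n)        # exponential free-time distribution
l = v * t

rms = lambda x: np.sqrt(np.mean(x**2))

print(np.mean(l), np.mean(v) * np.mean(t))    # mean free path = v_bar * tau
print(rms(l), rms(v) * rms(t))                # RMS free path  = v_rms * tau_rms
print(np.mean(l), np.mean(v) * rms(t))        # mixing the two does NOT agree
```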




The other thing to check is whether $\tau = \frac{1}{\sqrt{2}\pi nd^2\bar{v}}$ carries over to RMS quantities. First, though, let's see where that relationship comes from.


In this other answer of mine, I explain why the probability distribution for a particle to experience an interaction after a time $\Delta t$ is proportional to time for very short times:


$$P_\text{int}(\Delta t) \sim n\pi d^2\sqrt{2}\bar{v}\Delta t \equiv a\Delta t$$


(where $n = \frac{N}{V}$ is the number density of particles and $a$ is defined to be that combination of constants). Over longer times, this produces an exponential distribution:


$$p_t(t) = ae^{-at}$$


The mean free time is then


$$\tau = \int_0^\infty tp(t)\mathrm{d}t = \int_0^\infty ate^{-at}\mathrm{d}t = \frac{1}{a} = \frac{1}{n\pi d^2\sqrt{2}\bar{v}} $$


Now, doing the same thing for the RMS free time:


$$\tau_\text{rms}^2 = \int_0^\infty t^2p(t)\mathrm{d}t = \int_0^\infty at^2e^{-at}\mathrm{d}t = \frac{2}{a^2} = \frac{1}{(n\pi d^2\bar{v})^2} $$


or



$$\tau_\text{rms} = \frac{1}{n\pi d^2\bar{v}}$$


So the RMS free time actually differs from the mean free time by a factor of $\sqrt{2}$.


Incidentally, you can express this in terms of the RMS speed, by finding $\bar{v}$ as a function of $v_\text{rms}$, but doing so requires the use of the probability distribution for speed $p_v(v)$, which will be the Maxwell-Boltzmann distribution. I'll omit the derivations and simply copy the results from that Wikipedia page:


$$\begin{align} \bar{v} &= \frac{2}{\sqrt{\pi}}v_p & v_\text{rms} &= \sqrt{\frac{3}{2}}v_p \end{align}$$


so


$$\bar{v} = \sqrt{\frac{8}{3\pi}}v_\text{rms}$$


which finally yields


$$\tau_\text{rms} = \frac{\sqrt{3\pi/8}}{n\pi d^2v_\text{rms}}$$
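As a quick numerical cross-check of the last few relations (again my own sketch, not part of the original answer), one can sample an exponential free-time distribution and a Maxwell-Boltzmann speed distribution and verify $\tau_\text{rms} = \sqrt{2}\,\tau$ and $\bar{v} = \sqrt{8/(3\pi)}\,v_\text{rms}$:

```python
# Numerical check of tau_rms = sqrt(2) * tau for an exponential distribution,
# and v_bar = sqrt(8/(3*pi)) * v_rms for a Maxwell-Boltzmann speed distribution.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000

# Exponential free times with rate a = 1 (arbitrary units): tau = 1/a, tau_rms = sqrt(2)/a
t = rng.exponential(scale=1.0, size=n)
print(np.sqrt(np.mean(t**2)) / np.mean(t), np.sqrt(2))        # about 1.414

# Maxwell-Boltzmann speeds: magnitude of an isotropic Gaussian velocity vector
vel = rng.normal(size=(n, 3))          # unit thermal width, arbitrary units
speed = np.linalg.norm(vel, axis=1)
v_bar = np.mean(speed)
v_rms = np.sqrt(np.mean(speed**2))
print(v_bar / v_rms, np.sqrt(8 / (3 * np.pi)))                # about 0.921
```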


conventions - Covariant and contravariant permutation tensor


I have been reading up on the permutation tensor, and have come across the following expression (in 'Generalized Calculus with Applications to Matter and Forces' by L.M.B.C Campos page 709): $$e_{i_1,\ldots,i_n}=e^{i_1,\ldots,i_n}=\begin{cases} 0 & \text{if repeated indices} \\ 1 & \text{if ($i_1,\ldots,i_n$) is an even permutation} \\ -1 & \text{if ($i_1,\ldots,i_n$) is an odd permutation} \end{cases}$$ However when I try and prove that: $$e_{i_1,\ldots,i_n}=e^{i_1,\ldots,i_n}$$ I instead get: $$e_{i_1,\ldots,i_n}=\textrm{det}(g)e^{i_1,\ldots,i_n}$$ why is there a difference between my result and that given above? Is it to do with the way $e_{i_1,\ldots,i_n}$ has been defined?


My working $$e_{i_1,\ldots, i_n}=\sum_\sigma g_{i_{\sigma(1)}i_1}\ldots g_{i_{\sigma(n)}i_n}e^{i_{\sigma(1)}\ldots i_{\sigma(n)}}$$ where the summation is over all permutations, $\sigma$, of $(1,\ldots,n)$ $$\begin{align}e_{i_1...i_n} &=e^{i_1...i_n}\sum_\sigma \textrm{sgn}(\sigma) g_{i_{\sigma(1)}i_1}\ldots g_{i_{\sigma(n)}i_n}\\ &=\textrm{det}(g)e^{i_1,\ldots,i_n}\end{align}$$



Answer



The symbol defined as \begin{align} e_{i_1i_2\ldots i_n} &= e^{i_1i_2\ldots i_n} = \begin{cases} 0 &\text{if repeated indices} \\ 1 &\text{if even permutation} \\ -1 &\text{if odd permutation} \end{cases}\end{align} is indeed not a tensor. It is called the Levi-Civita symbol (and is a pseudo-tensor density), but we can turn it into a pseudo-tensor, by defining the Levi-Civita tensor \begin{align} \epsilon_{i_1i_2\ldots i_n} \equiv \sqrt{|\det(g)|}e_{i_1i_2\ldots i_n}, \\ \epsilon^{i_1i_2\ldots i_n} \equiv \frac{1}{\sqrt{|\det(g)|}}e^{i_1i_2\ldots i_n}. \end{align} Since you have already found how the symbol reacts to index lowering, you can immediately verify that \begin{align} g_{i_1j_1}g_{i_2j_2}\cdots g_{i_nj_n}\epsilon^{j_1j_2\ldots j_n} = (-1)^s\epsilon_{i_1i_2\ldots i_n}, \end{align} where $s$ is the number of negatives in the metric signature. The $(-1)^s$-factor is why we call it a pseudo-tensor. Note that the definition carries two consequences: \begin{align} \epsilon_{i_1i_2\ldots i_n}\epsilon^{j_1j_2\ldots j_n} = e_{i_1i_2\ldots i_n}e^{j_1j_2\ldots j_n}, \end{align} and crucially, under some frame transformation $\Lambda_k^\ell$ we have \begin{align} \Lambda_{i_1}^{j_1}\Lambda_{i_2}^{j_2}\cdots\Lambda_{i_n}^{j_n}\epsilon_{j_1j_2\ldots j_n} = \mathrm{sgn}(\Lambda)\sqrt{|\det(\Lambda^2g)|}e_{i_1i_2\ldots i_n} = \mathrm{sgn}(\Lambda)\widetilde{\epsilon}_{i_1i_2\ldots i_n}, \end{align} where $\widetilde{\epsilon}_{i_1i_2\ldots i_n}$ is the Levi-Civita tensor of the transformed frame, and $\mathrm{sgn}(\Lambda)$ is the sign of the determinant. We can also consider this the reason to call it a pseudo-tensor, since both the $(-1)^s$ factor and the $\mathrm{sgn}(\Lambda)$ factor are consequences of the same property. In particular, note that $(-1)^s = \mathrm{sgn}(g)$.


A final word of caution: in the literature it is common to use $\epsilon$ or $\varepsilon$ for either the symbol or the tensor (and sometimes one for each), often without clarifying which convention is being used. In such cases it can typically be inferred from the context.
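A quick numerical sanity check (my own sketch, not from the book, with an arbitrary metric chosen for illustration) in three dimensions: lowering all indices of the symbol reproduces the $\det(g)$ factor found in the question, while the tensor defined above picks up only the sign $(-1)^s = \mathrm{sgn}(\det g)$.

```python
# Numerical check in n = 3 dimensions:
#   g_{ia} g_{jb} g_{kc} e^{abc}   = det(g) e_{ijk}           (the symbol)
#   g_{ia} g_{jb} g_{kc} eps^{abc} = sgn(det g) eps_{ijk}     (the Levi-Civita tensor)
import itertools
import numpy as np

def levi_civita_symbol(n=3):
    e = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        # parity of the permutation via counting inversions
        sign = (-1) ** sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        e[perm] = sign
    return e

e = levi_civita_symbol()
g = np.diag([-1.0, 2.0, 3.0])        # arbitrary (diagonal) metric, det(g) = -6

lowered = np.einsum('ia,jb,kc,abc->ijk', g, g, g, e)
print(np.allclose(lowered, np.linalg.det(g) * e))            # True: det(g) factor

det = np.linalg.det(g)
eps_up = e / np.sqrt(abs(det))
eps_down = np.sqrt(abs(det)) * e
lowered_eps = np.einsum('ia,jb,kc,abc->ijk', g, g, g, eps_up)
print(np.allclose(lowered_eps, np.sign(det) * eps_down))     # True: only a sign survives
```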


scattering - Why does the conductivity $\sigma$ decrease with the temperature $T$ in a semi-conductor?


We performed an undergrad experiment where we looked at the resistivity $\rho$ and Hall constant $R_\text H$ of a doped InAs semiconductor with the van der Pauw method. Then we cooled it down to around 40 K and did temperature-dependent measurements up to around 270 K. We were asked to create the following three plots from our measurements and interpret them.



This is the conductivity $\sigma = 1 / \rho$ versus the inverse temperature $T^{-1}$. I see that increasing the temperature (moving to the left on the plot) decreases the conductivity. I do understand that higher temperatures can do that, since the electrons (or holes) experience more scattering off phonons. However, since higher temperatures also mean a larger number of free electrons, I would think that $\sigma$ should go up, not down.


http://chaos.stw-bonn.de/users/mu/uploads/2013-12-07/plot1.png


The density of holes $p = 1/(e R_\text H)$ does increase with the temperature, that is what I would expect:


http://chaos.stw-bonn.de/users/mu/uploads/2013-12-07/plot2.png


And the electron mobility $\mu = \sigma R_\text H$ decreases with the temperature as well:


http://chaos.stw-bonn.de/users/mu/uploads/2013-12-07/plot3.png


Now, I am a little surprised that even though $p$ goes up with $T$, $\mu$ and $\sigma$ go down with $T$. Are the effects of phonon scattering and other things that increase the resistance really that strong?



Answer



Phonon scattering goes up a lot as temperature increases -- faster than electron numbers increase in the conduction band.


Keep in mind that phonons obey the Bose-Einstein distribution, so their numbers scale like



$$N_{BE}=\frac{1}{e^{\frac{\hbar\omega}{k_b T}}-1}$$


In the large $T$ limit, this becomes


$$\frac{k_b T}{\hbar\omega}$$


So their numbers roughly scale linearly with temperature at "high temperature". For phonons, "high temperature" means above the Debye temperature, but that's only ~650K for silicon; you're a good chunk of the way there at room temperature.


However, electrons follow a Fermi-Dirac distribution, so you'd expect their numbers to scale like


$$N_{FD}=\frac{1}{e^{\frac{\epsilon}{k_b T}}+1}$$


In the large T limit, this goes to $\frac{1}{2}$.


There's also a chemical potential for the electrons that limits their numbers. Phonons have no such restriction; given the energy, you can have as many phonons as you want.


Even if you're not talking about high temperatures, note that $N_{BE}>N_{FD}$ is always true.
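To make the scaling explicit, here is a minimal Python sketch, assuming (for illustration only) a phonon energy $\hbar\omega$ and an electron excitation energy $\epsilon$ of 25 meV each. The Bose-Einstein occupation keeps growing roughly linearly with $T$ while the Fermi-Dirac occupation saturates at $1/2$.

```python
# Minimal sketch: Bose-Einstein vs Fermi-Dirac occupation numbers vs temperature.
# hbar*omega and epsilon are both set to 25 meV purely for illustration.
import numpy as np

k_B = 8.617e-5          # Boltzmann constant in eV/K
E = 0.025               # 25 meV, assumed energy scale for both distributions

def n_BE(T):
    return 1.0 / (np.exp(E / (k_B * T)) - 1.0)

def n_FD(T):
    return 1.0 / (np.exp(E / (k_B * T)) + 1.0)

for T in (50, 100, 300, 650, 1000):
    print(f"T = {T:5d} K   N_BE = {n_BE(T):8.3f}   N_FD = {n_FD(T):6.3f}")
# N_BE keeps growing (roughly k_B*T/E at high T); N_FD approaches 0.5;
# and N_BE > N_FD at every temperature.
```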


Saturday 22 April 2017

statistical mechanics - Why this is the density of points in $k$-space?


I'm reading a solid state physics book and there's something which is confusing me, related to the free electron gas.



After solving Schrodinger's equation with $V = 0$ and with periodic boundary conditions, one finds out that the allowed values of the components of $\mathbf{k}$ are:


$$k_x = \dfrac{2n_x\pi}{L}, \quad k_y=\dfrac{2n_y \pi}{L}, \quad k_z = \dfrac{2n_z\pi}{L}.$$


In the book I'm reading the author says that it follows from this that: there is one allowed wavevector - that is, one distinct triplet of quantum numbers $k_x,k_y,k_z$ - for the volume element $(2\pi/L)^3$ of $\mathbf{k}$ space.


After that he says that this implies that in the sphere of radius $k_F$ the total number of states is


$$2 \dfrac{4\pi k_F^3/3}{(2\pi/L)^3}=\dfrac{V}{3\pi^2}k_F^3 = N,$$


where the factor $2$ comes from spin.


Now, why is that the case? Why does it follow from the possible values of $k_x,k_y,k_z$ that this is the density of points in $k$-space? I really can't understand this properly.
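(Editorial aside, not part of the original question: a brute-force count makes the claimed density plausible. Each allowed wavevector corresponds to an integer triplet $(n_x,n_y,n_z)$, so counting triplets with $|\mathbf{k}| \le k_F$ and comparing with $V k_F^3/(6\pi^2)$, i.e. the book's formula per spin state, should agree for large $k_F L$. The values of $L$ and $k_F$ below are arbitrary illustrative choices.)

```python
# Brute-force count of allowed wavevectors k = 2*pi*n/L with |k| <= k_F,
# compared with the continuum estimate V*k_F^3/(6*pi^2) (one spin state).
# L and k_F are arbitrary illustrative values.
import numpy as np

L = 30.0                      # box size (arbitrary units)
k_F = 5.0                     # Fermi wavevector (arbitrary units)
dk = 2 * np.pi / L            # spacing of allowed k components

n_max = int(np.ceil(k_F / dk))
n = np.arange(-n_max, n_max + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing='ij')
k2 = (dk**2) * (nx**2 + ny**2 + nz**2)

count = np.count_nonzero(k2 <= k_F**2)        # exact count of lattice points
estimate = L**3 * k_F**3 / (6 * np.pi**2)     # (4/3) pi k_F^3 / (2 pi / L)^3

print(count, estimate)   # the two agree closely for these values
```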




general relativity - Is an atomic clock itself affected by gravity?


Sometimes I read that only time flows at different rates under different conditions when atomic clocks show a different time compared to atomic clocks at altitude. But sometimes I read that an atomic clock itself ticks slower when gravity is stronger.


Now the question is whether the number of cycles per second of a cesium atomic clock also slows down at higher gravity, or whether only the second is (also?) longer at higher gravity so that there is no difference.




Can a black hole be explained by newtonian gravity?



In the simple explanation that a black hole appears when a big star collapses because its internal pressure can no longer resist its huge gravity, I can't see any need to invoke relativity. Is this correct?



Answer



By a coincidence, the radius of a "Newtonian black hole" is the same as the radius of the Schwarzschild black hole in general relativity. We demand the escape velocity $v$ to be the speed of light $c$, so the potential energy $GMm/R = mc^2/2$, i.e. $$ R = \frac{2GM}{c^2} $$ The agreement, especially when it comes to the numerical factor of $2$, is a coincidence. But one must appreciate that these are totally different theories. In particular, there's nothing special about the speed $c$ in the Newtonian (nonrelativistic) gravity. To be specific, objects are always allowed to move faster than $c$ which means that they may always escape the would-be black hole. There are no real black holes (object from which nothing can escape) in Newton's gravity.
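As a small numerical illustration of the formula above, here is a sketch computing $R = 2GM/c^2$ for a solar-mass object (the Sun is my choice of example; it is not mentioned in the answer):

```python
# Minimal sketch: R = 2*G*M/c^2 for a solar-mass object.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

R = 2 * G * M_sun / c**2
print(f"R = {R:.0f} m")   # roughly 3 km
```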


newtonian mechanics - Can an object be acted upon by both static and kinetic friction at the same time?


Why is it that, as soon as the 'required' static friction for no relative motion between two objects exceeds the maximum static friction, kinetic friction 'takes over'? Shouldn't static friction continue acting at its maximum value, in addition to the kinetic friction?


Essentially, if there is no relative motion between two objects, we have static friction trying to maintain that state, but if there is relative motion, then static friction just completely 'gives up'. Why is this?




general relativity - Why can't Newton's 1st law be expressed as an autoparallel transportation in space?


I'm following this series of lectures on differential geometry and general relativity. In the linked lecture (Lecture 9), at around 24:24, professor Frederic Schuller concludes that one cannot express Newton's 1st law as autoparallel transport in space, but one can in spacetime, i.e. there exists no $\Gamma$ such that the following equation is valid: $${-g^{\alpha}[x(t)]}~=~{{\Gamma}^{\alpha}_{{\beta}{\gamma}}[x(t)]{\dot{x}}^{\beta}(t){\dot{x}}^{\gamma}(t)}, \qquad \alpha=1,2,3.\tag{1}$$ Could someone explain to me why this is the case? An intuitive picture would be even better.




Friday 21 April 2017

energy - Which is more efficient, heating water in microwave or electric stove?


So our propane tank in the kitchen ran out again today.


Which is more energy efficient, boiling water in a microwave or on an electric stove? All things being equal, i.e. the same starting temperature and mass of water.


Not so much about which is faster, but which will cost us fewer kWh in general.


I realize boiling on the stove noticeably heats up the environment as well, and the kettle continues emitting warmth long after the power has been switched off. Does the kettle have a higher thermal capacity than the microwave-safe glass container (and therefore need to absorb more heat), or is that difference negligible with, say, 1 kg of water? I haven't been inside a microwave to feel its thermal capacity/overhead, though.


As far as the dominant conduction/convection/radiation heat-transfer mechanisms go, it seems fairly obvious in both cases.



Answer



Looks like Ron Maimon is right, and the efficiency is pretty much the same for a microwave oven and an electric stove. There are some results of an actual comparison for boiling a cup of water (the method does not look very accurate though, and the models used are old) at http://michaelbluejay.com/electricity/cooking.html : 0.087 kWh for a microwave oven vs. 0.095 kWh for an electric stove. Furthermore, energy used for cooking does not make a large part of your energy bills anyway (http://www.aceee.org/consumer/cooking : "If you don’t cook much, more efficient cooking appliances won’t save much energy!").
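For a rough sense of scale (my own estimate, with assumed numbers: a 250 mL cup of water heated from 20 °C to 100 °C), one can compare the ideal heating energy with the measured figures quoted above:

```python
# Ideal energy to heat a cup of water, compared with the quoted measurements.
# Assumptions: 0.25 kg of water, heated from 20 C to 100 C, c = 4186 J/(kg*K).
m = 0.25            # kg (assumed cup size)
c = 4186.0          # J/(kg*K), specific heat of water
dT = 100.0 - 20.0   # K (assumed starting temperature)

Q_kWh = m * c * dT / 3.6e6
print(f"ideal:     {Q_kWh:.3f} kWh")                          # about 0.023 kWh
print(f"microwave: 0.087 kWh  ->  ~{Q_kWh/0.087:.0%} efficient")
print(f"stove:     0.095 kWh  ->  ~{Q_kWh/0.095:.0%} efficient")
```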


Thursday 20 April 2017

thermodynamics - Entropy and Gibbs Free Energy


I've been struggling with the notions of entropy and Gibbs free energy for almost three days now. Different sources on and off the internet say different things about entropy.


Gibbs free energy is said to be both a measure of spontaneity and the maximum work extractable from a reaction, and somehow I am unable to reconcile the two ideas. While the laws of probability favour increases in entropy as a system can take on more microstates, what is the cause of the energetics associated with it? How can it be both a "measure of spontaneity" and "maximum work"?



Answer



You may have seen the reasoning to follow in most textbooks already but apparently it is not emphasized enough so I will say it again here.


The crucial starting point is the second law of thermodynamics that claims that the entropy change of the universe $\Delta S_\text{univ}$ is either zero or strictly positive for any physical change that occurs in it. I specify physical to stress that not all possible changes necessarily meet this constraint. In particular, when it comes to chemical reactions, they tend to happen in one way and not the other.


It would be fine to stick to this definition: a chemical reaction is physically favoured if it leads to an increase of the entropy of the universe.


However, it is not very convenient because most of the time we care about a particular system and not the universe as a whole.



It is therefore common to partition the universe in two parts: the system of interest and the environment.


We then apply the additivity property of entropy, which says that $\Delta S_\text{univ} = \Delta S_\text{sys} + \Delta S_\text{env} \geq 0$


This is the most general statement we can make although it is not yet very useful.


It is now time to become more specific about the conditions under which the change or the reaction or the transformation in the system will occur.


For chemical reactions, it is often the case that they are carried out at constant pressure $P$, temperature $T$ and mass $M$ (or amount of matter).


The key is then to express $\Delta S_\text{env}$ in terms of the system thermodynamic properties to end up with a closed condition to be satisfied by the system only to fulfil the second law for the entire universe.




  • Since the temperature is fixed it means that the environment acts as a thermostat and we can write, by definition, that $\Delta S_\text{env} = \frac{Q_\text{env}}{T}$ where $Q_\text{env}$ is the heat received by the environment during the change in the system. Then, since the only exchanges of heat occur between the environment and the system, it has to be the case that $Q_\text{env} = - Q_\text{sys}$ where $Q_\text{sys}$ is the heat received by the system during the transformation. We now apply the first law of thermodynamics that tells us that $\Delta U_\text{sys} = W_\text{sys} + Q_\text{sys}$ from which we deduce that $Q_\text{env} = W_\text{sys}-\Delta U_\text{sys}.$





  • We now use the fact that the pressure is constant during the process. If $\Delta V_\text{sys}$ is the change of volume of the system during the transformation then we can write $W_\text{sys} = -P\Delta V_\text{sys} = -\Delta (PV_\text{sys}).$




Putting all this together we get that


\begin{equation} \Delta S_\text{sys} + \frac{-\Delta (PV_\text{sys})-\Delta U_\text{sys}}{T} \geq 0 \end{equation}


Upon multiplying this last equation by $-T$ and using the fact that $T$ is constant during the reaction, we get that the second law of thermodynamics is satisfied (for the whole universe) iff the system satisfies:


\begin{equation} \Delta (U_\text{sys}+PV_\text{sys}-TS_\text{sys}) = \Delta G_\text{sys}(P,T) \leq 0 \end{equation}


That's for the spontaneity aspect.
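As a small illustration of the criterion just derived (my own sketch with made-up numbers; at constant $P$ one has $\Delta H = \Delta U + \Delta(PV)$, so $\Delta G = \Delta H - T\Delta S$), one can tabulate the sign of $\Delta G$ at a few temperatures:

```python
# Minimal sketch: spontaneity check Delta_G = Delta_H - T*Delta_S <= 0
# at constant T and P. The reaction values below are made up for illustration.
dH = 50_000.0    # J/mol      (endothermic, assumed)
dS = 150.0       # J/(mol*K)  (entropy-increasing, assumed)

for T in (200.0, 300.0, 333.3, 400.0):
    dG = dH - T * dS
    verdict = "spontaneous" if dG <= 0 else "not spontaneous"
    print(f"T = {T:6.1f} K   dG = {dG:9.1f} J/mol   {verdict}")
# The sign flips near T = dH/dS (about 333 K here): entropy wins at high T,
# enthalpy wins at low T.
```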


For the maximum work extractable aspect, it comes from the assumption above that the work is only due to the imposed pressure and changes in the system volume. You can write more generally $W_\text{sys} = W_\text{other} -P\Delta V_\text{sys}$.



By redoing the same type of calculation as before, you then end up with the following relation for the second law of thermodynamics to hold:


\begin{equation} \Delta G_\text{sys}(P,T) \leq W_\text{other} \end{equation}


In particular, this other work is also related to the work one must provide in order to reverse a chemical reaction.


resource recommendations - Books about musical acoustics



Apart from physics, I love music and I want to learn about musical acoustics.


I know some things, but the problem is that most musical acoustics books are written for musicians. That means there are no derivations, and the physical concepts are only briefly explained. You can usually find formulas for the vibrating string or tubes, frequencies of the notes, etc. But I know that already! On the other hand, general acoustics books don't include information about musical acoustics, or again only the basics.



I'm trying to find a musical acoustics book (or books, or articles) which fulfill:



  • Written for physicists. I want derivations of the formulas, even if they're only sketched.

  • Complete information about musical instruments, not only vibrating string / tube formulas. I also want information about vibrating membranes and metals, and the application of the formulas to particular cases (for example, the difference between violin and viola, or clarinet and saxophone).

  • Application of acoustics/waves concepts to music. For example, acoustic impedance, or diffraction.

  • Basic concepts about sonority.

  • Tuning systems. Differences in tuning systems between instruments.


Of course, I've studied waves in my undergraduate course. I know about solving the wave equation for a particular case with initial conditions, wave packets, Fourier analysis...




general relativity - Do the Earth and I apply the same gravitational force on each other in GR?


Our high-school teacher told us that the Earth pulls us with some force $F$ and we pull the Earth with the same force $F$. Within Newtonian physics this is true because of Newton's 3rd law, but let’s consider Einsteinian gravity. My mass is small; so I don’t warp space-time much. But Earth’s large mass warps space-time to a far greater extent.


So do I pull Earth with the same force it pulls me? If yes, how?



Answer



You both fall toward the common center of mass. Because the mass of the Earth is much larger than yours, the center of mass is very close to the center of the Earth, but rather far away from you. Thus, as you both fall toward the common center, the Earth hardly moves while you cover almost all of the distance until you hit the ground.


More specifically, you are talking about the two-body solution. Both bodies curve spacetime and move in this curved spacetime. As you justly stated, your contribution is small, and for this reason the Earth's movement toward you is very small as well.


However, when you interact with the Earth, the momentum you get equals the momentum the Earth gets. And yes, in the classical view, you attract the Earth with the same force as the Earth attracts you. While your gravity is very weak, the mass of the Earth attracted by it is enormous. Therefore the forces work out to be the same, as expected.
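A small numerical sketch of that last point, assuming (my choice, not from the answer) a 70 kg person standing on the Earth's surface: the mutual force is identical, but the resulting accelerations differ by the mass ratio.

```python
# Equal and opposite gravitational forces, very unequal accelerations.
# The 70 kg person is an assumed example mass.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m
m_person = 70.0      # kg (assumed)

F = G * M_earth * m_person / R_earth**2
print(f"force on each body:    {F:.1f} N")                    # about 687 N on both
print(f"person's acceleration: {F / m_person:.2f} m/s^2")     # about 9.8 m/s^2
print(f"Earth's acceleration:  {F / M_earth:.2e} m/s^2")      # about 1e-22 m/s^2
```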


Wednesday 19 April 2017

classical mechanics - What is the work done against a force?


Suppose a particle travels a path $\gamma : I\subset \mathbb{R}\to \mathbb{R}^3$ subject to a force $\mathbf{F}: \mathbb{R}^3\to T\mathbb{R}^3$, then we know that we define the work done by the force as


$$W=\int_\gamma \mathbf{F} = \int_I \langle \mathbf{F}\circ \gamma, \gamma'\rangle$$


Now I usually see the term "work done against a force" and I don't really understand what it means. The reason is that, in my understanding, work is always done by a force on a particle or system of particles. If we talk about work done against a force, it is work done by which force, on which particle or system?


Also, mathematically, how is it obtained? If we want to know not the work done by a force, but the work done against it, how do we obtain it? I imagine it is just the opposite, i.e. a change of sign, but I'm unsure, and if it really is just that, I can't grasp why it should be.





Understanding Stagnation point in pitot fluid

What is a stagnation point in fluid mechanics? At the open end of the pitot tube the velocity of the fluid becomes zero. But that should result...