Saturday, 29 February 2020

homework and exercises - Free fall in non-uniform field


Imagine I'm a space-diver, with mass $m_1 $, 500 miles above the Earth's surface at $x_i$. I want to calculate my position, velocity, and acceleration as a function of time, accounting for the Earth's non-uniform gravitational field, and neglecting air resistance. I've done some basic calculations, and am confused by the answer I get; it seems to imply no acceleration as a function of time, if I start off at rest. Purely Newtonian regime. Here, I imagine I'm falling purely along the x-axis:


Conservation of energy: (Earth mass $m_2$, diver mass $ m_1$)


$$ \frac{1}{2}m_1 \dot{x}^2 = \frac{G m_2 m_1}{x} $$


Taking the square root and integrating:


$$ \int_{x_i}^{x(t)} x^{1/2} dx = \int_0^t (2Gm_2)^{1/2} dt $$


This gives the solution


$$ x(t) = ( \frac{3}{2} (2Gm_2)^{1/2} t + x_i^{3/2} )^{2/3} $$



However, my problem with this is that it seems to imply that the velocity scales as


$$ \dot{x} \sim t^{-1/3} ,$$


but if I start off at rest at $t = 0$, it seems that my velocity will not increase but decrease with time? What am I doing wrong here? Is there something wrong with my assumption that I can simply place myself at $ x_i $ with zero velocity? One would expect the diver's velocity to increase as a function of time, as would the gravitational field (since it will be stronger as I get closer to the surface of the Earth).


This should be straightforward, but what am I missing?



Answer



Your mistake is in your conservation of energy equation. The way you wrote it is valid only when falling from rest at infinity. The correct starting point is $$dE=dK+dU=0,$$ that is, $$mv\,dv=-\frac{K}{x^2}dx,$$ where $K\equiv Gm_1m_2$ and $m\equiv m_1$. Integrating from $(x_i,v_i)$ to $(x,v)$ we get $$\frac 12m(v^2-v_i^2)=K\left(\frac{1}{x}-\frac{1}{x_i} \right).$$ This is the correct equation to start with. Now $$v=\frac{dx}{dt}=\pm\sqrt{v_{i}^2+\frac{2K}{m}\left(\frac{1}{x}-\frac{1}{x_i} \right)}.$$ Assuming $v_i=0$ and integrating again from $(t=0,x_i)$ to $(t,x)$ we obtain $$t=-\int_{x_i}^{x(t)}\frac{dx}{\sqrt{\frac{2K}{m}\left(\frac{1}{x}-\frac{1}{x_i} \right)}},$$ where the minus sign appears because the axis is oriented upwards, so the diver falls toward decreasing $x$. To solve this integral, use the substitution $x=x_i\sin^2{\theta}$: $$t=-\sqrt{\frac{2mx_i^3}{K}}\int_{\frac{\pi}{2}}^{\theta(x)}\sin^2{\theta}\,d\theta,$$ where $\theta(x)=\arcsin{\sqrt{\frac{x}{x_i}}}$. Therefore, $$t=\sqrt{\frac{mx_i^3}{2K}}\left[\frac{\pi}{2}-\arcsin{\sqrt{\frac{x}{x_i}}}+\frac 12 \sin\left(2\arcsin{\sqrt{\frac{x}{x_i}}}\right)\right].$$ (Note the plus sign in front of the $\sin$ term: evaluating $\int\sin^2\theta\,d\theta = \frac{\theta}{2}-\frac{\sin 2\theta}{4}$ between the limits, with the overall minus sign, makes that contribution positive.) However, this equation cannot be solved for $x$ in closed form.
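As a sanity check, one can integrate the equation of motion $\ddot{x} = -Gm_2/x^2$ numerically and compare the fall time with the closed-form $t(x)$ above. A minimal sketch, assuming Earth values and the 500-mile starting altitude (all parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

G, m2 = 6.674e-11, 5.972e24           # SI units; m2 = Earth mass (assumed)
R_earth = 6.371e6
x_i = R_earth + 500 * 1609.34         # 500 miles above the surface

def rhs(t, y):
    x, v = y
    return [v, -G * m2 / x**2]        # Newtonian gravity along the x-axis

def t_of_x(x):
    """Closed-form fall time from rest at x_i down to x, with K/m = G*m2."""
    th = np.arcsin(np.sqrt(x / x_i))
    return np.sqrt(x_i**3 / (2 * G * m2)) * (np.pi / 2 - th + 0.5 * np.sin(2 * th))

hit_surface = lambda t, y: y[0] - R_earth   # stop when the diver reaches the ground
hit_surface.terminal = True

sol = solve_ivp(rhs, [0, 1e4], [x_i, 0.0], events=hit_surface, rtol=1e-10)
print(sol.t_events[0][0])   # numerical fall time, ~450 s
print(t_of_x(R_earth))      # closed-form prediction: should agree
```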


energy - Bass and Treble - Car Stereos




In a car, which phenomenon, diffraction or the resonant frequency of the car, lends itself more to the ability of bass to travel farther?


Related Answer: Why do bass tones travel through walls?




astrophysics - Evidence for black hole event horizons


I know that there's a lot of evidence for extremely compact bodies. But is there any observation from which we can infer the existence of an actual horizon?


Even if we are able to someday resolve directly the "shadow" of a possible black hole, isn't it still possible that it's just a highly compact body that's just gravitationally redshifting any radiation that it emits to beyond the detection capability of our telescopes? What would be indisputable evidence for a causal horizon?



Answer



At the galactic center, there is an object called Sagittarius A* which seems to be a black hole with 4 million solar masses. In 1998, a wise instructor at Rutgers made me make a presentation of this paper


http://arxiv.org/abs/astro-ph/9706112


by Narayan et al. that presented a successful 2-temperature plasma model for the region surrounding the object. The paper has over 300 citations today. The convincing agreement of the model with the X-ray observations is a strong piece of evidence that Sgr A* is a black hole with an event horizon.


In particular, even if you neglect the predictions for the X-rays, the object has an enormously low luminosity for its tremendously high accretion rate. The advecting energy is pretty "visibly" disappearing from sight. If the object had a surface, the surface would heat up and emit thermal radiation, at a radiative efficiency of 10 percent or so, which is pretty canonical.


Of course, you may be dissatisfied by their observation of the event horizon as a "deficit of something". You may prefer an "excess". However, the very point of the black hole is that it eats a lot but gives up very little, so it's sensible to expect that the observations of black holes will be via deficits. ;-)


fluid dynamics - Lift and drag coefficients on other planets


The question I'm trying to answer seemed simple: how hard would it be to fly on a planet with lower gravity but also a thinner atmosphere compared to Earth? Ideally, the answer would hint at how different an airplane designed to fly there would look.


I know the atmospheric pressure, atmospheric composition (and hence molar mass) and temperature at the surface of the hypothetical planet. However, I have a problem with determining the lift and drag coefficients. The NASA site says these coefficients depend on the viscosity and compressibility of the air, the form of the aircraft, and the angle of attack. My first thought was to separate the part of the coefficients that depends on the aircraft from the atmospheric parameters. However, I have trouble finding a formula for it. This page says that under certain conditions the lift coefficient is $$ C_l=2\pi \alpha $$ But I'm not sure if this approximation holds for a different atmosphere.


L/D Ratio and Mars Aircraft may be relevant.



Also, can I assume that lift to drag, or maximum lift to drag, is the same in any atmosphere, and if so under what conditions?



Answer



Physics should not be different on other planets, so the same laws apply as on earth. Only the results of an optimization might look unfamiliar. See here for an answer on Aviation SE on a Mars solar aircraft.


The lift slope equation you found is only valid for slender bodies, like fuselages and fuel tanks, and once wing span becomes a sizable fraction of length, more complicated equations will be needed, and Mach number effects must be considered, too. See here for a more elaborate answer.


Generally, flying like on Earth would mean that the ratio of dynamic pressure to mass is the same. Then you could use the same aircraft as on Earth (provided the other planet's atmosphere contains enough, but not too much, oxygen for the engine to function).


Dynamic pressure $q$ is half the product of the air density $\rho$ and the square of the airspeed $v$: $$q = \frac{\rho\cdot v^2}{2}$$


Lift is dynamic pressure times wing area $S$ and lift coefficient $c_L$ and must be equal to weight, that is the product of mass $m$ and the local gravitational acceleration $g$: $$L = q\cdot S\cdot c_L = m\cdot g$$
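As a rough illustration, this lift equation can be solved for the airspeed needed to stay aloft, $v = \sqrt{2mg/(\rho S c_L)}$. A minimal sketch; the aircraft mass, wing area, and the Mars-like density and gravity below are illustrative assumptions:

```python
def required_speed(mass, g, rho, S, cL):
    """Airspeed at which lift balances weight: v = sqrt(2 m g / (rho S cL))."""
    return (2 * mass * g / (rho * S * cL)) ** 0.5

m, S, cL = 1000.0, 30.0, 1.2                  # kg, m^2, and a typical cL
print(required_speed(m, 9.81, 1.225, S, cL))  # Earth sea level: ~21 m/s
print(required_speed(m, 3.71, 0.020, S, cL))  # Mars-like values: ~100 m/s
```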


The lift coefficient is a measure of how much lift can be created by a given wing area, and it can reach values of up to 3 in the case of a landing airliner. Then the wing uses all kinds of high-lift devices (slats, slotted flaps), and once those are put away, the lift coefficient of an airliner is at about 0.5. For observation aircraft, less speed is required, and a normal lift coefficient for them would be 1.2. I see no reason why this number should be different just because the atmosphere is different.


The most important number would be the Reynolds number $Re$. It is the ratio of inertial to viscous forces in a flow and is affected by the dimensions of your plane (on earth we use the wing chord $l$) and the density and dynamic viscosity $\mu$ of your planet's atmosphere. $$Re = \frac{v\cdot\rho\cdot l}{\mu}$$
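A companion sketch for the Reynolds number; the Mars-like density and the CO2 viscosity of about $1.1\times10^{-5}$ Pa s are assumed illustrative values:

```python
def reynolds(v, rho, chord, mu):
    """Ratio of inertial to viscous forces for a wing of the given chord."""
    return v * rho * chord / mu

print(reynolds(30, 1.225, 1.0, 1.8e-5))   # Earth glider: ~2,000,000
print(reynolds(100, 0.020, 1.0, 1.1e-5))  # Mars-like: ~180,000 -> enlarge the chord
```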


Lower Reynolds numbers will translate into higher friction drag, which depresses the maximum achievable lift-to-drag ratio. Gliders fly at Reynolds numbers between 1,000,000 and 3,000,000 and airliners can easily achieve 50,000,000. When you need to optimize for a more gooey atmosphere, your wings will become less slender than on earth, because you will enlarge wing chord $l$ to keep $Re$ up.



Once you need speed to get the weight lifted, the Mach number $Ma$ might become important. Generally, subsonic flight is the most efficient, and it has a natural limit at $Ma^2 \cdot c_L = 0.4$; this is what can be achieved with today's technology. The speed of sound in a gas is mainly a function of temperature: Mach 1 on Mars is about 238 m/s.


The first parts of an airplane to hit a Mach limit are the propeller tips. Maybe you will need several small, slow-spinning propellers rather than one big, honking propeller, which would otherwise provide the best efficiency as long as its tips stay well below Mach 1.


Last, you need to know the number of atoms per gas molecule. Air is dominated by diatomic molecules, but maybe your planet has an atmosphere like early Earth's, with lots of carbon dioxide. This will affect the ratio of specific heats $\kappa$, which means the rate of heating and cooling with compression and expansion of the gas might be different than on Earth. This comes into play when you approach or exceed Mach 1.
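To illustrate how the composition enters, the ideal-gas speed of sound is $a = \sqrt{\kappa R T / M}$ with molar mass $M$. A small sketch; the temperatures and $\kappa$ values below are assumed for illustration:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def speed_of_sound(kappa, molar_mass, T):
    return math.sqrt(kappa * R * T / molar_mass)

print(speed_of_sound(1.40, 0.0290, 288))  # diatomic air at 288 K: ~340 m/s
print(speed_of_sound(1.29, 0.0440, 230))  # CO2 at a Mars-like 230 K: ~237 m/s
```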


electricity - Power dissipation in High Voltage Cables


I was doing the following problem in physics class:



You have two dimensionally identical pieces of metal, one made from aluminium, the other made from iron. We are told that aluminium has a lower resistivity than iron. Which metal glows first when they are connected in parallel to a battery? What about if they are connected in series?



If they are connected in parallel, the voltage across them is the same, but by Ohm's law the current going through the aluminium is higher. Therefore the aluminium glows first.



When they are connected in series the current through them is the same, so I thought they should glow at the same time. But then I remembered that the power dissipated across a resistor is $P=VI$, and the voltage drop across the iron is greater - so the iron glows first.


Then I tried to recall the source of the first reasoning - that heat losses depend on the current and not the voltage. And this is the point where the high voltage transmission lines come in.


I was taught that we transmit electricity at a high voltage (and then transform it down for home usage) to allow for a lower current and therefore less power dissipated. But now that I think about it, increasing the voltage to decrease the current shouldn't help: what we save with a lower current we lose to a higher voltage, according to $P=VI$.


What is going on here? Can someone please explain?



Answer



But recall that power dissipated


$P= VI$


is also, from Ohm's law, expressible as


$P = I^2 R$


So the dependency of power dissipated is linear in voltage, but quadratic in current, given the same resistance.



Also remember that the voltage supplied by the power station and the voltage drop across the transmission line - which is what matters for the power loss - are not the same voltage. The former is considerably larger than the latter.


To see why, consider supplying a fixed amount of power at the end of a transmission line with a supply voltage $V_s$ and supply current $I$.


You would use the first equation, $P= V_s I$, to compute that power.


But the voltage drop across the transmission wire is $V_{drop} = IR$, which is less than the supply voltage. They are different quantities. The power dissipation is only quadratic in $V_{drop}$, not $V_s$.
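A worked example may make this concrete; the delivered power, supply voltages, and line resistance below are made-up illustrative numbers:

```python
def line_loss(power, v_supply, r_line):
    """Power dissipated in the line itself when delivering `power` at `v_supply`."""
    current = power / v_supply   # I = P / V_s
    return current**2 * r_line   # loss is quadratic in the current

P, R_line = 10e6, 5.0            # 10 MW delivered through a 5 ohm line
print(line_loss(P, 10e3, R_line))   # at 10 kV:  I = 1000 A -> 5 MW lost
print(line_loss(P, 100e3, R_line))  # at 100 kV: I = 100 A  -> 50 kW lost
```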


Of course, if we are talking about transmission lines, the above is a vast oversimplification, since transmission lines carry AC power. A short enough transmission line can be modeled as a resistance and inductance in series. For longer transmission lines, capacitive effects come into play. But the qualitative picture of the current, rather than the voltage, dominating the loss still holds.


There is a report from Purdue University at this link that covers transmission line power loss in considerably more detail than there is room for here.


quantum mechanics - Hilbert space of a free particle: Countable or Uncountable?


This is obviously a follow on question to the Phys.SE post Hilbert space of harmonic oscillator: Countable vs uncountable?


So I thought that the Hilbert space of a bound electron is countable, but the Hilbert space of a free electron is uncountable. But the arguments about smoothness and delta functions in the answers to the previous question convince me otherwise. Why is the Hilbert space of a free particle not also countable?



Answer



The Hilbert dimension of the Hilbert space of a free particle is countable. To see this, note that




  1. The Hilbert space of a free particle in three dimensions is $L^2(\mathbb{R}^3)$.





  2. An orthonormal basis of a Hilbert space $\mathcal H$ is any orthonormal subset $B\subseteq \mathcal H$ whose span is dense in $\mathcal H$.




  3. All orthonormal bases of a given non-empty Hilbert space have the same cardinality, and the cardinality of any such basis is called the Hilbert dimension of the space.




  4. The Hilbert space $L^2(\mathbb R^3)$ is separable; it admits a countable, orthonormal basis. Therefore, by the definition of the Hilbert dimension of a Hilbert space, it has countable dimension.
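For concreteness, an explicit countable orthonormal basis of $L^2(\mathbb{R})$ is given by the Hermite functions (the harmonic oscillator eigenfunctions), $$h_n(x) = \frac{1}{\sqrt{2^n\, n!\, \sqrt{\pi}}}\, H_n(x)\, e^{-x^2/2}, \qquad n = 0, 1, 2, \dots$$ and the triple products $h_a(x)\,h_b(y)\,h_c(z)$ with $(a,b,c)\in\mathbb{N}^3$, a countable index set, then form a countable orthonormal basis of $L^2(\mathbb{R}^3)$.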




Addendum. 2014-10-19



There is another notion of basis that is usually not being referred to when one discusses Hilbert spaces, namely a Hamel basis (aka algebraic basis). There is a corresponding theorem called the dimension theorem which says that all Hamel bases of a vector space have the same cardinality, and the dimension of the vector space is then defined as the cardinality of any Hamel basis.


One can show that every Hamel basis of an infinite-dimensional Hilbert space is uncountable.


As a result, the dimension (in the sense of Hamel bases) of the free particle Hilbert space is uncountable, but again, this is not usually the sense in which one is using the term dimension in this context, especially in physics.


relative motion - How can you accelerate without moving?


I know this question has been asked in other forms, generally regarding the balance of forces. This time I want to focus on motion. I've got a laser accelerometer on my desk. It tells me that I'm accelerating at $9.8~\rm m/s^2$. For the first experiment I'm travelling in space. I pick a nearby star and discover that I move about $490$ meters in $10$ seconds from that star. For the next experiment I'm on the surface of Earth. I measure the same acceleration with my laser accelerometer. I pick a spot (the center of the Earth) and discover I don't move at all in $10$ seconds. How is acceleration without motion possible?



Answer



In relativity (both flavours) we consider trajectories in four dimensional spacetime, and acceleration is a four-vector not a three-vector as in Newtonian mechanics. We call this four-acceleration while the Newtonian acceleration is normally referred to as coordinate acceleration.


Suppose we pick some coordinate system $(t,x,y,z)$ and measure the trajectory of some observer in these coordinates. The way we usually do this is to express the value of the coordinates as a function of the proper time of the observer, $\tau$. That is, the position is given by the functions $\left(t(\tau), x(\tau), y(\tau), z(\tau)\right)$. The proper time $\tau$ is just the time recorded by a clock travelling with the observer, so we are describing the trajectory by how the position in our coordinates changes with the observer's time.


If we start by considering special relativity, i.e. flat spacetime, then the four-velocity and four-acceleration are calculated by differentiating once and twice respectively wrt time, just like in Newtonian mechanics. However we differentiate wrt the proper time $\tau$. So the four-velocity $U$ and four-acceleration $A$ are:


$$ \mathbf U = \left( \frac{dt}{d\tau}, \frac{dx}{d\tau}, \frac{dy}{d\tau}, \frac{dz}{d\tau} \right) $$


$$ \mathbf A = \left( \frac{d^2t}{d\tau^2}, \frac{d^2x}{d\tau^2}, \frac{d^2y}{d\tau^2}, \frac{d^2z}{d\tau^2} \right) $$


The four acceleration defined in this way is coordinate independent, and it behaves in a very similar way to Newtonian acceleration. For example we can (though we usually don't) write a relativistic equivalent of Newton's second law:


$$ \mathbf F = m \mathbf A $$



where $\mathbf F$ is the four-force.


To complete the comparison with Newtonian mechanics we can choose our $(t,x,y,z)$ to be the coordinates in which the accelerating observer is momentarily at rest, and in these coordinates the four-acceleration becomes the proper acceleration, which is just the acceleration felt by the observer. Let me emphasise this because we'll use it later:



the four-acceleration is equal to the acceleration felt by the observer in their rest frame.



Anyhow, this is all in flat spacetime, and in flat spacetime a non-zero four-acceleration means that in every inertial frame the position of the observer is changing with time. This ties up with the first part of your paragraph where you're talking about your position relative to a star changing with time. However in general relativity the expression for the four-acceleration has to include effects due to the curvature, and it becomes:


$$ A^\alpha = \frac{d^2x^\alpha}{d\tau^2} + \Gamma^\alpha_{\,\,\mu\nu}U^\mu U^\nu \tag{1} $$


I've written this using Einstein notation as it's rather long to write out otherwise. The index $\alpha$ is zero for $t$, one for $x$, two for $y$ and three for $z$. The new parameters $\Gamma^\alpha_{\,\,\mu\nu}$ in the equation are the Christoffel symbols that describe how the spacetime is curved.


The difference from flat spacetime is that now we can have a (locally) inertial frame, where the spatial coordinates are not changing with time, and we can still have a non-zero four-acceleration. That is even if $x$, $y$ and $z$ are constant, so $d^2x/d\tau^2$ etc are zero, the contribution from the Christoffel symbols means the four-acceleration $\mathbf A$ can still be non-zero.


And in general relativity it's still true that the four acceleration is the same as the acceleration felt by the observer in their rest frame, and this is the link to the second part of your question. Because of the curvature you can be (spatially) at rest on the surface of the Earth with respect to the distant star but still have a non-zero four-acceleration. But remember that above we said:




the four-acceleration is equal to the acceleration felt by the observer in their rest frame.



That means even though you are at rest in your coordinates your non-zero four-acceleration means you still feel an acceleration. That acceleration is of course just what we call gravity.
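One can make this quantitative in the Schwarzschild geometry, where the magnitude of the four-acceleration of a hovering (static) observer works out to $a = \frac{GM}{r^2}\left(1 - \frac{2GM}{rc^2}\right)^{-1/2}$, reducing to the Newtonian $GM/r^2$ far from the horizon. A minimal numerical sketch with standard Earth values:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def static_proper_acceleration(M, r):
    """|A| for a static observer: (GM/r^2) / sqrt(1 - 2GM/(r c^2))."""
    rs = 2 * G * M / c**2               # Schwarzschild radius
    return G * M / r**2 / math.sqrt(1 - rs / r)

M_earth, R_earth = 5.972e24, 6.371e6
# The relativistic correction factor is only ~1 + 7e-10 here, so this
# reproduces the familiar 9.8 m/s^2 felt while "at rest" on the surface.
print(static_proper_acceleration(M_earth, R_earth))
```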


Response to comment: Moving in a straight line


The obvious way to define motion in a straight line is to say that the acceleration is zero. In Newtonian mechanics this is just Newton's first law, where the acceleration is the coordinate acceleration $\mathbf a$. Likewise in relativity (both flavours) a straight line means the four-acceleration $\mathbf A$, defined by equation (1) above, is zero. Looking at equation (1), the only way for $\mathbf A$ to be zero is if the $d^2x^\alpha/d\tau^2$ term exactly balances out the Christoffel symbol term, i.e.


$$ \frac{d^2x^\alpha}{d\tau^2} = -\Gamma^\alpha_{\,\,\mu\nu}U^\mu U^\nu \tag{2} $$


This equation is called the geodesic equation, and it describes the trajectory of a freely falling particle in a curved spacetime. That is, it is the equation for a straight line in curved spacetime or more formally a geodesic.


Actually solving the geodesic equation is usually hard (like most things in GR) but for an overview of how this equation describes things falling in Earth's gravity see How does "curved space" explain gravitational attraction?.


Footnote: The elevator, the rocket, and gravity: the equivalence principle



The above discussion provides a nice way to understand the elevator/rocket description of the equivalence principle. See this article for a full discussion, but in brief suppose you are inside a lift with the doors closed so you can't see out. You can feel a force pulling you down with an acceleration of $1$g, but you can't tell if the lift is stationary on the Earth and you're feeling gravity, or if you're in outer space and the lift has been attached to a rocket accelerating at $1$g.


To see why this is we take equation (1) and rewrite it as:


$$ \mathbf A = \mathbf A_\text{SR} + \mathbf A_\text{GR} \tag{3} $$


where $\mathbf A_\text{SR}$ is the term we get from special relativity, $d^2x^\alpha/d\tau^2$, and $\mathbf A_\text{GR}$ is the term we get from general relativity, $\Gamma^\alpha_{\,\,\mu\nu}U^\mu U^\nu$.


But all you can measure is $\mathbf A$. Remember that $\mathbf A$ is equal to the acceleration in your rest frame, so if you have a set of scales in the lift you can measure your weight, divide by your mass, and you get your proper acceleration $\mathbf A$.


The point is that although you can experimentally measure the left side of equation (3) the equivalence principle tells us that you can't tell what is on the right hand side. If the elevator is blasting through space on a rocket $\mathbf A_\text{GR}$ is zero and all your acceleration comes from $\mathbf A_\text{SR}$. Alternatively if the elevator is stationary on Earth $\mathbf A_\text{SR}$ is zero and your acceleration comes from the $\mathbf A_\text{GR}$ term. The equivalence principle tells us that there is no way for you to tell the difference.


Friday, 28 February 2020

Does orbital angular momentum have no meaning for single photons?




  1. In the quantization of the free electromagnetic field, it is found that left-circularly polarised photons correspond to helicity $\vec{S}\cdot\hat p=+\hbar$ and right-circularly polarised photons to $\vec{S}\cdot\hat p=-\hbar$. They correspond respectively to the states $$a^{\dagger}_{\vec k,+}|0\rangle, \hspace{0.2cm}\text{and}\hspace{0.2cm} a^{\dagger}_{\vec k,-}|0\rangle$$ where $$a^{\dagger}_{\vec k,\pm}=\frac{1}{\sqrt{2}}(a^{\dagger}_{\vec k,1}\pm a^{\dagger}_{\vec k,2}).$$ This little calculation is performed in the QFT book by Maggiore by looking at the action of the spin operator $S^{ij}$ on these states. But nothing is mentioned about the orbital angular momentum of individual photons. My question is whether individual photons also carry orbital angular momentum. If yes, what are its values in one-particle states? Can a superposition of two photons have orbital angular momentum? If yes, how does one determine its possible values?




  2. In Classical Electrodynamics (Ref. J. D. Jackson, 3rd edition, page 350) or classical field theory, the angular momentum of the electromagnetic field is defined as $$\vec J=\epsilon_0\int d^3x\, \vec x\times (\vec E\times \vec B)$$ which can be reduced to the form $$\vec J=\epsilon_0\int d^3x \left[\vec E\times \vec A+\sum\limits_{j=1}^{3}E_j(\vec x\times \vec\nabla)A_j\right].$$ The first term can be identified with the spin contribution to the angular momentum of the field, which has its origin in the spin angular momentum of individual photons. The second term is identified with the orbital angular momentum of the field. Is there a quantum mechanical origin to this orbital angular momentum?




  3. If there is no meaning to the orbital angular momentum of individual photons, is it a property that emerges only when a collection of photons builds up a classical field?







electromagnetism - What is the magnetic field inside hollow ball of magnets?


Setup: we have a large number of thin magnets shaped such that we can place them side by side and eventually form a hollow ball. The ball we construct will have the north poles of all of the magnets pointing toward the center of the ball, and the south poles pointing away from the center. The magnets in this case are physically formed such that in this hollow ball arrangement they are space filling and there are no gaps between them.


Is such a construction possible? If so, what is the magnetic field (B-field) inside and outside the ball?



Answer



This is interesting. You would definitely have to 'nail down' the magnets to the sphere, because it will be an unstable configuration. Also in the real world, edge-effects will destroy any chance of perfect radial field lines, so let's assume we're in an ideal scenario.


Outside the sphere, the magnetic field would have to be that of a source monopole placed at the sphere's center. But we need $\nabla\cdot \vec B=0$, so as a result there can be no B-field on the outside.
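To make the monopole argument explicit: for any sphere $S$ of radius $r$ enclosing the ball, a purely radial exterior field of magnitude $B_r(r)$ would have flux $$\oint_S \vec B\cdot d\vec A = 4\pi r^2 B_r,$$ but $\nabla\cdot \vec B=0$ forces the flux through any closed surface to vanish, so $B_r = 0$ everywhere outside.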



Inside the sphere, there is nowhere for the magnetic field lines to end, especially when they all point towards the center... In fact, such a magnetic field would have divergence less than zero (the center of the sphere being a 'sink'), and this is a property that magnetic fields cannot have (since $\nabla\cdot \vec B=0$). As a result, my answer is that there is no $B$-field on the inside either.


The real reason the B-field must have zero divergence: If there are no physical source monopoles in the vicinity, then any configuration is made of dipoles, and there is no way mathematically (I think) for a collection of dipoles to produce a monopole.


Angular momentum of a rotating black hole



Is there an upper limit to the angular momentum of a rotating (Kerr) black hole?




particle physics - Inverse beta decay; energy of anti-neutrino


Assuming that the target protons are at rest, calculate the minimum energy of the anti-neutrino for this reaction to take place: $$\bar{\nu}_e+p\rightarrow e^++n$$


I know the answer is given by $E_{\bar{\nu}}=m_{e^+}+m_n-m_p$, but I can't see why this is the case. How can this conserve momentum? Which frame of reference was this calculated in? It seems that we are assuming the momenta of the positron and neutron are zero, so that their energy is their rest mass. But if the protons are at rest, how can this conserve momentum? I have tried to do the problem in other ways, using the invariant mass and calculating it in different reference frames, but I always end up with some variable I don't know.



Answer



The solution you quote actually doesn't conserve momentum. You can use that $p^2$ is a Lorentz invariant and solve $(p_\nu+p_p)^2=(p_e+p_n)^2$, considering the left hand side in the lab frame and the right hand side in the CM frame. You can find $E_\nu$ and check that now momentum is conserved. Anyway, as mentioned in the previous comment, $E_\nu=m_e+m_n-m_p$ is a very good approximation.
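For a numeric check (a sketch using standard particle masses in MeV): with the proton at rest and a massless neutrino, $(p_\nu+p_p)^2 = m_p^2 + 2E_\nu m_p$ in the lab frame, and at threshold this equals $(m_e+m_n)^2$, so $E_\nu^{min} = \frac{(m_e+m_n)^2-m_p^2}{2m_p}$:

```python
# Exact threshold energy vs the quoted approximation E_nu = m_e + m_n - m_p.
m_p, m_n, m_e = 938.272, 939.565, 0.511   # MeV

E_exact = ((m_e + m_n)**2 - m_p**2) / (2 * m_p)   # from the invariant p^2
E_approx = m_e + m_n - m_p

print(E_exact)    # ~1.806 MeV
print(E_approx)   # ~1.804 MeV -- indeed a very good approximation
```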


newtonian mechanics - Rigid body dynamics derivation from Newton's laws for higher dimensions


Since Newton's laws are defined for point particles, I'd like to derive some laws of motion for rigid bodies only by considering a rigid body as a system of particles such that the distances from every particle to every other particle don't change with time. I think I have derived, in one dimension, that the force applied to one particle of a rigid body must be the same for every other particle of the rigid body, by the following:


Consider two particles on a line $P_1$ and $P_2$ both with masses $dm$ and positions $x_1$ and $x_2$. Let's say that a force $F_1$ acts on the particle $P_1$. By Newton's second law we get: $$F_1 = dm\frac{d^2x_1}{dt^2}$$ By the definition of a rigid body, the distance between $P_1$ and $P_2$ doesn't change with time. Define $r$ as this distance ie. $r = x_1 - x_2$. Therefore: $$\frac{dr}{dt} = 0$$ Taking the derivative of both sides we further get that $$\frac{d^2r}{dt^2} = 0$$ $$\frac{d^2(x_1 - x_2)}{dt^2} = 0$$ $$\frac{d^2x_1}{dt^2} = \frac{d^2x_2}{dt^2}$$ By Newton's second law this is the same as: $$\frac{F_1}{dm} = \frac{F_2}{dm}$$ (where $F_2$ is the force acting on $P_2$), and since $dm \ne 0$ finally: $$F_1 = F_2$$


These steps can be done for arbitrary amount of particles, and so we get that in one dimension, if a force is applied on one of the particle of a rigid body, every other particle of the rigid body experiences the same force.



The problem is that I cannot do a similar proof for two dimensions by defining the distance $r = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$, but I'm sure it can be done, and that in doing so torque, moment of inertia, and center of mass would arise. Can someone do a similar proof for two dimensions, if it can be done like this at all?




Thursday, 27 February 2020

astrophysics - Can the Sun / Earth have a dark matter core?


If dark matter interacts with ordinary matter at all, it should most likely occur where ordinary matter is densest. Hence we have papers about neutron stars possibly containing dark matter cores (example).


But if neutron stars can have dark matter cores, then white dwarfs, the Sun, or even the Earth could have dark matter cores too - they're just less likely to have them. If the Sun or Earth does have such a core, dark matter would be much easier to study, since they're so nearby. Is there any observational evidence that the Sun / Earth is 100% ordinary matter? If not, what is the observational limit on the Sun's / Earth's dark matter fraction? I've seen popular-level articles (example) about such theories, but they're all rather speculative.



Answer



The easiest way for dark matter to become trapped inside another object is if it interacts and loses some kinetic energy. Otherwise it would just gain kinetic energy as it fell into a gravitational potential and then shoot out the other side. To be clear - this answer assumes that the "non-ordinary" dark matter that the question refers to is non-baryonic dark matter consisting of particles, as-yet unknown.


In order to interact we have to suppose some weak interaction of these particles is possible and if so this is going to be most effective when there is a large cross-section and interaction probability.


If the mass of an object is $M$ and its radius is $R$ and if the interaction cross-section is $\sigma$, then the following is illustrative.



The number of nucleons is $\sim M/m_u$. The number density of nucleons is $$n \sim \frac{3M}{4\pi R^3 m_u}.$$ The mean free path is $(n\sigma)^{-1}$ and the probability of interaction for a dark matter particle passing through the object will be $$ p \sim 1 - \exp(-2n\sigma R) \sim 2n\sigma R$$ $$ p \sim \frac{3 M\sigma}{2\pi R^2 m_u}$$


So for the Earth, putting in some estimates for $M$ and $R$, we have $p \sim 4 \times 10^{37} \sigma$; for the Sun we have $p \sim 10^{39} \sigma$; and for a neutron star with $M = 1.5M_{\odot}$ and $R=10$ km, we have $p \sim 10^{49}\sigma$.
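These order-of-magnitude numbers are easy to reproduce (a sketch with round values in SI units):

```python
# p / sigma ~ 3 M / (2 pi R^2 m_u), in units of 1/m^2.
import math

m_u = 1.66e-27  # atomic mass unit, kg

def p_over_sigma(M, R):
    return 3 * M / (2 * math.pi * R**2 * m_u)

print(p_over_sigma(5.97e24, 6.37e6))     # Earth:        ~4e37
print(p_over_sigma(1.99e30, 6.96e8))     # Sun:          ~1e39
print(p_over_sigma(1.5 * 1.99e30, 1e4))  # neutron star: ~1e49
```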


Of course interaction alone is insufficient: the dark matter particle needs to lose energy, and there are also considerations of gravitational focusing, the incoming energy spectrum (including the rest-mass of the particles), and the density of the dark matter and the rate at which it might "build up". However, whatever $\sigma$ is (and we know it is small), a dark matter particle is 10 orders of magnitude more likely to interact and get captured inside a neutron star than inside the Sun (or Earth). I suppose therefore the argument is that if there were any dark matter captured inside the Earth or Sun, then neutron stars must be full of the stuff. It makes sense therefore to search for evidence of dark matter inside neutron stars.


Dark matter behaves gravitationally in the same way as ordinary matter, but it does not have the same equation of state as ordinary matter. There would therefore be structural differences (a different mass-radius relation) and also differences in the cooling rates of neutron stars (see this popular article for example). Given that we do not know what is at the core of a neutron star, we don't know that there is no dark matter there. I will try to ascertain what observational limits do exist.


There have been various theoretical studies that show how the capture of dark matter might affect the structure of the Sun (e.g. Cumberbatch et al. 2010). Capture of dark matter lowers the core temperature and could have a potential effect on helioseismology results and the neutrino flux (especially at particular energies --Lopes & Silk 2010; Garani & Palomares-Ruiz 2017). No such effects have been unambiguously detected.


There also could be a neutrino signature from dark matter self-annihilation and this is a possible route to detecting dark matter trapped inside the Earth. Upper limits have been found from the ICECUBE experiment that are of course consistent with there being no dark matter there, but also consistent with the presence of dark matter with small self-interaction cross-sections (e.g. Kunnen 2015; Aartsen et al. 2017).


newtonian mechanics - If something that is moving at constant velocity has no net force acting on it, how come it is able to move other objects?


Let's say 10 kg block is sliding on a frictionless surface at a constant velocity, thus its acceleration is 0.



According to Newton's second law of motion, the force acting on the block is 0:


$a = 0$


$F = ma$


$F=0$


Now let's say that block slides into a motionless block on the same surface; the motionless block would move.


Wouldn't the first block need force to be able to move the initially motionless block? I understand that it has energy due to its constant velocity, but wouldn't it be its force that causes the displacement?



Answer



Here's a slightly different but equivalent way to think about it.


Forces describe interactions between two objects. If two objects are interacting, they exert forces on each other. If two objects are not interacting, they do not exert forces on each other. Thus, an object doesn't "carry around" a force with it. A force is not a property of an object, just as dmckee explains. Instead, we describe interactions between two objects using the more-abstract concept of force.


In your block-hits-other-block scenario, it's tempting to ask where the force came from if the colliding object had $F_\text{net}=0$. But when forces are viewed as interactions, it becomes more apparent that the force didn't come from anywhere within one of the objects. There simply wasn't an interaction before they collided, so we wouldn't ascribe the existence of a force.



quantum mechanics - Uniqueness of the probability function for the Schrödinger equation


David Bohm in Section (4.5) of his wonderful monograph Quantum Theory after defining the usual density probability function $P(x,t)=\psi^{*} \psi$ for the Schrödinger equation for the free particle in one dimension: \begin{equation} i \hbar \frac{\partial \psi}{\partial t}= - \frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2}, \end{equation} states that $P(x,t)$ is the unique function of $\psi(x,t)$ and the partial derivatives of $\psi$ with respect to $x$ all computed in $(x,t)$ which satisfies the following properties:




  1. P is never negative;





  2. the probability is large when $|\psi|$ is large and small when $|\psi|$ is small;




  3. the significance of $P$ does not depend in a critical way on any quantity which is known on general physical grounds to be irrelevant: in particular this implies (since we are dealing with a nonrelativistic theory) that $P$ must not depend on where the zero of energy is chosen;




  4. $\int P(x,t) dx$ is conserved over time, so that by eventually normalizing $P$ we can choose $\int P(x,t) dx=1$ for all $t$.





Bohm gives no mathematical argument at all and actually the statement seems completely unjustified to me.


Does someone know some reason why it should be true?


NOTE (1). Since the Schrödinger equation is a first-order equation, the time evolution of $\psi$ is fixed given the initial state $\psi(x,0)$: this is the reason why we require that the probability $P(x,t)$ depends on the state at time $t$, that is $\psi(x,t)$, and its spatial derivatives. To be explicit, the requirement that $P(x,t)$ is a function only of $\psi(x,t)$ and the partial derivatives of $\psi$ with respect to $x$ all computed in $(x,t)$ means that there exists a function $p$ such that $P(x,t)=p\left(\psi(x,t),\frac{ \partial \psi}{\partial x}(x,t),...,\frac{\partial^m \psi}{\partial x^m}(x,t)\right)$.


NOTE (2). The one given above is not a mathematically rigorous formulation of the problem, but the one originally given by Bohm. So we can feel free to attach a rigorous mathematical meaning to the different properties. In particular, as for property (iv) we can formulate it in a different (and not equivalent) mathematical form, by requiring that a local conservation law holds, in the sense that there exists a function $\mathbf{j}$, such that, if we put $\mathbf{J}(x,t)=\mathbf{j}\left(\psi(x,t),\frac{ \partial \psi}{\partial x}(x,t),...,\frac{\partial^m \psi}{\partial x^m}(x,t)\right)$, we get \begin{equation} \frac{\partial P}{\partial t} + \nabla \cdot \mathbf{J} = 0. \end{equation}


NOTE (3). Similar questions are raised in the posts Nonexistence of a Probability for Real Wave Equations and Nonexistence of a Probability for the Klein-Gordon Equation. Presumably Bohm had in mind the same kind of mathematical argument to tackle these three problems, so the real question is: what mathematical tool did he envisage to use? Maybe some concept from classical field theory or the theory of partial differential equations?



Answer



Bohm's assumptions are not mathematically precise, so you must attach a mathematical interpretation to them (especially statements $2$ and $3$). Since you have not done so yourself, I will try to interpret them in a way that I find reasonable.


Definition for $P$: We will require that the probability density $P_\psi(x,t)$ of any smooth function $\psi$ be a local function of its partial derivatives at $(x,t)$.


More formally, let $\psi(x,t)$ be a smooth function and let $j^\infty_{x,t}\psi$ denote the infinite jet prolongation of $\psi$ at $(x,t)$, i.e., its formal Taylor series expansion about $(x,t)$. Then we can write $$P_\psi(x,t) = p(j_{x,t}^\infty\psi),$$ for some function $p$ defined on the jet bundle. In terms of regularity, we will require $p$ to be continuous.


This is essentially the definition you proposed for $P$. In fact, this is already problematic since we will necessarily have to work with wavefunctions which are not smooth. Therefore it is already problematic to require that $P$ depend on higher derivatives since they are not guaranteed to even exist. Nevertheless, we will allow arbitrary dependence on higher partials and apply the consistency conditions for $P$ evaluated on smooth functions. We can then recover $P$ uniquely for arbitrary $L^2$ functions by continuity.



Assumption 1: The function $p$ is real-valued and non-negative.


Assumption 2: The probability density $P_\psi(x,t)$ is a non-decreasing function of $|\psi(x,t)|$, i.e., $$P_{\psi_1}(x,t) \ge P_{\psi_2}(x,t) \iff |\psi_1(x,t)| \ge |\psi_2(x,t)|.$$


Assumption 3: The function $p$ is invariant under global phase, i.e., $$p(e^{i\theta} j^\infty_{x,t}\psi) = p(j^\infty_{x,t}\psi).$$


Assumption 4: If $\psi(x,t)$ is a normalized function, then $P_\psi(x,t)$ is likewise a normalized function.


Let us go through the assumptions one by one. Assumption $1$ is relatively straightforward as probability densities must be real and non-negative.


Property $2$ is, in my view, the most difficult to properly interpret. The way I have interpreted the property in Assumption $2$ is to say that the magnitude of the probability density at a point is directly reflective of the magnitude of the wavefunction at that point. This is what I feel to be the most direct transcription of Bohm's second property.


This assumption is in fact extremely strong, and it necessarily implies that $p$ is independent of all derivatives of $\psi$. This is essentially because the value of a smooth function and all of its derivatives can be independently prescribed at any point. This was already pointed out by @Kostas.


Lemma: Suppose that $p(j^\infty_{x,t}\psi)$ is a continuous non-decreasing function of $|\psi(x,t)|$. Then $p$ is independent of all derivatives of $\psi$, i.e., $$p(j^\infty_{x,t}\psi) = p(j^0_{x,t}\psi) = p(\psi(x,t)).$$


Proof: By Borel's theorem, given any complex sequence $(a_{n,m})_{n,m=0}^\infty,$ and any point $(x,t)$, there exists a smooth function $\psi$ such that $$\frac{\partial^{n+m}}{\partial x^n \partial t^m}\psi(x,t) = a_{n,m}.$$ Therefore we are able to vary the individual entries of the Taylor series completely independently.


Suppose that $p$ is not constant on some partial $\partial_i \psi$. Then by Borel's theorem we can find smooth functions $\psi_1$ and $\psi_2$ such that all Taylor coefficients of $\psi_1$ and $\psi_2$ agree at $(x,t)$ except for $\partial_i$. Then $$p(j_{x,t}^\infty\psi_1) \neq p(j_{x,t}^\infty\psi_2),$$ and without loss of generality we may assume that $$p(j_{x,t}^\infty\psi_1) > p(j_{x,t}^\infty\psi_2).$$ Next, we can find some other smooth function $\psi_3$ which agrees with $\psi_2$ for all Taylor coefficients at $(x,t)$ except for the constant term, $\psi_3(x,t)\neq \psi_2(x,t)$. By continuity, we can choose $\psi_3(x,t)$ slightly larger than $\psi_2(x,t)$ but still such that $$p(j_{x,t}^\infty\psi_1) > p(j_{x,t}^\infty\psi_3).$$ Therefore we have $$|\psi_1(x,t)| = |\psi_2(x,t)| < |\psi_3(x,t)|,\ \ \ \ \text{and}\ \ \ \ \ p(j_{x,t}^\infty\psi_1) > p(j_{x,t}^\infty\psi_3),$$ in contradiction to the monotonicity assumption. $\square$



Therefore we will assume that $P_\psi(x,t) = p(\psi(x,t))$ from now on. Note that at this point, we no longer need the assumption that $\psi$ is smooth.


Assumption $3$ is also essentially just a direct transcription of Property 3. If we change the energy, we effectively change the wavefunction by a global phase factor $e^{iEt}$. Since we can always make the wavefunction real and positive at any given point $(x,t)$ by an appropriate choice of a global phase factor, it follows that $p$ is independent of the phase of $\psi(x,t)$, i.e., $$p(\psi(x,t)) = p(|\psi(x,t)|).$$


Note that this assumption is actually completely unnecessary. We could've deduced the above equation by a slight modification of our lemma using Assumption $2$. I will keep it just for the sake of completeness.


Finally, we come to Assumption $4$. Bohm's statement for his Property $4$ is that the probabilities should be normalized at all times, namely for all $t$ we should have $$1 = \int_\mathbb{R} P(x,t)\ dx.$$


This has certain ambiguities however. Which time-evolution should we use? Naively, any self-adjoint operator $H$ with a spectrum which is bounded below (so that there is a lowest energy level) should be able to serve as a valid Hamiltonian. If we require that the assignment $\psi \mapsto P_\psi$ be universally valid, i.e., Hamiltonian independent, then we must require that $P(x,t)$ be normalized with respect to the unitary evolution generated by any Hamiltonian.


It can be shown that given any unitary $U$, there exists some admissible (self-adjoint, bounded below) Hamiltonian $H$ such that $U = e^{iH}$. In fact, we do not even need to consider the set of all admissible Hamiltonians, but rather just the set of all bounded Hamiltonians due to the following theorem.


Theorem: Let $\mathcal{H}$ be a Hilbert space and let $U$ be any unitary operator on $\mathcal{H}$. Then there exists a bounded self-adjoint operator $A$ (with norm at most $\pi$) such that $$U = e^{iA}.$$


Proof: This is a simple consequence of the Borel functional calculus for bounded operators applied to the principal branch of the logarithm. See here for a complete proof. $\square$


Now, let $\psi_1(x)$ be some normalized wavefunction. Let us assume without loss of generality that $P$ is normalized so that $$1 = \int_\mathbb{R} P_{\psi_1}(x)\ dx.$$ Let $\psi_2(x)$ be some other arbitrary normalized wavefunction. Let $U$ be any unitary such that $\psi_2 = U\psi_1$. Then there exists some bounded Hamiltonian such that the time-evolution brings the initial state $\psi(x,t=0) = \psi_1(x)$ to $\psi(x,t=1) = \psi_2(x)$. This means that we must have $$1 = \int_\mathbb{R} P_{\psi_1}(x)\ dx = \int_{\mathbb{R}} P_{\psi_2}(x)\ dx.$$ Since $\psi_2$ was an arbitrary normalized function, it follows that $P_\psi$ is normalized for all normalized $\psi$. We take this as our Assumption $4$.


Physically, this assumption is essentially saying that we should be able to vary the potential of the Hamiltonian so as to drive any normalized wavefunction arbitrarily close to any other normalized wavefunction. Since $P$ is conserved under this evolution, it must be normalized given any normalized wavefunction.



Note that this implies that we must have $p(0) = 0$. Otherwise $p(0) > 0$ will give a divergent integral for any normalized compactly supported function $\psi$.


Now let $y>0$. Define $\psi_y(x,t)$ to be equal to $1/\sqrt{y}$ for $x\in (0,y)$ and zero elsewhere. Then we have $$\int_\mathbb{R} |\psi_{y}(x,t)|^2\ dx = 1 = \int_\mathbb{R} p(|\psi_y(x,t)|)\ dx = \int_0^y p(1/\sqrt{y})\ dx = yp(1/\sqrt{y}).$$ Therefore we must have $$p(|\psi(x,t)|) = |\psi(x,t)|^2.$$


This is the desired claim. Of course, you might disagree with how I've interpreted some of Bohm's statements. But as you've said yourself in the question, some rigorous definitions must be assigned to these physical properties. These are simply what I felt to be the most faithful.


visible light - What is the difference between a white object and a mirror?


I was taught that something which reflects all the colors of light is white. The function of a mirror is the same, it also reflects all light. What's the difference?


Update:
But what if the white object is perfectly smooth and does not scatter light? Will it be a mirror?



Answer



The difference is the direction the light is sent in. A mirror reflects light specularly, in a single predictable direction; a white object scatters it diffusely, in all directions.



cosmology - Expanding universe - Creation of Space



Is the expansion of the space between the galaxies caused by stretching of existing space or the creation of new space?


The fact that the energy content remains constant, and is therefore not being diluted, would suggest to me that it is not being stretched and therefore must be the result of the creation of additional space.


If this is so, are there any theories as to how space is created?



Answer



The Hubble constant has units of 1/s: it gives the fractional rate at which space is expanding each second. The Hubble constant is not constant in time, which is why it is also called the Hubble parameter.
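To put a number on this (assuming $H_0 \approx 70$ km/s/Mpc, an illustrative value):

```python
Mpc = 3.086e22        # metres per megaparsec
H0 = 70e3 / Mpc       # 70 km/s/Mpc expressed in 1/s (assumed value)

print(H0)             # ~2.3e-18: fractional growth of distances per second
print(H0 * 3.15e16)   # over a billion years: ~7 percent
```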


For getting an idea of space expansion you have to imagine a cubic grid throughout the whole space where the grid/ the number of cubes does not change, but each cube is increasing in size. It is similar to a sort of "stretching" of the size of the cubes.



Space includes dark energy, whose energy density is constant and does not dilute. This is why in an advanced stage the universe is, or will be, dominated by dark energy, which prevails, or will prevail, over matter and radiation.


Local assemblies of objects in space, such as galaxies, are practically unaffected by space expansion, because their gravitational forces hold them together even as space expands.


Space expansion has been observed in the form of the redshifting of radiation from very distant objects, but to my knowledge there is no current theory explaining how space is created.


Wednesday, 26 February 2020

newtonian mechanics - Why can't we find a true inertial frame?


From one book, I read that if we release a ball horizontally on Earth, its velocity does change under very accurate observation. If we track its motion, we get full information about its motion (including its acceleration with respect to a true inertial frame).


And we can then compare it to ideal uniform linear motion, and so obtain our acceleration relative to a true inertial frame (one in which Newton's first law holds), right? So with the formulas developed this way, we get everything we ever wanted in an exactly true inertial frame, can't we?




general relativity - Non-static spherical symmetry spacetime


The Schwarzschild solution is a static, spherically symmetric metric. But I wanted to know how the space-time interval would look in a non-static case. I tried to work it out and got $$ds^2 = B\,dt^2 - C\,dt\,dr - A\,dr^2 - r^2 d\Omega^2$$ where $A,B,C$ are functions of $r$ and $t$. Being an amateur, I am not sure whether this is correct.



Answer



While Qmechanic's answer is not incorrect, I think it is misleading and misses the point of the question.


First, there is the well-known "Vaidya metric", which is a simple generalization of Schwarzschild to achieve a non-static spherically symmetric spacetime. Of course, in agreement with Birkhoff's theorem, this is also a non-empty (non-vacuum) spacetime — but the OP never restricted to vacuum solutions.


Second, the question strikes me as being more immediately about the general form of a spherically symmetric metric. There's a nice treatment given by Schutz in his chapter 10.1, which is also mostly reproduced here, as well as chapter 23.2 and box 23.3 of MTW. The answer is that yes, a spherically symmetric metric can generally be put in the form given by the OP's equation. But it's also worth pointing out that the $dt\, dr$ term can be eliminated by redefining $t$ by a linear combination of $t$ and $r$ involving $B$ and $C$ (as MTW explain).



cosmology - Why did the Big Bang produce hydrogen?


I know that the main fuel of first-generation stars was hydrogen. I know the Big Bang happened at some point in time. Now, if the strong force exists, why weren't different, higher-mass-number elements produced? Why was there only the single-proton nucleus, hydrogen?



Answer



The answer touches upon the concept of Big Bang Nucleosynthesis (BBN), which is excellently explained on a graduate level in Baumann's lecture notes.


The key idea is the following: in order to form metals (anything heavier than hydrogen and helium), you need deuterium nuclei. But deuterium nuclei are only formed significantly when the temperature of the primordial plasma falls far below the deuterium binding energy, $B \approx 2.2$ MeV. Why? Well, the formation of deuterium has to compete with the enormous number of high-energy photons in the universe at the time, which split the deuterium nuclei apart. So the photon bath has to be cool enough that most of the photons don't have enough energy to split the nuclei apart again. The relevant number is the baryon-to-photon ratio $\eta\sim10^{-9}$, i.e. for each baryon we have $10^9$ photons.
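A back-of-envelope sketch of this competition (a rough estimate with round numbers): deuterium survives roughly once fewer than one photon per baryon remains above the binding energy, i.e. when $\eta^{-1}e^{-B/T}\sim 1$:

```python
import math

B = 2.22      # deuterium binding energy, MeV
eta = 1e-9    # baryon-to-photon ratio

T_nuc = B / math.log(1 / eta)   # solve eta^-1 * exp(-B/T) = 1 for T
print(T_nuc)                    # ~0.1 MeV, i.e. far below B, as stated above
```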



Once deuterium is produced, it is almost instantly fused into helium nuclei. However, essentially no elements heavier than helium are formed in BBN, because forming them requires high enough number densities of helium nuclei to fuse from, and by the time helium fusion has begun, the relevant reaction rates are already too slow.


For a more detailed and quantitative discussion see the link to the lecture script, chapter 3 'Thermal history', the section about BBN.


hawking radiation - Why isn't black hole information loss this easy (am I missing something basic)?


Ok, so on Science channel was a special about Hawking/Susskind debating black holes, which can somehow remove information from the universe.


A) In stars, fusion converts 4 hydrogen into 1 helium, with a small fraction of the mass being converted to energy.


This happens all the time, and nobody is that amazed about it anymore. So, I have to assume that science is ok with the idea of fusion happening, and so, converting matter into energy doesn't destroy the information of the matter.


Let's take that as Assumption 1:
** 1: Converting matter to energy is OK.



B) In empty space, quantum fluctuations of the energy levels are presumed to allow "pair creation", where a particle and its anti-particle can be created out of the really tiny amounts of energy that exist in empty space. For quantum fluctuations, if I understand it, the uncertainty principle actually dictates that the amount of energy can fluctuate, allowing a pair to be created even if there "shouldn't be enough energy", because the amount of energy in a small space is uncertain.


The tiny amount of energy becomes a particle + anti-particle, for a tiny instant, and then they fall back into each other, and are converted back to the same tiny amount of energy.


Anyways, ignoring the fact that I'm fuzzy on all the requirements for pair creation -- it is generally accepted as part of quantum physics that it can happen.


So, again, I have to assume that science is ok with the idea of pair creation. And so, converting energy into matter, must also not destroy the information value of the energy, and that information value must remain even after the 2 particles annihilate and go back to being energy.


Let's take that as Assumptions 2 + 3: ** 2: Converting energy into matter is OK. ** 3: Pair creation doesn't mess up anything.


C) Ok, now black holes. My (weak) understanding of Hawking's idea on how black holes emit energy is that pair creation can happen close to the black hole. Since the position of energy is uncertain at the quantum level, some of the energy that "should be" inside the black hole's event horizon, can be treated as uncertainly outside the black hole, and then that extra energy can cause pair creation. If the black hole then pulls in both particles, then nothing special happens -- the mass/energy stays inside the black hole.


But if one or both of the particles moves away from the black hole, and somehow escapes, that can cause the energy to leave the black hole. This is assumed to be a very unlikely event, but still within possibility.


Anyways, that's my (very rough) understanding of Hawking's theory on how black holes can lose mass over time, when in theory "nothing can escape a black hole". Let's assume Hawking's correct...


If a black hole continues to lose mass over cosmic time, it will eventually disappear, and the problem is stated that when it vanishes: all of the information which the black hole had formerly consumed will be lost.


Which contradicts the Law of Conservation of Information. At least, that's how the Science channel presented it.



So, my idea/question:


** Every time the black hole loses mass, it's through the processes of energy->matter, or matter->energy->matter, involved in pair creation. Based on assumptions 1, 2 + 3 above, none of these should violate the Law of Conservation of Information. It seems that the very slow process of losing mass should also be a very slow process of "leaking" information back into the rest of the universe.


*** So, WHAT's the problem? ***

By my understanding: by the time the black hole vanishes, it's OK -- because all of its information should also have leaked back into the rest of the universe, and no information will be lost.


OK, so please tell me why this is wrong. It seems intuitive to me, but really smart people have spent years looking into this. So, am I missing something?


Thanks! -Chuck


ps. Also, if my statement of fusion, pair creation, and/or Hawking radiation is wrong -- BUT NOT IN A WAY THAT MATERIALLY AFFECTS THE QUESTION -- please overlook that and just focus on the question.



Answer



Hawking thought - and could "rigorously" deduce from semiclassical gravity - that the information has to be lost because it can't get out of the black hole interior once it gets there.



(Figure: Penrose causal diagram of a star collapsing to form a black hole, showing the event horizon, the black hole interior, and the singularity.)


To see why, look at this "Penrose causal diagram" that may be derived for a black hole solution. Diagonal lines at 45 degrees are trajectories of light, more vertical lines are time-like (trajectories of massive objects), more horizontal lines are space-like.


If you look e.g. at the yellow surface of the star (its world line), you see that it ultimately penetrates through the green event horizon into the purple black hole interior. Once the object - and the information it carries - is inside, it can no longer escape outside (don't forget: time is going up), into the light green region, because it would have to move along spacelike trajectories.


So the information carried by the star that collapsed to the black hole inevitably ends at the violet horizontal singularity and it may never be seen in a completely different region outside the black hole.


The information loss of course depends on the detailed geometry of the black hole, which is not shared by a helium nucleus, so you shouldn't be surprised that helium nuclei and black holes have different properties.


As we know today, the information is allowed to "tunnel" along spacelike trajectories a little bit in quantum gravity (i.e. string/M-theory). This process is weak but this weak "non-local process" allows the information to be preserved.


thermodynamics - Why does the law of increasing entropy, a law arising from statistics of many particles, underpin modern physics?


As far as I interpret it, the law of ever increasing entropy states that "a system will always move towards the most disordered state, never in the other direction".


Now, I understand why it would be virtually impossible for a system to decrease its entropy, just as it is virtually impossible for me to solve a Rubik's cube by making random twists. However, the (ever so small) probability remains.


Why does this law underpin so much of modern physics? Why is a theory that breaks this law useless, and why was Maxwell's demon such a problem? Does this law not just describe what is most likely to happen in complex systems, not what has to happen in all systems?



Answer



Hannesh, you are correct that the second law of thermodynamics only describes what is most likely to happen in macroscopic systems, rather than what has to happen. A system may spontaneously decrease its entropy over some time period, with a small but non-zero probability. However, the probability of this happening over and over again tends to zero, so a sustained entropy decrease becomes impossible in the limit of very long times.


This is quite different from Maxwell's demon. Maxwell's demon was a significant problem because it seemed that an intelligent being (or, more generally, any computer) capable of making very precise measurements could continuously decrease the entropy of, say, a box containing gas molecules. For anyone who doesn't know the problem: the entropy decrease is produced via a partitioning wall with a small window that the demon can open or close with negligible work input. The demon allows only fast-moving molecules to pass one way, and slow-moving ones the other way. This effectively causes heat to flow from a cold body of gas on one side of the partition to a hot body of gas on the other side. Since this demon could be a macroscopic system, you would then have a closed thermodynamic system that can deterministically decrease its entropy and keep it low for as long as it likes. This is a clear violation of the second law, because the system never tends to thermodynamic equilibrium.


The resolution, as you may know, is that the demon has to temporarily store information about the gas particles' positions and velocities in order to perform its fiendish work. If the demon is not infinite, then it must eventually delete this information to make room for more, so it can continue decreasing the entropy of the gas. Deleting this information increases the entropy of the system by just enough to counteract the cooling action of the demon, by Landauer's principle. This was first shown by Charles Bennett, I believe. The point is that even though living beings may appear to temporarily decrease the entropy of the universe, the second law always catches up with you in the end.
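To put a number on Landauer's principle (a quick aside of my own, not part of the original answer): the minimum heat dissipated when the demon erases one bit is $k_B T \ln 2$, which at room temperature is

    import math

    kB = 1.380649e-23             # Boltzmann constant, J/K
    T = 300.0                     # room temperature, K
    print(kB * T * math.log(2))   # ~2.87e-21 J per erased bit

Tiny per bit, but never zero, and over many sorting cycles it at least cancels the entropy reduction the demon achieves.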


cosmology - How do people calculate proportions of dark matter, dark energy and baryonic matter of the universe?


The Wikipedia page on dark matter mentions that the Planck mission revealed that in our universe ordinary baryonic matter, dark matter, and dark energy are present in the ratio 4.9%, 26.8% and 68.3% respectively. I do not understand exactly how this result was obtained. Did the Planck mission scan the entire universe as it is today to get these figures, or just a part of the universe? (Is it really possible to scan the entire universe if the universe is infinitely large?) Can anybody please explain the principle behind the calculation?


I also notice that the above figures are given as percentages. Can I get the absolute values of the individual quantities, like total dark energy = .... joules in the universe?


If only a part has been scanned, is it possible that these figures will change in the future, when a larger portion of the universe is scanned by another mission?




special relativity - Relation between the Dirac Algebra and the Lorentz group


In their book An Introduction to Quantum Field Theory, Peskin and Schroeder describe a trick for forming the generators of the Lorentz group from the commutators of the gamma matrices. Using the anti-commutation relations


$$\{ \gamma^\mu, \gamma^\nu\} = 2 g^{\mu \nu} \times \mathbf{1}_{n \times n}$$


the generators of the Lorentz group are formed as


$$S^{\mu\nu} = \frac {i}{4} \big[ \gamma^\mu, \gamma^\nu \big].$$


This can be seen as a special case of a general pattern: a set of basis elements (here the 16 matrices formed from products of the gamma matrices) forms a Clifford algebra of $\mathbb{R}^{p,q}$, and their commutators form the generators of a Lie group (here the Lorentz group) that preserves the quadratic form of the Clifford algebra itself.
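As a concrete check of this claim (a sketch of my own, not from the original question), one can build the $\gamma$-matrices in the Weyl representation and verify numerically that the Clifford relation holds and that the $S^{\mu\nu}$ close into the Lorentz algebra:

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    Z2 = np.zeros((2, 2), dtype=complex)

    # Weyl (chiral) representation: gamma^0 has off-diagonal identity blocks,
    # gamma^i off-diagonal Pauli blocks with a relative sign.
    gam = [np.block([[Z2, I2], [I2, Z2]])]
    gam += [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

    g = np.diag([1.0, -1.0, -1.0, -1.0])        # metric, signature (+,-,-,-)

    def S(mu, nu):
        """S^{mu nu} = (i/4) [gamma^mu, gamma^nu]"""
        return 0.25j * (gam[mu] @ gam[nu] - gam[nu] @ gam[mu])

    # Clifford algebra: {gamma^mu, gamma^nu} = 2 g^{mu nu} * 1
    for mu in range(4):
        for nu in range(4):
            acomm = gam[mu] @ gam[nu] + gam[nu] @ gam[mu]
            assert np.allclose(acomm, 2 * g[mu, nu] * np.eye(4))

    # Lorentz algebra: [S^{ab}, S^{cd}] = i (g^{bc} S^{ad} - g^{ac} S^{bd}
    #                                        - g^{bd} S^{ac} + g^{ad} S^{bc})
    for a in range(4):
        for b in range(4):
            for c in range(4):
                for d in range(4):
                    lhs = S(a, b) @ S(c, d) - S(c, d) @ S(a, b)
                    rhs = 1j * (g[b, c] * S(a, d) - g[a, c] * S(b, d)
                                - g[b, d] * S(a, c) + g[a, d] * S(b, c))
                    assert np.allclose(lhs, rhs)
    print("Clifford relation and Lorentz algebra both verified.")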



Is there a way to formalize this idea? I want to know if we take any arbitrary metric $g^{\mu\nu}$ on some space $V$, will the generators defined as $S^{\mu\nu}$ generate a Lie group whose elements are transformations on $V$ that conserve the inner product corresponding to the metric?



Answer




I want to know if we take any arbitrary metric $g_{\mu\nu}$ on some space $V$, will the generators defined as $S^{\mu\nu}$ generate a Lie group whose elements are transformations on $V$ that conserve the inner product corresponding to the metric?



Yes. The result is called the "Spin" group. A good overview is in this paper.


In general, Clifford algebras are created from an arbitrary vector space $V$ (over a field $\mathbb{F}$) and a quadratic norm $Q : V \to \mathbb{F}$, where $\mathbb{F}$ is usually (certainly by physicists) taken to be either $\mathbb{R}$ or $\mathbb{C}$. If you have a metric, that's a slightly stronger statement than just having a quadratic norm, so you can certainly use it to construct the Clifford algebra — by defining the norm as $Q(v) = g_{\mu \nu} v^\mu v^\nu$. On the other hand, if you have the norm, you can use it to define the inner product between any two vectors $v$ and $w$ by polarization: $g(v, w) = \frac{1}{2}[Q(v+w) - Q(v) - Q(w)]$. Of course, that only works if you can divide by $2$, which isn't the case for all fields. On the other hand, I can't remember seeing any useful application of Clifford algebra using a field other than $\mathbb{R}$ or $\mathbb{C}$.


In the case of spacetime, the vector space is just the set of $\gamma^\mu$ vectors, which shouldn't be thought of as complex matrices, but rather as just the usual basis vectors: $\hat{t}, \hat{x}, \hat{y}, \hat{z}$. This approach is usually called geometric algebra. The field should actually be taken to be $\mathbb{R}$ (because the complex structure we usually use in quantum mechanics actually shows up automatically in the Clifford algebra). What you get is called the spacetime algebra.


This same logic can be extended to other spaces, of any dimension and signature (including indefinite and degenerate signatures). Any two vectors in the Clifford algebra can be multiplied by each other, and thus the anticommutative product constructed — the result is called a bivector. The set of all bivectors forms the $\mathfrak{spin}$ algebra, where the product is not just the Clifford product but its commutator. More generally, we can take any even number of vectors and take their product. The invertible elements of this form give us the Spin group, related to the bivectors through exponentiation (much as the Lie group is related to the Lie algebra). And they transform vectors by conjugation, which naturally leaves the inner product invariant. So that's the answer to your question.


We also have a sort of inverse to the above:




Every Lie algebra can be represented as a bivector algebra; hence every Lie group can be represented as a spin group.



This result is found here. While they do use a sort of "doubled" Clifford algebra in general, this isn't always necessary. That paper gives a good overview of these issues (as does the one about Spin groups, though not in as much detail).


Tuesday, 25 February 2020

work - How can energy be useful when it is 'abstract'?



This topic haunted me for two years, until I gave up on it. But now I am studying engineering, and it suddenly popped out of my textbook from nowhere. I seriously need to understand it this time. I have read many books on 'energy' and it came to nothing -- maybe because the books all said 'energy' is something that doesn't 'exist'! It is 'abstract'! It is just a number that represents the state or configuration of a system. But then I see so many examples that 'use' energy to do 'work'. So the question is: if something doesn't exist in this universe, how can it be used to do something that does exist?


My problem is that I never properly understood the topic of 'energy' and all the other topics related to it (work, power, etc.).




spectroscopy - What is the spectrum of a nuclear bomb in a vacuum?


This question about 'nukes in space' mentions that the two forms of energy released from a nuclear bomb come from neutrons and photons (the latter about $10^4$ times the former).


It's mentioned that the photons are in the form of X-rays, but what is the actual spectrum of the light emitted? How much of the light comes from



  • the fission itself (assume a pure-fission bomb for simplicity), where $^{239}$Pu is split into a mish-mash of lighter elements (is this energy quantized, i.e. does it have "peaks"?)

  • black-body emission (the results being heated to $10^x$ K; is this continuous, or does strange stuff happen at very high temperatures? see the rough numbers sketched below)

  • extremely short-lived fission products
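To orient the scales (a rough aside of my own, not strictly part of the question): if the debris radiates approximately as a black body, Wien's displacement law locates the peak of the thermal continuum, and for the temperatures usually quoted for fireballs this lands in the X-ray band:

    import math

    b = 2.897771955e-3            # Wien displacement constant, m*K
    for T in (1e6, 1e7, 1e8):     # assumed debris temperatures, K
        lam = b / T                    # peak wavelength, m
        E = 1239.84e-9 / lam           # peak photon energy, eV (hc = 1239.84 eV*nm)
        print(f"T = {T:.0e} K -> peak at {lam * 1e9:.3f} nm (~{E / 1e3:.1f} keV)")

This says nothing about line structure from fission products; it only places the thermal part of the spectrum.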





homework and exercises - Gamma matrices in (2+1)



I am sure this is a very well-known question, and I have seen several similar questions on this site, but I would like to pin down the answer.


1) I know that in $(2+1)$ dimensions one can construct the $\gamma$-matrices from $2\times 2$ matrices, as in $(1+1)$ dimensions


2) But I see in several books/papers that in $(2+1)$ one can also take $\gamma$-matrices in $(3+1)$ and just remove $\gamma^3$


I do not understand why the second way is possible. Can anybody explain it?



Answer



Gamma matrices don't have a unique representation. The only requirement is that they satisfy the defining relation of the Clifford algebra $$ \{\gamma^\mu,\gamma^\nu\} = 2 \,\eta^{\mu\nu}\,\mathbf{1}\,. \tag{1} $$ The usual choice is to take the representation of smallest dimension (for obvious reasons). For space-time dimension $d$ the matrices are $n\times n$, where $n = 2^{\lfloor d/2\rfloor}$.


The construction 1) would be the one with smallest dimension and thus the preferable one. The construction 2) has bigger dimension, but we can trivially see that it satisfies the Clifford algebra, since this is inherited from the $(3+1)$-dimensional one. Indeed, any subset of $d'$ $\gamma$-matrices of a Clifford algebra in dimension $d$ satisfies its own Clifford algebra in $d'$ dimensions.


The proof is trivial: if $(1)$ holds for all $\mu,\nu \in \{1,\ldots,d\}$, then it holds for $\mu,\nu \in \{1,\ldots,d'\}$.
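To make construction 1) explicit (a sketch of my own, not part of the answer), the minimal $(2+1)$-dimensional representation is $2\times 2$ and can be built from Pauli matrices, for example $\gamma^0 = \sigma^3$, $\gamma^1 = i\sigma^1$, $\gamma^2 = i\sigma^2$:

    import numpy as np

    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)

    gammas = [s3, 1j * s1, 1j * s2]      # gamma^0, gamma^1, gamma^2
    eta = np.diag([1.0, -1.0, -1.0])     # signature (+,-,-)

    for mu in range(3):
        for nu in range(3):
            acomm = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
            assert np.allclose(acomm, 2 * eta[mu, nu] * np.eye(2))
    print("2x2 gammas satisfy the (2+1)-dimensional Clifford algebra.")

Running the same loop over any three of the four $4\times 4$ matrices of the $(3+1)$-dimensional algebra succeeds as well -- that is construction 2), a valid but reducible representation.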


general relativity - Variation of the metric with respect to the metric


For a variation of the metric $g^{\mu\nu}$ with respect to $g^{\alpha\beta}$ you might expect the result (at least I did):



\begin{equation} \frac{\delta g^{\mu\nu}}{\delta g^{\alpha\beta}}= \delta^\mu_\alpha\delta^\nu_\beta. \end{equation}


but then to preserve the fact that $g^{\mu\nu}$ is symmetric under interchange of $\mu$ and $\nu$ we should probably symmetrise the right hand side like this:


\begin{equation}\frac{\delta g^{\mu\nu}}{\delta g^{\alpha\beta}}= \delta^\mu_\alpha\delta^\nu_\beta + \delta^\mu_\beta\delta^\nu_\alpha.\end{equation}


Is this reasonable/correct? If not, why not?


It seems that I can derive some weird results if this is right (or maybe I'm just making other mistakes).



Answer



Since the metric $g_{\mu\nu}=g_{\nu\mu}$ is symmetric, we must demand that


$$\tag{1} \delta g_{\mu\nu}~=~\delta g_{\nu\mu}~=~\frac{1}{2}\left(\delta g_{\mu\nu}+\delta g_{\nu\mu}\right)~=~\frac{1}{2}\left( \delta_{\mu}^{\alpha}\delta_{\nu}^{\beta} + \delta_{\nu}^{\alpha}\delta_{\mu}^{\beta}\right)\delta g_{\alpha\beta},$$


and therefore


$$\tag{2} \frac{\delta g_{\mu\nu}}{\delta g_{\alpha\beta}} ~=~\frac{1}{2}\left( \delta_{\mu}^{\alpha}\delta_{\nu}^{\beta} + \delta_{\nu}^{\alpha}\delta_{\mu}^{\beta}\right).$$



The price we pay to treat the matrix entries $g_{\alpha\beta}$ as $n^2$ independent variables (as opposed to $\frac{n(n+1)}{2}$ symmetric elements) is that there appears a half in the off-diagonal variations.


Another check of the formalism is that the RHS and LHS of eq. (2) should be idempotents because of the chain rule. For further motivation, see e.g. this Phys.SE post.
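As a quick consistency check of eq. (2) (my own sketch, not part of the original answer), one can verify numerically that the symmetrized delta, viewed as a linear map on two-index objects, is indeed idempotent:

    import numpy as np

    n = 4
    # P[mu, nu, alpha, beta] = (delta^alpha_mu delta^beta_nu
    #                           + delta^alpha_nu delta^beta_mu) / 2
    P = np.zeros((n, n, n, n))
    for mu in range(n):
        for nu in range(n):
            for a in range(n):
                for b in range(n):
                    P[mu, nu, a, b] = 0.5 * ((mu == a) * (nu == b)
                                             + (mu == b) * (nu == a))

    # the chain rule demands that P contracted with itself reproduce P
    PP = np.einsum('mnab,abcd->mncd', P, P)
    assert np.allclose(PP, P)
    print("The symmetrizer of eq. (2) is idempotent.")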


What is "quantum locking"?


I've always assumed that "quantum locking" was a term invented by the writers of Dr Who, but this video suggests otherwise.


What is quantum locking? Is it real?



Answer



Apparently, a key detail is that the superconducting layer on the disk is very thin.


Usually superconducting levitation demonstrations use thicker layers of superconductors which completely deflect the field from magnets - so they float above the magnet but also tend to wobble around a bit.


This demo uses a very thin superconducting layer, which allows the magnetic field to penetrate the superconductor at a small number of defect sites. The small but non-zero magnetic field passing through the defect sites 'locks' the superconductor into whatever orientation is set initially - and also prevents the wobbly shaking normally seen in Meissner-effect levitation.



See: http://io9.com/5850729/quantum-locking-will-blow-your-mind--but-how-does-it-work


No 'Weeping Angels' are involved.


Conservation of information and determinism?


I'm having a hard time wrapping my head around the conservation of information principle as formulated by Susskind and others. From the most upvoted answers to this post, it seems that the principle of conservation of information is a consequence of the reversibility (unitarity) of physical processes.


Reversibility implies determinism: reversibility means that there is a one-to-one correspondence between a past state and a future state, and so given complete knowledge of the current state of the universe, we should be able to predict all future states of the universe (Laplace's famous demon).


But hasn't this type of determinism been completely refuted by Quantum Mechanics, the uncertainty principle and the probabilistic outcome of measurement? Isn't the whole point of Quantum Mechanics that this type of determinism no longer holds?


Moreover, David Wolpert proved that even in a classical, non-chaotic universe, the presence of devices that perform observation and prediction makes Laplace style determinism impossible. Doesn't Wolpert's result contradict the conservation of information as well?



So to summarize my question: How is the conservation of information compatible with the established non-determinism of the universe?



Answer



The short answer to this question is that the Schrödinger equation is deterministic and time-reversible up to the point of a measurement. Determinism says that given an initial state of a system and the laws of physics, you can calculate the state of the system after any arbitrary amount of time (positive or negative). Classically, the deterministic laws of motion are given by Newton's force laws, the Euler-Lagrange equations, and Hamilton's equations. In quantum mechanics, the law that governs the time evolution of a system is the Schrödinger equation. Quantum states are time-reversible up until the point of a measurement, at which point the wave function collapses and it is no longer possible to apply a unitary that tells you, deterministically, what the state was before. It should be noted, however, that many-worlds interpreters, who don't believe that measurements are indeterministic, disagree with this statement; they hold that even measurements are deterministic in the grand scheme of quantum mechanics. To quote Scott Aaronson:



Reversibility has been a central concept in physics since Galileo and Newton. Quantum mechanics says that the only exception to reversibility is when you take a measurement, and the Many-Worlders say not even that is an exception.



The reason that people are loose with the phrasing "information is always conserved" is that the "up until a measurement" is taken for granted as background knowledge. In general, the first things you learn about in a quantum mechanics class or textbook are what a superposition is, the Heisenberg uncertainty principle, and then the Schrödinger equation.


For an explanation of the Schrödinger equation from Wolfram:



The Schrödinger equation is the fundamental equation of physics for describing quantum mechanical behavior. It is also often called the Schrödinger wave equation, and is a partial differential equation that describes how the wavefunction of a physical system evolves over time.




The Schrödinger equation explains how quantum states evolve from one state to another. This evolution is completely deterministic and time-reversible. Remember that a quantum state is described by a wave function $|\psi\rangle$, which is a collection of probability amplitudes. The Schrödinger equation states that any given wave function $|\psi_{t_0}\rangle$ at moment $t_0$ will evolve to become $|\psi_{t_1}\rangle$ at time $t_1$, unless a measurement is made before $t_1$. Given $|\psi_{t_1}\rangle$, we can use the equation to calculate what $|\psi_{t_0}\rangle$ was.


If the electron is in an equal superposition, the wave function is:



$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ where $\alpha$ and $\beta$ are equal to $\frac{1}{\sqrt{2}}$.



The state of an electron that is spin up is $|\psi\rangle = 1|1\rangle$. Clearly, a quantum state that is in a superposition of the eigenstates of some observable is a valid ontological object. It behaves in a way completely different from an object that has collapsed into only one of the possibilities via a measurement. The problem of measurement, what it is and what constitutes one, is central to the interpretations of quantum mechanics. The most common view is that a measurement is made when the wave function collapses into one of its eigenstates. The Schrödinger equation provides a deterministic description of a state up to the point of a measurement.


Information, as defined by Susskind here, is always conserved up to the point of a measurement. This is because the Schrödinger equation describes the evolution of a quantum state deterministically up until a measurement.
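As a toy illustration of "deterministic and reversible until measured" (my own sketch, not from the original answer), evolve a qubit with a unitary generated by a random Hamiltonian, then undo the evolution with $U^\dagger$; a projective measurement is the one step that cannot be undone this way:

    import numpy as np

    rng = np.random.default_rng(0)

    # random 2x2 Hermitian Hamiltonian and its unitary U = exp(-iHt), t = 1
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    H = (A + A.conj().T) / 2
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

    psi0 = np.array([1.0, 0.0], dtype=complex)   # start in |0>
    psi1 = U @ psi0                              # Schrodinger evolution
    psi_back = U.conj().T @ psi1                 # run the evolution backwards
    print(np.allclose(psi_back, psi0))           # True: nothing was lost

    # a projective measurement in the computational basis breaks the chain:
    p0 = abs(psi1[0]) ** 2
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1.0
    # applying U^dagger to `collapsed` does not, in general, return |0>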


The black hole information paradox can be succinctly stated as this:




Quantum states evolve unitarily, governed by the Schrödinger equation. However, when a particle passes through the event horizon of a black hole and is later radiated out via Hawking radiation, it is no longer in a pure quantum state (meaning a measurement was made). A measurement could not have been made, because the equivalence principle of general relativity assures us that there is nothing special going on at the event horizon. How can all of this be true?



This paradox would not be a paradox if the laws of quantum mechanics didn't give a unitary, deterministic evolution for quantum states up to a measurement. If measurements are the only time unitarity breaks down, and the equivalence principle tells us a measurement cannot be happening at the horizon of a black hole, how can unitarity break down and cause the Hawking radiation to be thermal, and therefore uncorrelated with the in-falling information? Scott Aaronson gave a talk about quantum information theory and its application to this paradox, as well as quantum public-key cryptography. In it he explains:



The Second Law says that entropy never decreases, and thus the whole universe is undergoing a mixing process (even though the microscopic laws are reversible).


[After having described how black holes seem to destroy information in contradiction to the second law] This means that, when bits of information are thrown into a black hole, the bits seem to disappear from the universe, thus violating the Second Law.


So let’s come back to Alice. What does she see? Suppose she knows the complete quantum state $|\psi\rangle$ (we’ll assume for simplicity that it’s pure) of all the infalling matter. Then, after collapse to a black hole and Hawking evaporation, what’s come out is thermal radiation in a mixed state $\rho$. This is a problem. We’d like to think of the laws of physics as just applying one huge unitary transformation to the quantum state of the world. But there’s no unitary U that can be applied to a pure state $|\psi\rangle$ to get a mixed state $\rho$. Hawking proposed that black holes were simply a case where unitarity broke down, and pure states evolved into mixed states. That is, he again thought that black holes were exceptions to the laws that hold everywhere else.



The information paradox was considered to be solved via Susskind's proposal of black hole complementarity and the holographic principle. Later, AMPS showed that the solution is not as simple as stated and that further work needs to be done. Currently the field of physics is engaged in an amazingly beautiful collection of ideas and solutions being proposed to solve the black hole information paradox as well as the AMPS paradox. At the heart of all of these proposals, however, is the belief that information is conserved up to the point of a measurement.


Monday, 24 February 2020

computational physics - Coordinate system for numerical simulation of general relativity


Let's say I want to simulate the differential equations of GR with some numerical method. I can express the Einstein tensor in terms of the Christoffel symbols, which in turn can be expressed in terms of the metric and its first and second derivatives.


Now I can impose a set of coordinates $[t, x, y, z]$ and set up a big Cartesian grid. Each point contains information about the metric at that point, as seen by an observer at infinity. Initially the space is empty, so the metric reduces to the Minkowski metric.


Now I place some mass at the origin, with finite density. Assume that the mass covers many grid points and that the grid extends far enough that the metric at its edge is approximately flat.


Now I want to simulate what happens. To do this I rearrange the equations and solve for $\frac{\partial^2}{\partial t^2}g_{\mu\nu}$, which should govern the evolution of the system. Since I know the initial conditions of the metric and $T_{\mu\nu}$, I should be able to simulate the dynamics of what happens.


(I assume the metric outside would converge to the exterior Schwarzschild metric, while the region inside the mass would converge to the interior Schwarzschild metric. Additionally, a gravitational wave should radiate away because of the sudden appearance of a mass.)


However, by doing so I have placed the spacetime itself on a background grid, which seems fishy to me.


Question 1: How does the coordinate system influence the equations? For example, I could have chosen $[t, r, \phi, \theta]$ and would have gotten the same equations, since they involve only ordinary derivatives. Am I right to assume that the properties of the coordinate system only appear during numerical integration?


Question 2: What physical significance does this "Cartesian grid" system have? If I look at a point near the surface of the mass after a long time, where is this point in the spacetime? A stationary observer would follow the curvature and may already have fallen into the mass. Does this mean my coordinate system itself "moves" along? How can I get a "stationary" (constant proper radial distance) coordinate system?



Question 3: Since I have the metric at every grid point, I could calculate (numerically) geodesics through this spacetime and find the path of an infalling observer, right? (A rough sketch of this is included at the end of this post.)


Question 4: Would this work for simulating non-singular spacetimes? Or is this approach too naive?


edit1: To clarify question 1: different coordinate systems have artefacts of their own. Instead of a Cartesian grid I could use Schwarzschild coordinates. If I expand the whole equation I get the same form, because it is coordinate independent. However, the resulting metric would look different (the same metric, but expressed in another coordinate system). I'm a bit confused because the metric and the coordinate system are tied together: if I'm solving for the metric itself, I still need to provide the coordinate system, and I don't understand how I introduce the coordinate system.
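To make question 3 concrete, here is roughly the kind of integrator I have in mind (a sketch of my own; it uses an analytic Schwarzschild metric in place of stored grid data, sets $G = c = 1$, $M = 1$, and restricts to radial motion):

    import numpy as np

    M = 1.0

    def metric(x):
        """g_{mu nu} at x = (t, r): radial Schwarzschild slice, signature (-,+)."""
        f = 1.0 - 2.0 * M / x[1]
        return np.diag([-f, 1.0 / f])

    def christoffel(x, h=1e-6):
        """Gamma^a_{bc} from centered finite differences of the metric."""
        ginv = np.linalg.inv(metric(x))
        dg = np.zeros((2, 2, 2))          # dg[c] = d_c g_{ab}
        for c in range(2):
            dx = np.zeros(2)
            dx[c] = h
            dg[c] = (metric(x + dx) - metric(x - dx)) / (2 * h)
        Gamma = np.zeros((2, 2, 2))
        for a in range(2):
            for b in range(2):
                for c in range(2):
                    Gamma[a, b, c] = 0.5 * sum(
                        ginv[a, d] * (dg[b][d, c] + dg[c][d, b] - dg[d][b, c])
                        for d in range(2))
        return Gamma

    def rhs(s):
        """Geodesic equation: dx/dtau = u, du/dtau = -Gamma^a_{bc} u^b u^c."""
        x, u = s[:2], s[2:]
        du = -np.einsum('abc,b,c->a', christoffel(x), u, u)
        return np.concatenate([u, du])

    # observer released from rest at r = 10 M; u^t fixed by g_{ab} u^a u^b = -1
    r0 = 10.0
    state = np.array([0.0, r0, 1.0 / np.sqrt(1.0 - 2.0 * M / r0), 0.0])
    tau, dtau = 0.0, 0.01
    while state[1] > 2.2 * M:             # stop shortly outside the horizon
        k1 = rhs(state)                   # classic RK4 step
        k2 = rhs(state + 0.5 * dtau * k1)
        k3 = rhs(state + 0.5 * dtau * k2)
        k4 = rhs(state + dtau * k3)
        state = state + dtau / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        tau += dtau
    print(f"reached r = {state[1]:.3f} M at proper time tau = {tau:.2f}")

On a real grid the metric function would be replaced by interpolation of the stored $g_{\mu\nu}$ values, but the structure -- finite-difference the Christoffel symbols, then integrate $\ddot{x}^a = -\Gamma^a_{bc}\dot{x}^b\dot{x}^c$ -- stays the same.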




Understanding Stagnation point in pitot fluid

What is stagnation point in fluid mechanics. At the open end of the pitot tube the velocity of the fluid becomes zero.But that should result...