Monday, 31 August 2020

general relativity - What happens when a "matter-black-hole" and an "anti-matter-black-hole" collide?


Let's say we have one black hole that formed through the collapse of hydrogen gas and another that formed through the collapse of anti-hydrogen gas. What happens when they collide? Do they (1) coalesce into a single black hole or do they (2) "annihilate" into radiation?


One would expect (1) to be the case if the No Hair Theorem were to hold. So I guess what I'm really asking for is a modern understanding of this theorem and its applicability given what we know today.




experimental physics - Experimentally measure velocity/momentum of a particle in quantum mechanics


In the context of quantum mechanics one cannot measure the velocity of a particle by measuring its position at two quick instants of time and dividing by the time interval. That is, $$ v = \frac{x_2 - x_1}{t_2 - t_1} $$ does not hold as just after the first measurement the wavefunction of the particle "collapses".


So, experimentally, how exactly do we measure the velocity (or, say, momentum) of a particle?


One way that occurs to me is to measure the particle's de Broglie wavelength $\lambda$ and use $$p = \frac{h}{\lambda}$$ and $$v = \frac{p}{m}$$ to determine the particle's velocity. Is this the way it is done? Is there any other way?
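
As a rough numerical illustration of the de Broglie route (my own sketch, with a hypothetical measured wavelength; assumes a non-relativistic electron):

```python
# Minimal numeric sketch: measure lambda, then p = h/lambda, v = p/m.
h = 6.62607015e-34    # Planck constant, J s
m_e = 9.1093837e-31   # electron mass, kg

lam = 1e-9            # hypothetical measured de Broglie wavelength: 1 nm
p = h / lam           # momentum, kg m/s
v = p / m_e           # velocity, m/s
print(f"p = {p:.3e} kg m/s, v = {v:.3e} m/s")   # v ~ 7.3e5 m/s
```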




Sunday, 30 August 2020

nucleosynthesis - What could explain the presence of Technetium in the spectral lines of stars?


So, I understand that Tc doesn't exist in nature [though I don't know why every reference I see regarding Tc says that and then goes on to state that it is found in some stars...], but if that's the case, then why is it found in some stars? Furthermore, why is this element skipped during the regular process of nucleosynthesis? And why do some stars magically fuse technetium while others don't? Lastly, how many technetium stars have been discovered? Thanks much.


***Just a note: I was looking at this all again and started thinking about how technetium is way heavier than iron, so it's not formed in regular nucleosynthesis, right? Which means the Tc is left over from supernova explosions, right? So, if that's the case, why is it not produced naturally? What is nature's aversion to this element?




space - Keeping air in a well


Let's say I've got an Earth-like planet with no atmosphere: it's just a barren ball of rock. I want to live there, but I don't like domes, so instead I'm just going to dig a big hole and let gravity keep the air in.


How deep a hole do I need?


According to a chart I found, the density of the atmosphere drops to pretty much zero by about 50km, at the top of the stratosphere. But 'pretty much zero' is not zero; the mesosphere beyond that extends up to about 80km and while vanishingly thin is responsible for dealing with most meteors.


If my hole is a mere 50km deep, then, some of my air is going to diffuse out of the hole and onto the planet's surface. But the surface of my planet is largely flat; there's nowhere for the air to go, so it's just going to hang around and form a dynamic equilibrium. (Unlike, say, if I built a 50km wall and tried to keep the air inside. Air would leak over the top of the wall, fall down into the vacuum on the other side, and be lost forever. Which is why the Ringworld had walls 1000km high.)


So I don't really know how shallow a hole I can get away with. I can replace the air, but I would like it to go without maintenance for at least small geological timescales. Any advice before I start up the earth-moving equipment?


(Yes, it's SF worldbuilding.)



Answer



Even a normal planet doesn't permanently lock its atmosphere: a little bit of it is creeping out all the time. The air molecules are distributed according to a Maxwell-Boltzmann distribution, which falls off to zero exponentially. A small fraction of that air will always be above escape velocity and will disappear into space. The distribution of air re-thermalizes, and thus another fraction is lost to space. The fraction that is above escape velocity depends on the mass of the molecule: it's appreciable for helium on Earth (popped balloons are gone forever).


For your deep well, you'd have to consider the shape of the Maxwell-Boltzmann distribution and the variation of pressure with altitude (and include a non-Earth "g"). Frame the problem in terms of the amount of loss that you're comfortable with--- something so small that it won't be missed or can easily be replenished.
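
To make that concrete, here is a small sketch (my own illustration; the temperature is an arbitrary example) of the tail fraction of a Maxwell-Boltzmann speed distribution above a given escape velocity:

```python
# Fraction of a Maxwell-Boltzmann speed distribution above v_esc
# (per thermalization). Temperature and masses below are illustrative.
import math

k_B = 1.380649e-23   # J/K

def escape_fraction(T, m, v_esc):
    # Tail of the M-B speed pdf: erfc(x) + (2/sqrt(pi)) x exp(-x^2),
    # with x = v_esc / sqrt(2 k_B T / m).
    x = v_esc / math.sqrt(2 * k_B * T / m)
    return math.erfc(x) + 2*x*math.exp(-x*x)/math.sqrt(math.pi)

amu = 1.66054e-27
v_esc = 11.2e3   # Earth's escape speed, m/s
print(escape_fraction(1000.0, 28*amu, v_esc))  # N2: essentially zero
print(escape_fraction(1000.0, 4*amu, v_esc))   # He: small but far larger
```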



Someone who's actually engineering this might also want to chill the upper layer of gas with some kind of large-scale air conditioning. That would reduce the loss so that the hole wouldn't need to be as deep. Maybe a greenhouse effect could be useful to keep the upper layer cold and the lower layer warm. After all, who needs to see the sun?


general relativity - What methods can astronomers use to find a black hole?


How can astronomers say we know there are black holes at the centre of each galaxy?


What methods of indirect detection are there to know where and how big a black hole is?



Answer



There are three main feasible ways of detecting a black hole:




  • Gravitational lensing: the strong gravitational attraction of a black hole bends spacetime, and the light coming from nearby stars (nearby in the sense of being in the same area of our sky) is bent inwards. There are a few well-known distortion types due to gravity, but mainly we can see galaxies, which are more or less elliptical, bent into pancake shapes.







  • Accretion disks and jets: as the black hole "sucks in" dust and other matter from nearby space, the matter is accelerated to relativistic velocities and emits X-rays as it goes to die inside the event horizon.






  • Stars orbiting black holes: if a star is orbiting a black hole, it will appear to be orbiting empty space (since we basically can't see a black hole directly).







Other ways, like Hawking radiation, are only theoretically possible for now - we might be able to see old mini black holes "popping", but it's not really clear how exactly that would happen, and none has been seen so far.


particle physics - What is chirality?



I actually wanted to make the title "What is the difference between chirality and helicity?", but I didn't do that because I don't properly understand what chirality is.


I have gone through this Wikipedia article: chirality, to get the meaning of chirality, and what I get from there is that something is said to have chirality if it is not identical to its mirror image.


But I have often seen people saying that a massless particle has the same helicity (handedness, I think) and chirality. If chirality has the above definition, then how can people say that?


Again, for a massive particle we can change the helicity by changing our reference frame. OK, I understand that, but how does that compare with chirality?


I have read these: Does the concept of both helicity and chirality make sense for a massive Dirac spinor? and many others on this site. But I didn't find (or maybe didn't understand) the answer there that I am looking for.




experimental physics - Photometer: measured Irradiance L converted to photon rate


I am conducting an experiment in which the power meter reading of a $410\,\mathrm{nm}$ narrow-bandpass stimulus is noted to be 30 $\frac{\mu W}{cm^2}$ at a distance of 1 inch from the light source.


I wish to convert this to $\frac{\text{photon}}{ cm^{2} s}.$


Can anyone tell me how to do this?
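
This is not from the original post, but the conversion is standard arithmetic: each photon at $410\,\mathrm{nm}$ carries energy $E = hc/\lambda$, so the photon flux is the irradiance divided by $E$. A minimal sketch with the numbers above:

```python
# Photon energy E = h*c/lambda; photon flux = irradiance / E.
h = 6.62607015e-34    # J s
c = 2.99792458e8      # m/s
lam = 410e-9          # m

E_photon = h * c / lam          # ~4.85e-19 J per photon
irradiance = 30e-6              # W/cm^2, from the question
flux = irradiance / E_photon    # photons / (cm^2 s)
print(f"{flux:.2e} photons per cm^2 per second")   # ~6.2e13
```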




Saturday, 29 August 2020

special relativity - What is the time component of velocity of a light ray?


If we have a light ray $x^\mu$ with velocity $c$, what is $c^0$ (the time component)?




homework and exercises - How to measure percentage of nitrogen present in Nitro Coffee


Starbucks has launched nitro coffee. Unlike a carbonated drink, this coffee contains nitrogen gas, which makes that peculiar downdraft waviness in the drink. In carbonated beverages, the gas effervesces and moves upward. Now, the carbon dioxide percentage in carbonated beverages can be measured using a cup tester.


How do we measure the nitrogen percentage in nitro coffee?


I found this video by Professor Philip Moriarty on nitrogenated drinks.



Answer



The answer may depend on how accurate you need the result to be, and what fraction of nitrogen bubbles you expect to be present.


In principle, the presence of nitrogen will lower the density of the liquid, but so does temperature. You need to find a method to distinguish the two - recognizing that the nitrogen will slowly make its way to the surface and disappear.


Given that these are very tiny bubbles, I would probably look for the bulk modulus of the material. As you know, sound travels through a liquid with a speed given by the bulk modulus:


$$c = \sqrt{\frac{K}{\rho}}$$


Now if you add some bubbles, they will greatly affect the compressibility of the liquid. You may have noticed this phenomenon when you make a cup of hot chocolate from powder and boiled water. As you stir the cup, the sound of the stirring starts very low, and increases as the powder dissolves (and the tiny gas bubbles that are being created as the powder dissolves are moving to the surface and disappearing). As the bulk modulus increases, the speed of sound goes up and the resonant frequency of sound bouncing around in the cup increases.



You might be able to take advantage of this effect by setting up a liquid-proof transmit/receive device that you can immerse in the liquid. Measure the transit time of sound - it will relate to the nitrogen content. Effectively, the very small bubbles are "very compressible" compared to the liquid; so if we have a small fraction (by volume) $f$ of nitrogen ("air") in the liquid, we can consider the displacement at a given stress to give the effective bulk modulus. We find


$$\frac{1}{K_{eff}}=\frac{f}{K_a}+\frac{(1-f)}{K_w}$$


$$K_{eff} = \frac{K_{w} \times K_{a}}{f\cdot K_{w} + (1-f)\cdot K_{a}}$$


We can rearrange this, assuming that $f\ll 1$ and $K_a\ll K_w$, to


$$K_{eff} = K_w\left(1-\frac{K_w}{K_a}\cdot f\right)$$


Because the bulk modulus of air is so much lower than that of water, a small fraction of air has a large impact on sound propagation. So this is a very sensitive test.


We might want to take account of the change in density (but that is a much smaller effect):


$$\rho_{eff} = (1-f)\rho_w$$


Putting this into the equation for the velocity of sound, we get


$$c=\sqrt{\frac{K_w}{\rho_w}}\sqrt{\frac{1-\frac{K_w}{K_a}\cdot f}{1-f}}$$



If we can assume that $K_a\ll K_w$, and that $f\ll 1$, then the reduction in sound speed for a given fraction $f$ will be roughly given by


$$c(f) = c(0) \left(1-\frac12\left(\frac{K_w}{K_a}-1\right)\cdot f\right)$$


So if you measure the drop in speed of sound, you can use this equation to get a good estimate of the volume fraction of bubbles. This assumes that the amplitude of the sound is small enough that the bubbles don't collapse / dissolve, and that you calculate $K_a$ correctly - it needs to be the adiabatic bulk modulus (since sound transmits through air adiabatically).


The ratio $\frac{K_w}{K_a}$ is roughly 15,000, so if you measure the resonant frequency of a cavity filled with your mixture (for example by tapping a spoon against the bottom of a cup filled with your coffee) the frequency shift can be calculated as follows:


$$\frac{\Delta \nu}{\nu}=\frac{\Delta c}{c}$$


Let's look for the change in gas volume fraction required to shift the resonant frequency of a cavity filled with the gas/liquid mixture (assumptions as before) by half a note (1/12th of an octave):


$$\frac{\Delta \nu}{\nu}=2^{1/12}-1\approx \frac{\ln 2}{12}$$


Combining with the earlier expression for $c$ as a function of $f$ I get


$$f = \frac{2 K_a \ln 2}{12\, K_w} \approx 7.7\cdot 10^{-6}$$


So you can expect the tone to change by half a note (1/12th of an octave) for a change in air fraction of 7.7 ppm (parts per million). That's a pretty sensitive test, and it helps explain the "hot chocolate effect".
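
A quick numeric re-check of the above (my own sketch; I take $K_a \approx \gamma P_{\rm atm}$ for the adiabatic bulk modulus of air):

```python
# Re-checking the sensitivity estimate for the gas/liquid mixture.
import math

K_w = 2.2e9                  # bulk modulus of water, Pa
K_a = 1.4 * 101325           # adiabatic bulk modulus of air, ~1.4e5 Pa
ratio = K_w / K_a            # ~15,500

# Gas fraction giving a one-semitone (ln 2 / 12) relative frequency shift:
f = 2 * math.log(2) / (12 * ratio)
print(ratio, f)              # f ~ 7e-6, i.e. a few parts per million
```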



quantum field theory - Lie algebra of axial charges


Starting from the Lagrangian (linear sigma model without symmetry breaking; here $N$ is the nucleon doublet and $\tau_a$ are the Pauli matrices)


$L=\bar Ni\gamma^\mu \partial_\mu N+ \frac{1}{2} \partial_\mu\sigma\partial^\mu\sigma+\frac{1}{2}\partial_\mu\pi_a\partial^\mu\pi_a+g\bar N(\sigma+i\gamma_5\pi_a \tau_a)N$


we can construct conserved currents using Noether's theorem applied to the $SU(2)_L\otimes SU(2)_R$ symmetry: we get three currents for each $SU(2)$. By adding and subtracting them, we obtain the vector and axial currents.
We could have obtained the vector charges quickly by observing that they are just the isospin charges, so nucleons transform as an $SU(2)$ doublet (fundamental representation), pions as a triplet (adjoint representation), and the sigma as a singlet (so basically it does not transform):


$V_a=-i\int d^3x \,\,[iN^\dagger\frac{\tau_a}{2}N+\dot\pi_b(-i\epsilon_{abc})\pi_c]$



But if I wanted to do the same with axial charges, what Lie algebra/representation must I use for pions and sigma?
I mean, axial charges are


$A_a=-i\int d^3x \,\,[iN^\dagger\frac{\tau_a}{2}\gamma_5N+i(\sigma\dot\pi_a-\dot\sigma\pi_a)]$


and I would like to reproduce the second term using a representation of the Lie algebra generators of the axial symmetry which act on $\sigma$ and $\pi$, but I don't know the algebra (I think it is $SU(2)$), nor the representation to use.
I tried to reproduce that form using the three matrices


$T^1=\begin{bmatrix} 0&-i&0&0\\i&0&0&0\\0&0&0&0\\0&0&0&0 \end{bmatrix}\quad T^2=\begin{bmatrix} 0&0&-i&0\\0&0&0&0\\i&0&0&0\\0&0&0&0 \end{bmatrix}\quad T^3=\begin{bmatrix} 0&0&0&-i\\0&0&0&0\\0&0&0&0\\i&0&0&0 \end{bmatrix}$


which should act on the vector $(\sigma,\pi_1,\pi_2,\pi_3)$, but I calculated their commutators and they don't form an algebra, so I think I'm going wrong somewhere in my reasoning.



Answer



In the linear sigma model, the chiral action on the pion fields can be implemented on the following matrix combination of the fields:


$$U(2) \ni \Sigma = \sigma + i \tau^a \pi_a $$



An element $(U_L = \exp(\frac{i}{2}\theta^{(L)}_a \tau^a),\ U_R = \exp(\frac{i}{2}\theta^{(R)}_a \tau^a)) \in SU(2)_L \otimes SU(2)_R$ acts on $\Sigma$ as follows:


$$\Sigma \rightarrow \Sigma' = U_L \Sigma U_R^{\dagger}$$


The kinetic term of the Lagrangian in the matrix representation is given by:


$$L_{kin} = \frac{1}{2} \partial_{\mu}\Sigma \partial^{\mu}\Sigma^{\dagger}$$.


This term is manifestly invariant under all transformations. The interaction term also has a manifestly invariant form:


$$L_{int} = \bar{N}_L \Sigma N_R+ \bar{N}_R \Sigma^{\dagger} N_L$$.


where $N_{L,R} = \frac{1}{2}(1\pm \gamma_5)N$. Thus the whole Lagrangian is invariant under the chiral transformations.


The vector transformation is generated by the subgroup characterized by:


$$\theta^{(L)} = \theta^{(R)} = \theta^{(V)}$$


The axial transformation is generated by the subset characterized by:



$$\theta^{(L)} = -\theta^{(R)} = \theta^{(A)}$$


Substituting into the transformation equations of $\Sigma$ and keeping only the linear terms (this is sufficient for the application of Noether's theorem), we obtain:


-Vector transformation:


$$ \pi_a' = \pi_a +\epsilon_{abc}\theta^{(V)}_b \pi_c $$


$$ \sigma' = \sigma$$


-Axial transformation:


$$ \pi_a' = \pi_a +\theta^{(A)}_a \sigma $$


$$ \sigma' = \sigma - \theta^{(A)}_a \pi_a$$


Now it is not hard to see that these transformations generate the correct contributions of the pionic fields to the currents.
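
As a check (my own addition, not part of the original answer), the following numpy snippet builds the questioner's three matrices $T_a$ together with the vector rotation generators $V_a$ acting on $(\sigma,\pi_1,\pi_2,\pi_3)$, and verifies that the axial generators close only into the vector ones, $[T_a,T_b]=i\epsilon_{abc}V_c$: the $T_a$ alone are not supposed to form an algebra, but together with the $V_a$ they generate $so(4)\simeq su(2)_L\oplus su(2)_R$:

```python
# Check: the axial matrices T_a close into the vector (isospin) rotations
# V_a, not among themselves: [T_a, T_b] = i eps_{abc} V_c. Together they
# form so(4) ~ su(2) x su(2), acting on (sigma, pi_1, pi_2, pi_3).
import numpy as np

eps = np.zeros((3, 3, 3))
for (a, b, c) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

T = np.zeros((3, 4, 4), dtype=complex)   # axial: mix sigma with pi_a
V = np.zeros((3, 4, 4), dtype=complex)   # vector: rotate the pion triplet
for a in range(3):
    T[a, 0, a+1], T[a, a+1, 0] = -1j, 1j
    V[a, 1:, 1:] = -1j * eps[a]

comm = lambda X, Y: X @ Y - Y @ X
rhs = lambda G, a, b: 1j * np.einsum('c,cij->ij', eps[a, b], G)
for a in range(3):
    for b in range(3):
        assert np.allclose(comm(T[a], T[b]), rhs(V, a, b))
        assert np.allclose(comm(V[a], V[b]), rhs(V, a, b))
        assert np.allclose(comm(V[a], T[b]), rhs(T, a, b))
print("so(4) commutation relations verified")
```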


newtonian mechanics - Free body diagram of block on accelerating wedge



Consider the following system:


Block on wedge


I am thoroughly confused about certain aspects of the situation described in this diagram in which a block is placed on a wedge inclined at an angle θ. (Assume no friction everywhere)


Let us consider a few different cases:


Firstly, when the wedge is accelerating toward the left, if I were to observe the system from the ground (assumed to be an inertial reference frame), what will I see? Will I see the block stay put on the wedge and accelerate along with it toward the left, or will I see it move down the inclined plane, which is itself moving leftwards?


Secondly, in some problems, they have mentioned that the block is accelerating "down the inclined plane with acceleration $a$ w.r.t the wedge". In such problems, I pick the wedge as my reference frame, introduce a pseudo force and deal with the situation. However, if I were to observe the block from the ground, what would its motion look like to me?


Thirdly, when drawing the free body diagram of a block that is given to be "moving down an inclined plane", in which direction should I assume its acceleration? Directly downward or along the plane?


Fourthly, if given that the block doesn't "slip over the wedge", what is the condition to be used?


As you may see from all this, I am spectacularly confused by all this changing of reference frames and accelerations. If anyone could please sum it up concisely, it would be so so helpful for me. I hope that I have conveyed my doubts clearly. If more clarity is required, please let me know and I will edit my question accordingly. MUCH thanks in advance :) Regards.



Answer




Rather than answer your individual questions I will give you an overview and then discuss some of the points that you have raised.
There are many ways of tackling such problems, but drawing a few FBDs together with some coordinate axes is always a good way to start.


Free body diagrams of the block and the wedge


I will use the laboratory frame of reference, as it is then perhaps easier to describe what one sees from that frame, and I will further assume that there is no friction and that everything starts from rest.
The other important assumption for the first part of the analysis is that the block and the wedge stay in contact with one another. Newton's second law can then be applied, which will yield equations with the vertical and horizontal accelerations of the block, $z$ and $x$, the horizontal acceleration of the wedge, $X$, and the normal reaction between the block and the wedge, $N$, as the four unknowns.
The problem is that application of Newton's second law only yields three equations.


As with a lot of mechanics problems the fourth equation comes from the geometry of the system.
The block keeps in contact with the wedge and relative to the wedge it slides down the wedge at an angle $\theta$.
That is, if you sit on the wedge you will see the block accelerating down the incline but staying in contact. The downward acceleration of the block relative to the wedge is $z$ (the wedge has no downward movement, as the table is assumed immovable) and the horizontal acceleration of the block relative to the wedge is $x-X$.


The acceleration vector diagram looks like this:





It yields the fourth equation $\tan \theta = \dfrac{z}{x-X}$


I hope that this is sufficient to answer all your questions?


The wedge has to go left and the block towards the right. This must be so because the net horizontal force on the block-wedge system is zero, and so the centre of mass of the system does not move.
Using this idea one can get an equation linking the horizontal acceleration of the block $x$ and that of the wedge $X$ directly: $m_1\,x + m_2\,X = 0 \Rightarrow X = -\dfrac{m_1\, x}{m_2}$.
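
If it helps, the four equations can also be solved numerically. The sketch below is my own illustration (with my own sign conventions: $x$ and $X$ positive to the right, $z$ positive downward) and checks the result against the centre-of-mass relation and the closed-form wedge acceleration:

```python
# Numerical sketch: block m1 slides down, free wedge m2 recoils left.
# Unknowns: [x, z, X, N].
import numpy as np

m1, m2, g = 1.0, 2.0, 9.81
theta = np.radians(30)
s, c = np.sin(theta), np.cos(theta)

A = np.array([[m1, 0.0, 0.0, -s],    # m1*x = N*sin(theta)
              [0.0, m1, 0.0,  c],    # m1*z = m1*g - N*cos(theta)
              [0.0, 0.0, m2,  s],    # m2*X = -N*sin(theta)
              [s,  -c,  -s, 0.0]])   # z = (x - X)*tan(theta), times cos
b = np.array([0.0, m1*g, 0.0, 0.0])
x, z, X, N = np.linalg.solve(A, b)

print(x, z, X, N)
print(m1*x + m2*X)                 # centre-of-mass relation: should be 0
print(-m1*g*s*c/(m2 + m1*s**2))    # closed-form wedge acceleration == X
```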


If for some reason the acceleration of the wedge to the left is greater than $X$ in the example above, e.g. due to an external horizontal force on the wedge acting to the left, then the situation becomes more complicated.
Assume that the force is such that the horizontal acceleration of the wedge to the left stays constant with magnitude $Y$.


The normal force between the wedge and the block will decrease, so the downward acceleration of the block $z$ will increase, whereas the horizontal acceleration of the block $x$ will decrease; the block will still stay in contact with the wedge.
In the acceleration diagram, remember that because the acceleration of the wedge is to the left, the magnitude of $x-Y$ will increase, as will the magnitude of $z$, to ensure that the block stays in contact with the wedge.
So if you sit on the wedge you will see the block staying in contact with the wedge but with a greater acceleration downwards than before.

Sitting in the laboratory frame you will again see the block accelerating down the wedge, but with a trajectory whose angle with the horizontal is greater than the angle of the wedge $\theta$.


The limiting case is reached when the downward acceleration of the block is $g (= z) $ and its horizontal acceleration $x$ is zero.


So in this limiting case $\tan \theta = \dfrac{g}{|Y|}$


Any further increase in the horizontal acceleration of the wedge to the left will result in the block losing contact with the wedge and undergoing free fall.


I am not entirely sure about the last part of the analysis, but the formula for the limiting acceleration $Y$ seems to predict what one might expect.
As the angle $\theta$ gets smaller and smaller, the horizontal acceleration $Y$ of the wedge needed to just keep contact with the block has to get bigger and bigger, whereas as the angle $\theta$ tends towards $90^\circ$ the required acceleration $Y$ gets smaller and smaller.


hilbert space - How do you subtract colors and divide them by irrational numbers? (Gluons)



There is a gluon that is $$\frac{1}{\sqrt{6}} (red \cdot\overline{red} + blue\cdot\overline{blue} - 2\cdot green \cdot\overline{green})$$ This confuses me because I do not understand how adding, subtracting and dividing these colors would work. I know that in matrix form the corresponding generator is the 8th Gell-Mann matrix $$ A = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix}$$ This still confuses me; please help me.




black holes - Why was M87 targeted for the Event Horizon Telescope instead of Sagittarius A*?


The first image of a black hole has been released today, April 10th, 2019. The team targeted the black hole at the center of the M87 galaxy.


Why didn't the team target Sagittarius A* at the center of our own galaxy? Intuitively, it would seem to be a better target as it is closer to us.



Answer



Of course they targeted Sgr A* as well.


I think though that this is a more difficult target to get good images of.


The black hole is about 1500 times less massive than the one in M87, but is about 2000 times closer. So the angular scale of the event horizons should be similar. However, Sgr A* is a fairly dormant black hole and may not be illuminated so well, and there is more scattering material between us and it than towards M87.


A bigger problem may be variability timescales$^{\dagger}$. The black hole in M87 is light days across, so images can be combined across several days of observing. Sgr A* is light minutes across, so rapid variability could be a problem.


The penultimate paragraph of the initial Event Horizon Telescope paper says:




Another primary EHT source, Sgr A*, has a precisely measured mass three orders of magnitude smaller than that of M87*, with dynamical timescales of minutes instead of days. Observing the shadow of Sgr A* will require accounting for this variability and mitigation of scattering effects caused by the interstellar medium



$\dagger$ The accretion flow into a black hole is turbulent and variable. However, the shortest timescale upon which significant changes can take place across the source is the timescale for light (the fastest possible means of communication) to travel across or around it. Because the material close to the black hole is moving relativistically, we do expect things to vary on these kinds of timescales. The photon sphere of a black hole is approximately $6GM/c^2$ across, meaning a shortest timescale of variability is about $6GM/c^3$. In more obvious units: $$ \tau \sim 30 \left(\frac{M}{10^6 M_{\odot}}\right)\ \ {\rm seconds}.$$ i.e. We might expect variability in the image on timescales of 30 seconds multiplied by the black hole mass in units of millions of solar masses. This is 2 minutes for Sgr A* and a much longer 2.25 days for the M87 black hole.
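
Plugging in numbers (a quick sketch of the arithmetic above; the masses used are the commonly quoted values):

```python
# Quick arithmetic for the variability estimate tau ~ 6GM/c^3.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def tau_seconds(M_in_solar_masses):
    return 6 * G * M_in_solar_masses * M_sun / c**3

print(tau_seconds(4.1e6) / 60)       # Sgr A*: ~2 minutes
print(tau_seconds(6.5e9) / 86400)    # M87*:  ~2 days
```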


si units - Is anything actually 1 meter long (or 1kg of weight)?


I believe that no real objects are actually (exactly) 1 meter long, since for something to be 1.00000000... meters long, we would have to have the ability to measure with infinite precision. Obviously, this can be extended to any units of measurement. Am I wrong?



Answer



You're not wrong. However, there used to be an object exactly $1$ meter long by definition until 1960, because the meter was defined to be the length of a certain platinum-iridium rod under certain conditions. Since then the meter has been redefined, first (in 1960) in terms of the wavelength of a krypton-86 spectral line measured interferometrically, and since 1983 specifically as the distance traversed by light in vacuum within a certain period of time.


Similarly, the kilogram long had a prototype whose mass was $1$ kg by definition; since the 2019 redefinition of the SI base units, the kilogram is instead defined by fixing the value of the Planck constant.



Friday, 28 August 2020

electrostatics - Is potential difference or potential used in defining capacitance?


In my textbook I came across the capacitance of a single body (i.e. one sphere, not two different spheres as in a spherical capacitor), given by the formula


$$Q = CV$$


where $V$ is the potential of the body with respect to the Earth. Now in a parallel plate capacitor, why do we choose the potential difference and not the potential of a single plate to the Earth?



Answer



$V$ in either situation is a potential difference, but in the case of an isolated sphere, as written in Halliday/Resnick (Indian edition):



We can assign a capacitance to a single isolated spherical conductor of radius $R$ by assuming that the "missing plate" is a conducting sphere of infinite radius.




So the potential of the single sphere is really a potential difference (between the sphere and infinity).
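
To make this concrete: in that infinite-outer-sphere limit the capacitance of an isolated sphere of radius $R$ is $C = 4\pi\epsilon_0 R$. A one-line numeric illustration (my own sketch, using Earth's radius):

```python
# C = 4*pi*eps0*R for an isolated sphere: the R_outer -> infinity limit
# of a spherical capacitor.
import math

eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
R_earth = 6.371e6          # m
print(4 * math.pi * eps0 * R_earth)   # ~7.1e-4 F: even Earth is under 1 mF
```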


homework and exercises - Transformation of the energy-momentum tensor under conformal transformations


I am reading the yellow book of Di Francesco about conformal field theory, and there is a step that he takes that I cannot follow while deriving the transformation law of the energy-momentum tensor under conformal transformations (eq.(5.136)). The free boson energy momentum tensor is given by:


$$T(z) = -2\pi g \lim\limits_{\delta \to 0} \left(\partial \phi\left(z+\frac{\delta}{2} \right) \partial \phi\left(z-\frac{\delta}{2} \right) + \frac{1}{4\pi g \delta^2} \right) \tag{1}$$


The field derivative transforms as follows:


$$\partial_z \phi(z) = \frac{\partial w}{\partial z} \partial_w \phi'(w) \tag{2}$$


Inserting eq. (2) in eq. (1) results in:


\begin{align} T(z) & = \left(\frac{\partial w}{\partial z} \right)^2 T'(w) + \frac{1}{2} \lim\limits_{\delta \to 0} \left( \frac{w^{(1)}(z+\delta/2)\ w^{(1)}(z-\delta/2)}{(w(z+\delta/2)-w(z-\delta/2))^2} - \frac{1}{\delta^2} \right) \tag{3} \\ & = \left(\frac{\partial w}{\partial z} \right)^2 T'(w) + \frac{1}{12} \left(\frac{w^{(3)}}{w^{(1)}} - \frac{3}{2} \left( \frac{w^{(2)}}{w^{(1)}} \right)^2 \right) \tag{4} \end{align}


where $w^{(n)}$ refers to the n-th derivative, and where I skipped the first steps of the calculation. Now my problem is: how do you get from line (3) to line (4)? I tried expanding, but I cannot reproduce the result with the higher order derivatives.


Thank you very much in advance.




Answer



The Quick Answer


The big yellow book (namely Di Francesco et al) that the OP quotes, largely obscures the distinction between what I call (b) and (c) below. If the OP is just interested in deriving the result in the fastest way he can Taylor-expand in $\delta$ the quantities $w(z+\delta/2)$, etc., and take the limit $\delta\rightarrow 0$. E.g., $$ w(z+\delta/2)\simeq w(z)+\frac{\delta}{2}\partial_zw(z)+\frac{1}{2!}\Big(\frac{\delta}{2}\Big)^2\partial_z^2w(z)+\frac{1}{3!}\Big(\frac{\delta}{2}\Big)^3\partial_z^3w(z)+\dots $$ $$ \partial_zw(z+\delta/2)\simeq \partial_zw(z)+\frac{\delta}{2}\partial_z^2w(z)+\frac{1}{2!}\Big(\frac{\delta}{2}\Big)^2\partial_z^3w(z)+\dots $$ The higher order terms do not contribute. Using messy but straight-forward algebraic manipulations one indeed finds that equation (3) implies (4) in the OP's question. For example, since the denominator in (3) seems to have been causing the OP some trouble I'll also note that (from the above Taylor expansion it follows that): $$ \big(w(z+\delta/2)-w(z-\delta/2)\big)^2=\big(\partial_zw(z)\big)^2\delta^2+\frac{1}{12}\big( \partial_z^3w\,\partial_zw(z)\big)\delta^4+\mathcal{O}(\delta^6), $$ so the inverse is then, $$ \frac{1}{[w(z+\delta/2)-w(z-\delta/2)]^2}=\frac{1}{\delta^2}\frac{1}{(\partial_zw(z))^2}-\frac{1}{12}\frac{\partial_z^3w(z)}{(\partial_zw(z))^3}+\mathcal{O}(\delta^2). $$ Making use of the above Taylor expansions for the numerator, subtracting $1/\delta^2$ from the result and multiplying by a factor $1/2$ yields precisely (4).
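
If you would rather not push the algebra through by hand, the expansion can be checked symbolically. The sketch below is my own setup (not from the book): $w_1,\dots,w_4$ stand for the first four derivatives of $w$ at $z$, and sympy confirms that the $\delta\rightarrow 0$ limit of eq. (3) produces exactly the Schwarzian term of eq. (4):

```python
# Symbolic check of eq. (3) -> eq. (4) using Taylor polynomials.
import sympy as sp

d = sp.symbols('delta')
w1, w2, w3, w4 = sp.symbols('w1 w2 w3 w4')

def dw(x):      # w(z + x) - w(z), Taylor polynomial to fourth order
    return w1*x + w2*x**2/2 + w3*x**3/6 + w4*x**4/24

def wp(x):      # w'(z + x), Taylor polynomial to third order
    return w1 + w2*x + w3*x**2/2 + w4*x**3/6

num = wp(d/2) * wp(-d/2)                    # w'(z+d/2) * w'(z-d/2)
den = (dw(d/2) - dw(-d/2))**2               # (w(z+d/2) - w(z-d/2))^2
expr = sp.Rational(1, 2) * (num/den - 1/d**2)

limit = sp.limit(expr, d, 0)
schwarzian = sp.Rational(1, 12)*(w3/w1 - sp.Rational(3, 2)*(w2/w1)**2)
print(sp.simplify(limit - schwarzian))      # prints 0
```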


However, I don't believe this is the right way to think about it: this derivation may be fast but it also hides many subtleties under the rug, so that in fact one has learnt very little (if not a negative amount) by following the big yellow book's derivation.


So I want to rather discuss a much more pedagogical (but also longer) derivation: in what follows we show how to derive an explicit expression for a normal ordered operator under any holomorphic change of coordinates in detail. (Hopefully, future readers interested in related questions will also benefit.) We take the energy-momentum tensor as our basic example.






The Long Answer


The OP asks to show that the energy-momentum tensor, $T(z)$, of a free scalar, $\phi(z)$, in 2 dimensions transforms with the funny Schwarzian derivative term under a conformal change of coordinates, $z\rightarrow w(z)$, $$ \boxed{(\partial_{z_2}w_2)^2T^{'(w)}(w_2) = T^{(z)}(z_2)-\frac{1}{12}\Big[\frac{\partial_{z_2}^3w_2}{\partial_{z_2}w_2}-\frac{3}{2}\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^2\Big]\,\,} $$ taking as a starting point the defining equation for the normal ordered energy-momentum tensor for a free scalar which in my conventions reads: $$ T^{(z)}(z_2) = \lim_{z_1\rightarrow z_2}-\frac{1}{2}\!:\!\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)\!:_z $$ $$ T^{'(w)}(w_2) = \lim_{w_1\rightarrow w_2}-\frac{1}{2}\!:\!\partial_{w_1}\phi'(w_1)\partial_{w_2}\phi'(w_2)\!:_w $$ I have deliberately cluttered the notation slightly (included some superscripts on $T$ and corresponding subscripts on $:\!(\dots)\!:$ and wrote $z_2,w_2$ rather than $z,w$ etc.) for reasons that will become clear momentarily. (In fact, this notation exposes the precise data on which these operators depend and will therefore allow us to track how these objects change as we change this data, one piece at a time. E.g., dropping the superscript from the energy-momentum tensor makes it impossible to distinguish between the quantities: $$ T^{(z)}(z_2), \qquad\longleftrightarrow\qquad T^{(w)}(z_2), $$ but this distinction will in turn play a crucial role below since it corresponds to changing normal ordering keeping fixed the coordinates - this is where the Schwarzian derivative makes its appearance. We can conversely also change coordinates keeping fixed the normal ordering, $$ T^{(w)}(z_2), \qquad\longleftrightarrow\qquad T^{'(w)}(w_2), $$ and this corresponds to the classical or ordinary change of coordinates (where one transforms coordinates assuming the object transforms as a conformal tensor) that is also used in the path integral. Incidentally, from an honest path integral viewpoint these points are manifest, and this is why people say the 'path integral is useful primarily because it provides the useful understanding/intuition', but I won't elaborate on this connection further. But let's go through the reasoning slowly and carefully.)


We will break down the computation into three independent steps:



(a) Normal ordering


(b) Change of normal ordering keeping coordinates fixed


(c) Change of coordinates keeping normal ordering fixed


We will proceed by exposing these three steps, (a), (b) and (c), (one at a time and in this order). Then, to change coordinates in any given normal ordered operator is to derive the map associated to the following composition: $$ {\bf (c)}\circ{\bf (b)}\circ{\bf (a)}:\mathcal{O}(\phi)\,\longrightarrow \,\,?? $$ and when, e.g., $\mathcal{O}(\phi)$ is identified with the (non-normal-ordered) energy-momentum tensor then the "codomain" of this map will correspond to the coordinate-transformed normal-ordered energy-momentum tensor (given in terms of the Schwarzian derivative term above).


Let me add that the OP's question is a good question, in that I'm not even aware of a transparent and explicit derivation along these lines in the literature (but that doesn't mean it doesn't exist, somewhere..). The only paper that I know of that really exposes these issues is a paper by Polchinski (from 1987) on vertex operators, but there are intermediate steps between that paper and what follows that I'm not including here. Finally, I will focus on $c=1$ bulk scalars, $\phi(z)$, the generalisation to tensors (Grassmann-even or odd ghosts, matter fermions, etc.) and boundary operators being similar. For tensors the change of normal ordering with fixed coordinates then acquires an additional factor in the propagator but is otherwise entirely parallel.


We must first understand what it means to normal order an operator. We will use the path integral definition (although this is implicit).




(a) Normal Ordering


A normal ordering prescription is a prescription for subtracting infinities arising from self contractions within a (possibly composite) operator. In a free theory, such as the case of interest here, Wick's theorem gives all the self contractions and we therefore have succinctly that:$^*$ $$ \boxed{:\mathcal{O}(\phi)\!:_z \,\,= \mathcal{O}(\delta_J)\,\exp\Big(-\frac{1}{2}\int_{z'}\int_z\,J(z')J(z)\,G(z',z)+\int_z J(z)\phi(z)\Big)\Bigg|_{J=0}\,\,} $$ where $G(z',z)=\langle\phi(z')\phi(z)\rangle$ is the free propagator used in the $z$ normal ordering, e.g., it will be sufficient to consider the standard expression for scalars: $$ G(z',z) = -\ln |z'-z|^2, $$ where (to justify the name '$z$ normal ordering') by $z,z'$ we implicitly mean here $z(p),z(p')$, where $p,p'$ are points on the surface, so $z$ is really a holomorphic chart coordinate.$^{**}$ Note also that I'm using a more traditional normalisation than the OP (obtained by taking $g=1/(4\pi)$). The integration measures, e.g. $d^2z$, are implicit in the above boxed expression (and we could more fully write $J(z,\bar{z})$ instead of $J(z)$, etc.).$^{***}$


The quantity $\mathcal{O}(\phi)$ is any (typically infinite if the elementary constituents are evaluated at coincident points) operator of interest, such as: $$ \mathcal{O}(\phi) = \lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)\Big], $$ where we will take $z_1,z_2$ to be coordinate points specified in the $z$ coordinate system, e.g., $z_1\equiv z(p_1)$, where $p_1$ is a marked point on the surface. Let us check that the boxed equation makes sense, \begin{equation} \begin{aligned} T^{(z)}(z_2)&\equiv\,:\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)\Big]\!:_z \\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\frac{\delta}{\delta J(z_1)}\partial_{z_2}\frac{\delta}{\delta J(z_2)}\Big]\,\exp\Big(-\frac{1}{2}\int_{z'}\int_z\,J(z')J(z)\,G(z',z)+\int_z J(z)\phi(z)\Big)\Bigg|_{J=0}\\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\frac{\delta}{\delta J(z_1)}\partial_{z_2}\Big]\,\Big(-\int_{z'}\,J(z')\,G(z',z_2)+\phi(z_2)\Big)\\ &\qquad\qquad\times\exp\Big(-\frac{1}{2}\int_{z'}\int_z\,J(z')J(z)\,G(z',z)+\int_z J(z)\phi(z)\Big)\Bigg|_{J=0}\\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\partial_{z_2}\Big]\,\Big(\phi(z_1)\phi(z_2)-G(z_1,z_2)\Big)\\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\,\Big(\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)-\partial_{z_1}\partial_{z_2}G(z_1,z_2)\Big)\Big]\\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\,\Big(\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)+\frac{1}{z_{12}^2}\Big)\Big]\\ \end{aligned} \end{equation} where in the second equality we used the boxed equation above, in the third we carried out one of the two functional derivatives using the defining property, $$ \int_z \frac{\delta J(z)}{\delta J(z_2)}f(z)=\int_z \delta^2(z-z_2)f(z)=f(z_2), $$ in the fourth equality we carried out the remaining functional derivative and set $J=0$, and in the sixth we made use of the definition of $G(z',z)$ above (with $z_{12}\equiv z_1-z_2$).



So this defines what we mean by 'the energy momentum for a scalar in the $z$ normal ordering'.


Incidentally, inside the normal ordering we can freely take the limit as it is non-singular, $$ :\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)\Big]\!:_z\,=\,:\!\Big[-\frac{1}{2}\partial_{z_2}\phi(z_2)\partial_{z_2}\phi(z_2)\Big]\!:_z\,. $$




$^*$ If you are curious and happen not to know that this is equivalent to Joe Polchinski's conformal normal ordering definition, namely (2.2.7) in his volume 1 (or his vertex operator paper where he introduced it), the hint is on p.152 in Coleman's book 'Aspects of Symmetry'. (As a historical note, Joe once mentioned that he learnt all about normal ordering in 2-d quantum field theories from Coleman's lectures.)


$^{**}$ To avoid confusion let me be pedantic and mention that the integrals over $z,z'$ integrate over images of all points $p,p'$ in the manifold using $z$ chart coordinates, rather than integrating over all chart coordinates for fixed $p,p'$! (Had I not made the notation so explicit it would most likely not have been exposed how subtle but sharp all of these steps actually are; and there's more I'm not even mentioning for the sake of "brevity", otherwise this post would turn into a book..)


$^{***}$ The boxed equation above that defines normal ordering is actually a "baby version" of equation (3.1) in this paper; the latter provides the natural generalisation of the notion of normal ordering to interacting theories where it is termed complete normal ordering. For free theories (the case of interest here) the two notions are indistinguishable.




(b) Change of Normal Ordering (keeping coordinates fixed)


Very generally, we obtain different normal ordering prescriptions by replacing $G(z',z)$ in the above boxed equation by $G(z',z)+\Delta(z',z)$. We want to do something more specific here, namely we want to go through the exact same computation as we did above but in (what we will call) the '$w$ normal ordering'. We define the latter to be related to the $z$ normal ordering by a conformal transformation, $z\rightarrow w(z)$, by which we mean precisely the following: we are to simply$^{****}$ replace $G(z',z)$ by $G(w(z'),w(z))$ on the right-hand side in the above boxed equation keeping everything else fixed, $$ \boxed{:\mathcal{O}(\phi)\!:_w \,\,= \mathcal{O}(\delta_J)\,\exp\Big(-\frac{1}{2}\int_{z'}\int_z\,J(z')J(z)\,G(w(z'),w(z))+\int_z J(z)\phi(z)\Big)\Bigg|_{J=0}\,\,} $$ The subscript $w$ on the left-hand side is the reminder that this is $w$ normal ordering and the corresponding $w$ dependence on the right-hand side is entirely explicit (and contained solely in $G(w(z'),w(z))$). This is the definition of '$w$ normal ordering'. Notice that it is defined with respect to the reference/auxiliary '$z$ normal ordering'. (Clearly, we can similarly define a, say, '$u$ normal ordering' in precisely the same way, namely we simply replace $w$ by $u$, and that also will then be defined with respect to the reference '$z$ normal ordering', or we can consider $w(u(z))$ normal ordering, etc.., depending on context.)


Let us apply $w$ normal ordering to the case of interest, \begin{equation} \begin{aligned} T^{(w)}(z_2)&\equiv\,:\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)\Big]\!:_w \\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\frac{\delta}{\delta J(z_1)}\partial_{z_2}\frac{\delta}{\delta J(z_2)}\Big]\,\exp\Big(-\frac{1}{2}\int_{z'}\int_z\,J(z')J(z)\,G(w(z'),w(z))+\int_z J(z)\phi(z)\Big)\Bigg|_{J=0}\\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\frac{\delta}{\delta J(z_1)}\partial_{z_2}\Big]\,\Big(-\int_{z'}\,J(z')\,G(w(z'),w(z_2))+\phi(z_2)\Big)\\ &\qquad\qquad\times\exp\Big(-\frac{1}{2}\int_{z'}\int_z\,J(z')J(z)\,G(w(z'),w(z))+\int_z J(z)\phi(z)\Big)\Bigg|_{J=0}\\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\partial_{z_2}\Big]\,\Big(\phi(z_1)\phi(z_2)-G(w(z_1),w(z_2))\Big)\\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\,\Big(\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)-\partial_{z_1}\partial_{z_2}G(w(z_1),w(z_2))\Big)\Big], \end{aligned} \end{equation} the steps being identical to the above. We next consider the last term in detail. We are interested in the limit $z_1\rightarrow z_2$. Since $w(z_1)$ is by definition a holomorphic function of $z_1$ this means we can Taylor expand it around $z_2$ in $G(w(z_1),w(z_2))$, \begin{equation} \begin{aligned} G(w(z_1),w(z_2))&=-\ln\big|w(z_1)-w(z_2)\big|^2\\ &=-\ln\Big|\sum_{n=0}^{\infty}\frac{1}{n!}z_{12}^n\partial_{z_2}^nw(z_2)-w(z_2)\Big|^2\\ &=-\ln\Big|\sum_{n=1}^{\infty}\frac{1}{n!}z_{12}^n\partial_{z_2}^nw(z_2)\Big|^2\\ &=-\ln\Big|z_{12}\sum_{n=1}^{\infty}\frac{1}{n!}z_{12}^{n-1}\partial_{z_2}^{n}w(z_2)\Big|^2\\ &=G(z_1,z_2)-\ln\Big|\sum_{n=1}^{\infty}\frac{1}{n!}z_{12}^{n-1}\partial_{z_2}^{n}w(z_2)\Big|^2\\ \end{aligned} \end{equation} Now I will leave the following as a fun



EXERCISE: Let us write $w_1\equiv w(z_1)$ and $w_2\equiv w(z_2)$. Show that for $|z_{12}|=|z_1-z_2|$ small: $$ \partial_{z_1}\partial_{z_2}\ln\Big|\sum_{n=1}^{\infty}\frac{1}{n!}z_{12}^{n-1}\partial_{z_2}^{n}w(z_2)\Big|^2=\frac{2}{12}\bigg[\frac{\partial_{z_2}^3w_2}{\partial_{z_2}w_2}-\frac{3}{2}\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^2\bigg]+\frac{1}{12}\bigg[3\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^3+\frac{\partial_{z_2}^4w_2}{\partial_{z_2}w_2}-4\frac{\partial_{z_2}^3w_2\,\partial_{z_2}^2w_2}{(\partial_{z_2}w_2)^2}\bigg]\,z_{12}+\mathcal{O}(z_{12}^2). $$ This follows directly by using the chain rule, taking into account that only the $z_{12}^{n-1}$ terms depend on $z_1$ and that both $z_{12}^{n-1}$ and $\partial_{z_2}^{n}w(z_2)$ depend on $z_2$. Since only the $z_{12}\rightarrow 0$ limit is of interest we can drop all terms on the right-hand side that vanish in this limit.


Substituting the result of this exercise into the above we learn that: $$ \boxed{\lim_{z_1\rightarrow z_2}\partial_{z_1}\partial_{z_2}G(w(z_1),w(z_2))=\lim_{z_1\rightarrow z_2}\partial_{z_1}\partial_{z_2}G(z_1,z_2)-\frac{2}{12}\bigg[\frac{\partial_{z_2}^3w_2}{\partial_{z_2}w_2}-\frac{3}{2}\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^2\bigg]} $$ Let us in turn substitute this into the above expression for $T^{(w)}(z_2)$, \begin{equation} \begin{aligned} T^{(w)}(z_2)&\equiv\,:\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)\Big]\!:_w \\ &=\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\,\Big(\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)-\partial_{z_1}\partial_{z_2}G(w(z_1),w(z_2))\Big)\Big]\\ &=\lim_{z_1\rightarrow z_2}\bigg\{-\frac{1}{2}\,\Big(\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)-\partial_{z_1}\partial_{z_2}G(z_1,z_2)+\frac{2}{12}\Big[\frac{\partial_{z_2}^3w_2}{\partial_{z_2}w_2}-\frac{3}{2}\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^2\Big]\Big)\bigg\}\\ &=\lim_{z_1\rightarrow z_2}\bigg\{-\frac{1}{2}\,\Big(\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)-\partial_{z_1}\partial_{z_2}G(z_1,z_2)\Big)\bigg\}-\frac{1}{12}\Big[\frac{\partial_{z_2}^3w_2}{\partial_{z_2}w_2}-\frac{3}{2}\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^2\Big]\\ &=:\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)\Big]\!:_z-\frac{1}{12}\Big[\frac{\partial_{z_2}^3w_2}{\partial_{z_2}w_2}-\frac{3}{2}\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^2\Big]\\ &=T^{(z)}(z_2)-\frac{1}{12}\Big[\frac{\partial_{z_2}^3w_2}{\partial_{z_2}w_2}-\frac{3}{2}\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^2\Big]\\ \end{aligned} \end{equation} where we noted in the last two lines that: \begin{equation} \begin{aligned} T^{(z)}(z_2)&\equiv \,:\lim_{z_1\rightarrow z_2}\Big[-\frac{1}{2}\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)\Big]\!:_z\\ &= \lim_{z_1\rightarrow z_2}\bigg\{-\frac{1}{2}\,\Big(\partial_{z_1}\phi(z_1)\partial_{z_2}\phi(z_2)-\partial_{z_1}\partial_{z_2}G(z_1,z_2)\Big)\bigg\} \end{aligned} \end{equation} as shown above.


So we learn that a finite holomorphic change in normal ordering, $z\rightarrow w(z)$, with fixed coordinates, $z_2$, of the energy-momentum tensor is given by: $$ \boxed{T^{(w)}(z_2)=T^{(z)}(z_2)-\frac{1}{12}\Big[\frac{\partial_{z_2}^3w_2}{\partial_{z_2}w_2}-\frac{3}{2}\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^2\Big]}\qquad\qquad (*) $$ Notice that we have not actually changed the coordinates to derive the Schwarzian derivative! Evidently, the entire content of the Schwarzian derivative lives entirely in the change of normal ordering of the energy-momentum tensor keeping the coordinates fixed.




$^{****}$ As mentioned above, this procedure is as simple as stated in the case of scalars; it is slightly more complicated for ghosts and matter fermions or tensors more generally.




(c) Change of Coordinates (keeping normal ordering fixed)


All that remains is to change coordinates, $z_2\rightarrow w_2\equiv w(z_2)$. Since $\phi(z_2)$ transforms as a scalar and its derivative as a weight-1 operator we have that, $$ \partial_{z_2}\phi(z_2)dz_2 = \partial_{w_2}\phi'(w_2)dw_2. $$ Furthermore, since we have treated the change in normal ordering separately from the change of coordinates we can now perform the change of coordinates just as we would do naively, and I want to emphasise the following statement (which follows from the defining equation of normal ordering above) as strongly as possible:


$T^{(w)}(z_2)$ does transform as a (weight-2) holomorphic tensor under a holomorphic change of coordinates provided we keep the normal ordering fixed: $$ T^{(w)}(z_2)dz_2^2 = T^{'(w)}(w_2)dw_2^2\qquad\Rightarrow\qquad \boxed{T^{(w)}(z_2) = (\partial_{z_2}w_2)^2T^{'(w)}(w_2)} $$ independently of the fact that the central charge of a free scalar $c=1$. So you see why I insisted on using the cluttered notation above. Omitting the normal ordering symbol, $(w)$, from $T^{(w)}(z_2)$ clearly obscures the meaning of this local operator, while also leading to the perception that the Schwarzian derivative is somehow generated by a change of coordinates - as we have just seen, it is the change in normal ordering that is doing all of the magic. Unfortunately, almost all of the CFT literature (as do I most of the times) drops the normal ordering from the notation causing all sorts of unnecessary confusion.





Summarising


The final step of the derivation is trivial, we simply gather what we have learnt. In particular, we substitute the relation we have just derived into (*), leading precisely to the final answer for the transformation of the energy-momentum tensor under a change of coordinates (with a corresponding change of normal ordering), $$ \boxed{(\partial_{z_2}w_2)^2T^{'(w)}(w_2) =T^{(z)}(z_2)-\frac{1}{12}\Big[\frac{\partial_{z_2}^3w_2}{\partial_{z_2}w_2}-\frac{3}{2}\Big(\frac{\partial_{z_2}^2w_2}{\partial_{z_2}w_2}\Big)^2\Big]} $$ Note also that using the OPE with the energy momentum tensor generates the infinitesimal version of this last relation: it automatically merges steps (b) and (c) above.


Thursday, 27 August 2020

quantum field theory - Proof of Spin-statistics theorem



Is this proof of the spin-statistics theorem correct?


http://bolvan.ph.utexas.edu/~vadim/classes/2008f.homeworks/spinstat.pdf


This proof is probably a simplified version of Weinberg's proof. What is the difference?


What is the physical meaning of the non-hermitian operators $J^{+}$ and $J^{-}$?



I'm especially interested in the beginning of the proof of the second lemma. How does one get this: \begin{eqnarray} F_{AB}(-p^{\mu}) = F_{AB}(p^{\mu})\times (-1)^{2j_{A}^{+}} (-1)^{2j_{B}^{+}} \\ \nonumber H_{AB}(-p^{\mu}) = H_{AB}(p^{\mu})\times (-1)^{2j_{A}^{+}} (-1)^{2j_{B}^{+}} \end{eqnarray}


Also, why does the field transform under CPT as \begin{eqnarray} \phi_{A}(x)\rightarrow \phi_{A}^{\dagger}(-x) \times (-1)^{2J_{A}^{-}} \\ \nonumber \phi_{A}^{\dagger}(x) \rightarrow \phi_{A}(-x) \times (-1)^{2J_{A}^{+}} \end{eqnarray} The conjugation comes from charge reversal, and the $-x$ from space inversion and time reversal. But what about the factor $(-1)^{2J_{A}^{-}}$?


Where can I find similar proofs?




electromagnetism - Why does electricity need wires to flow?


If you drop a really heavy ball the ball's gravitational potential energy will turn into kinetic energy.


If you place the same ball in the pool, the ball will still fall. A lot of kinetic energy will turn into thermal energy because of friction, but the gravitational potential energy will still be converted.


Similarly, why doesn't electricity flow without a good conductor? Why won't electrons flow from the negative terminal to the positive terminal without a wire attaching them?


Electricity flows like a wave, and metals have free electrons in the electron cloud that allow the wave to propagate, or spread. But when these free electrons aren't available to propagate the wave, why don't the electrons just "move" like the ball? Why don't the electrons just "move" through the air to the positive terminal?


A slow drift speed means that the electrons will most likely take a long time to propagate the wave of electricity, but they should still get there.




Answer



To continue with your ball analogy, think of the ball as analogous to the electron. Now what if the ball were attached to a point by a spring? Would it still fall? It can oscillate about that point, but it would not be able to escape the restraining effect of the spring entirely. The same is the case with bound electrons: they are more or less bound to the atom. If the gravitational field is very strong, it may be able to break the spring and rip the ball out of it. This happens sometimes in electricity too. In a lightning discharge, the electric field is so high that even the bound electrons are ripped out of their atoms, ionizing the gas and creating what is known as plasma. With a pool of free electrons and positive ions available, electric current can now flow freely through the plasma - you wouldn't need wires. But unless you have a high enough electric field to produce ionization (for ionization of air the field required is close to $10^6\,V/m$ - such high fields cannot be produced by the 100-250 V household voltages available in most countries), you have to use wires made of conducting material, where free electrons are readily available, if you want electric conduction at normal voltages.


quantum mechanics - How is quantization related to commutation?



How are commutation (of observables) and quantization related? Reading about the Stone-von Neumann theorem, it seems that commutativity is the classical limit of quantum mechanics, and hence non-quantization, but I don't understand the intuition behind the fact that non-commutativity of operators should imply any kind of quantization.


In a theory involving 'quantized' quantities, such as quantum mechanics, why does commutation of operators (observables) suddenly become an important topic? Why does quantization come hand in hand with the uncertainty principle?





thermodynamics - Deriving Enthalpy from Statistical Mechanics


One can derive all the numerous thermodynamic potentials (Helmholtz, Gibbs, Grand, Enthalpy) by Legendre transformations, but I'm interested in seeing each from Stat Mech instead (ie taking the log of a partition function). I can do this easily for all but the Enthalpy, which has me stumped.


Easy example: Deriving Helmholtz


The Helmholtz energy can be derived by putting our system $S$ in thermal contact with a heat bath $B$ at temperature $T$ and maximizing the total entropy:


$$S_\mathrm{tot}/k_b=\log\sum_{E_S}\Omega_B(E_\mathrm{tot}-E_S)\Omega_S(E_S)$$


If the heat bath is big enough to maintain constant temperature $1/T=\left(\frac{dS_B}{dE_B}\right)_{V_B}$, then we can say $S_B\approx -E_S/T+S_0$ for small $E_S$. Then $\Omega_B\approx \Omega_0 e^{-\beta E_S}$ where $S_0=k_b\log\Omega_0$, $\beta=1/k_bT$, so


$$S_\mathrm{tot}/k_b=\log\sum_{E_S}\Omega_0e^{-\beta E_S}\Omega_S(E_S)=S_0/k_b+\log Z_S$$ where $Z_S=\sum_{E_S}e^{-\beta E_S}\Omega_S(E_S)$. So maximizing the total entropy is just maximizing $Z_S$. If we define the Helmholtz Free Energy $A_S$ as $-\beta A_S=\log Z_S$, and use the Shannon entropy $S/k_b=-\sum p\log p $, we see



$$S_S/k_b=-\sum \frac{\Omega_S(E_S)e^{-\beta E_S}}{Z}\log \frac{\Omega_S(E_S)e^{-\beta E_S}}{Z}$$ $$S_S/k_b=\frac{\beta}{Z}\sum \Omega_S(E_S)E_S e^{-\beta E_S} + \log Z$$ $$S_S/k_b=-\frac{\beta}{Z}\partial_\beta Z + \log Z$$ $$S_S/k_b=\beta \langle E_S\rangle -\beta A_S$$ $$A_S=\langle E_S\rangle -TS_S$$
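
As a quick numerical sanity check of $A_S = \langle E_S\rangle - TS_S$ (my own sketch, not part of the derivation), consider a toy two-level system:

```python
# Toy check of A = <E> - T*S for a two-level system with energies 0, eps.
import math

eps, T, k_b = 1.0, 0.7, 1.0
beta = 1.0 / (k_b * T)

Z = 1.0 + math.exp(-beta*eps)
p = [1.0/Z, math.exp(-beta*eps)/Z]           # Boltzmann probabilities
E_avg = p[1] * eps
S = -k_b * sum(q * math.log(q) for q in p)   # Shannon entropy
A = -k_b * T * math.log(Z)

print(A, E_avg - T*S)   # the two numbers agree
```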


The other thermodynamic potentials at fixed $T$ are similarly easy to derive.


But now try deriving Enthalpy


The same procedure does not work for enthalpy, because,


$$S_\mathrm{tot}/k_b=\log\sum_{V_S}\Omega_B(V_\mathrm{tot}-V_S,E_{B0}+pV_S)\Omega_S(V_S,E_{S0}-pV_S)$$


...if the bath is big enough to maintain a constant temperature, then its total entropy is constant as a function of $V_S$. That is, if $V_S$ increases, the bath entropy decreases due to the lost volume, but increases by the same amount due to the energy $pV_S$ it gains. So, to first order, $\Omega_B$ is constant and the total entropy splits into a sum of the two individual subsystem entropies.


Is there a way to derive the enthalpy from stat mech considerations, as there is for the other potentials, rather than by Legendre transforming the energy?


By "derive the enthalpy", I mean "derive that the quantity which should be minimized in equilibrium is given by $H=\langle E \rangle+p\langle V \rangle$."




astronomy - What is the probability that a star of a given spectral type will have planets?


There is a lot of new data from the various extrasolar planet projects including NASA's Kepler mission on extra-solar planets. Based on our current data what is the probability that a star of each of the main spectral types (O, B, A, etc) will have a planetary system?




Wednesday, 26 August 2020

quantum mechanics - Obtaining propagating solutions for Schrodinger equation from known bound states (in 2 and 3 dimensions)?


If I have found all the bound states for a certain potential in 2 or 3 dimensions (numerically), can I immediately obtain some information about the propagating solutions for the same potential (such as transmission and reflection coefficients in certain directions, or the scattering angle distribution)? I will gladly accept an answer for the 1-dimensional problem as well.


The thing is, the numerical approach I use is very convenient, but only for the bound states, since it uses a wave function expansion in the basis of eigenfunctions of a quantum box or harmonic oscillator. This means that while I can easily obtain all the bound states with arbitrary accuracy, I can't solve for the propagating states. Or can I?


If you know some good approach that is useful mainly for finding the propagating solutions numerically, I would be grateful too.




Edit based on the comment below.


For clarification, the method I use to find the energy levels and wavefunctions of the bound states can be summarized as follows:


We expand the wavefunction using a complete orthonormal basis of known solutions (a quantum box, for example; any other basis can be used as well).


$$ \Psi( \vec{r} )=\sum_{j,k,l}^\infty C_{jkl} \psi_{jkl}( \vec{r} ) $$


We substitute this expansion into the Schrödinger equation, calculate all the matrix elements, and then numerically solve the resulting matrix equation for $\{C_{jkl}\}$ and the corresponding eigenenergies $E_{jkl}$, using some finite number of basis functions $N$.



This method was first proposed in 1988 for nanowires and is very useful because there is no need to explicitly define the boundary conditions (they are incorporated during the calculation of the matrix elements $\langle\psi_{JKL}| U(\vec{r})|\psi_{jkl}\rangle$); it gives all energy levels and normalized wavefunctions, and it can be used for almost any potential in any number of dimensions.
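
To illustrate the idea, here is a minimal 1-D sketch of my own (not the referenced 1988 implementation; the square-well potential and all parameters are arbitrary choices): expand in particle-in-a-box eigenfunctions, build the Hamiltonian matrix, and diagonalize.

```python
# Minimal 1-D illustration of the basis-expansion method described above.
# Basis: particle-in-a-box eigenfunctions on [0, L]; potential: an
# illustrative finite square well. Natural units hbar = m = 1.
import numpy as np

hbar = m = 1.0
L, N = 10.0, 100                     # box size, number of basis functions
xs, dx = np.linspace(0, L, 2001, retstep=True)

def basis(n, x):                     # box eigenfunction psi_n
    return np.sqrt(2/L) * np.sin(n*np.pi*x/L)

U = np.where(np.abs(xs - L/2) < 1.0, -5.0, 0.0)   # square well, depth 5

# Kinetic energy is diagonal in this basis; add the potential matrix
# elements <psi_i | U | psi_j>, computed by simple numerical quadrature.
H = np.diag([(hbar*np.pi*n/L)**2/(2*m) for n in range(1, N+1)])
for i in range(1, N+1):
    bi = basis(i, xs)
    for j in range(i, N+1):
        Uij = np.sum(bi * U * basis(j, xs)) * dx
        H[i-1, j-1] += Uij
        H[j-1, i-1] = H[i-1, j-1]

E, C = np.linalg.eigh(H)             # eigenenergies, expansion coefficients
print("lowest levels:", E[:3])       # negative values ~ bound states
```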




But it seems to me that I can't use it to find the propagating wavefunctions for $E>0$, because the system is effectively still bound: since the expansion is finite, the basis potential well kind of 'surrounds' the electron even when its energy is positive.


So, do I need to use a completely different method to calculate (for example) the scattering of electrons by the layer of quantum dots and such?




general relativity - Is it mathematically possible or topologically allowable for cutouts, or cavities, to exist in a 3-manifold?


A few weeks back, I posted a related question, Could metric expansion create holes, or cavities in the fabric of spacetime?, asking if metric stretching could create cutouts in the spacetime manifold. The responses involved a number of issues like ambient dimensions, changes in coordinate systems, intrinsic curvature, intrinsic mass of the spacetime manifold and the inviolability of the manifold. I appreciated the comments but, being somewhat familiar with the various issues, I felt that the question didn't get a very definitive answer.



So, if I may, I would like to ask what I hope to be a more focused question; a question about the topology of 3-manifolds in general. Are cutouts or cavities allowed in a 3-manifold or are these manifolds somehow sacrosanct in general and not allowed to be broken?


As I noted in the previous discussion, G. Perelman explored singularities in unbounded 3-manifolds and found that certain singularity structures could arise. Surprisingly, their shapes were three-dimensional and limited to simple variations of a sphere stretched out along a line.


Three-dimensional singularities, then, can be embedded inside a 3-manifold and the answer to my question seems to depend on whether or not these 3-dimensional singularities are the same things as cutouts in the manifold.


I also found the following, which seems to describe what I have in mind. It's a description of an incompressible sphere embedded in a 3-manifold: "... a 2-sphere in a 3-manifold that does not bound a 3-ball ..."


Does this not define a spherical, inner boundary of the manifold, i.e., a cutout in the manifold?




newtonian mechanics - Does juggling balls reduce the total weight of the juggler and balls?


A friend offered me a brain teaser whose solution involves a $195$ pound man juggling two $3$-pound balls to traverse a bridge with a maximum capacity of only $200$ pounds. He explained that since the man only ever holds one $3$-pound ball at a time, the maximum combined weight at any given moment is only $195 + 3=198$ pounds, and the bridge would hold.


I corrected him by explaining that the acts of throwing and catching the ball temporarily make you 'heavier' (an additional force is exerted by the ball on me, and by me on the bridge, due to the change in the ball's momentum during a throw or catch), but admitted that gentle tosses and catches (less acceleration) might offer a situation in which the force on the bridge never reaches the combined weight of the man and both balls.


Can the bridge withstand the man and his balls?



Answer




Suppose you throw the ball upwards at some speed $v$. Then the time it spends in the air is simply:


$$ t_{\text{air}} = 2 \frac{v}{g} $$


where $g$ is the acceleration due to gravity. When you catch the ball you have it in your hand for a time $t_{\text{hand}}$, and during this time you have to apply enough acceleration to slow the ball from its descent velocity of $v$ downwards and throw it back up with a velocity $v$ upwards:


$$ t_{\text{hand}} = 2 \frac{v}{a - g} $$


Note that I've written the acceleration as $a - g$ because you have to apply at least an acceleration of $g$ to stop the ball accelerating downwards. The acceleration $a$ you have to apply is $g$ plus the extra acceleration to accelerate the ball upwards.


You want the time in the hand to be as long as possible so you can use as little acceleration as possible. However, $t_{\text{hand}}$ can't be greater than $t_{\text{air}}$, otherwise there would be some time during which you were holding both balls. If you want to make sure you are only ever holding one ball at a time, the best you can do is make $t_{\text{hand}} = t_{\text{air}}$. If we substitute the expressions for $t_{\text{hand}}$ and $t_{\text{air}}$ from above and set them equal we get:


$$ 2 \frac{v}{g} = 2 \frac{v}{a - g} $$


which simplifies to:


$$ a = 2g $$


So while you are holding one 3-pound ball you are applying an acceleration of $2g$ to it, and therefore the force you're applying to the ball equals the weight of $2 \times 3 = 6$ pounds.



In other words the force on the bridge when you're juggling the two balls (with the minimum possible force) is exactly the same as if you just walked across the bridge holding the two balls, and you're likely to get wet!
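A quick numerical restatement of the bookkeeping (a sketch; the pound-to-kg conversion is only for illustration): exactly one ball is in hand at any instant, and it requires acceleration $2g$, so the load on the bridge is constant and equal to the full weight of man plus both balls.

```python
g = 9.81
LB = 0.45359237                  # kg per pound, for illustration only
m_man, m_ball = 195 * LB, 3 * LB

# While a ball is in hand it needs acceleration a = 2g (derived above), so the
# hand feels m_ball * 2g: the weight of BOTH balls. Exactly one ball is in hand
# at any instant; the other is in free fall and contributes nothing.
juggling_load = m_man * g + m_ball * 2 * g
walking_load = (m_man + 2 * m_ball) * g   # just carrying both balls across

print(juggling_load, walking_load)        # identical (~894 N each)
```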


newtonian gravity - Do gravitational many-body systems fall apart eventually?


Imagine an $N$-body problem with lots of particles of identical mass (billions of them).


I saw several simulations on the Internet where the particles first form small clumps, then bigger clumps, and finally one huge, globular-cluster-like clump around the center of mass.


Is this clump a stable feature? If we keep running the simulation forever, will this cluster stabilize and remain there forever, or will the particles gradually leak away so that the cluster dissolves?


Clarifications:


The particles attract each other gravitationally (so if the distance between two particles is $r$, the attractive force between them is proportional to $1/r^2$).


Initial conditions are unconstrained, and indeed if the particles are too fast then the system can't be gravitationally bound. So presumably the answer is a function of the kinetic and gravitational potential energy of the system, but what function?



If you want me to mention a particular situation, then consider a globular cluster, for example.




quantum mechanics - Confused by Many-Body Formalism: Creation/Annihilation to Field Operators


I'm going through an introduction to many-body theory and I am getting tripped up by the formalism. I understand quantities such as $\hat {N} = \sum_{i}\hat{n}_{i}=\sum_{i}\hat{a}_{i}^{\dagger}\hat{a}_{i}=\int d^{3}x\,\psi^{\dagger}(x)\psi(x)$, but I am struggling to interpret things like the kinetic energy of the system.


Specifically, how is it that one goes from the creation/annihilation formalism to the field operators? If you give me a general many-body Hamiltonian, I can't really see the transformation (nor do I have a good intuition for it) between the creation/annihilation operators and the field-operator formalism for the Hamiltonian. Why is the two-particle interaction "sandwiched" between the field operators, while the creation/annihilation operators do not follow the same pattern?


I am aware of basic quantum mechanics and commutation rules, as well as the Fourier transform. I need help developing an intuition for writing down a field-operator Hamiltonian. When I read the field-operator Hamiltonian, the story that I get is: there are some field operators that create and annihilate, and integrating over them with an energy density yields a total energy term.


But I get lost in the details. For instance, although it has been simplified by IBP above, the kinetic energy term acts on the annihilation operator before the creation operator acts on it. What is the meaning of the motif $H=\int d^{3}x\psi^{\dagger}(\mathbf{x})\hat{h}\psi(\mathbf{x})$?
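For what it's worth, here is a sketch of the dictionary in the one-body case, which may be the missing step. Expanding the field operator in any single-particle basis $\{\phi_i\}$,

$$\psi(\mathbf x)=\sum_i \phi_i(\mathbf x)\,\hat a_i,\qquad \psi^\dagger(\mathbf x)=\sum_i \phi_i^*(\mathbf x)\,\hat a_i^\dagger,$$

gives

$$\int d^{3}x\,\psi^{\dagger}(\mathbf{x})\,\hat{h}\,\psi(\mathbf{x})=\sum_{ij}\left[\int d^{3}x\,\phi_i^*(\mathbf{x})\,\hat{h}\,\phi_j(\mathbf{x})\right]\hat a_i^\dagger\hat a_j=\sum_{ij}\langle i|\hat h|j\rangle\,\hat a_i^\dagger\hat a_j,$$

which collapses to $\sum_i\varepsilon_i\hat a_i^\dagger\hat a_i$ when the $\phi_i$ are eigenfunctions of $\hat h$. A two-body interaction is "sandwiched" between two $\psi^\dagger$ and two $\psi$ for the same reason: it must annihilate two particles and re-create two.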




Tuesday, 25 August 2020

quantum mechanics - Is the energy always discrete?



In the von Neumann axioms for quantum mechanics, the first postulate states that a quantum state is a vector in a separable Hilbert space. This means it is assumed that the Hilbert space has a basis of at most countably infinite cardinality. In other words, it states that the energy of an arbitrary system is discrete. Is this always true? If not, can you give a specific example of a system in nature whose energy eigenvalues are uncountable?




Why does air remain a mixture?


As we all know, air consists of many gases, including oxygen and carbon dioxide. I found that carbon dioxide is heavier than O2. Does the volume difference compensate for the mass difference? Is it the same for all other gases in air, or is there another force that keeps all of these gases mixed?


If I take a breath of fresh air, will the exhaled air be heavier because of its higher CO2 content? Will it fall to the floor?



Answer



CO2 will, on average, equilibrate slightly lower than O2 in a gravitational field. But the difference in the force of gravity is very small compared to the random thermal motion of the molecules, so the effect is negligible in day-to-day life.



In the context of the atmosphere as a whole this can be a non-negligible effect (e.g. this link), and in astrophysical contexts, this can be very important (e.g. this paper or this one).
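To put a number on "negligible": in an isothermal atmosphere each species would settle with scale height $h = k_B T/(mg)$, and the sketch below (standard constants; $T = 288\,$K is an assumed near-surface temperature) shows the O2/CO2 difference is only a couple of kilometres, which turbulent mixing in the lower atmosphere easily erases.

```python
k_B = 1.380649e-23    # Boltzmann constant (J/K)
u = 1.66053907e-27    # atomic mass unit (kg)
g = 9.81              # m/s^2
T = 288.0             # K, an assumed near-surface temperature

for name, mass_u in [("O2", 32.0), ("CO2", 44.0)]:
    h = k_B * T / (mass_u * u * g)       # isothermal scale height k_B T / (m g)
    print(f"{name}: scale height ~ {h / 1000:.1f} km")

# O2 ~ 7.6 km, CO2 ~ 5.5 km: a difference far too gentle for a breath of
# exhaled air to settle out; ordinary air currents stir it straight back in.
```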


Monday, 24 August 2020

diffusion - Gillespie's stochastic framework valid for particles in aqueous solution?


Gillespie proposed a stochastic framework for simulating chemical reactions which is predicated on non-reactive elastic collisions serving to 'uniformize' particle positions so that the assumption of well-mixedness is always satisfied (see page 409 in the linked version). This is formulated from kinetic theory.


A corollary to this is that a non-reactive collision between two molecules that are able to react does not induce local correlation, i.e., two particles able to react with each other that just collided, but didn't react are no more likely to react with each other in the next dt than any other particle pair in the volume.


Gillespie's algorithm is commonly used in biology where biochemical species are modeled in the aqueous environment of cells. Is this valid, and if so why? It seems the validity may depend on an assumption of Boltzmann-distributed velocities which may or may not be valid in aqueous phase. I recently asked about this (question here), however, the question was deemed a duplicate even though there was some disagreement among the answers.



Answer



As I understand it, your question which was marked as a duplicate not only was not a duplicate of any previous question, but also seems to contain erroneous and/or misleading answers.


Let's start with the velocity distribution. Wikipedia says the following: "The Maxwell–Boltzmann distribution applies to the classical ideal gas, which is an idealization of real gases. In real gases, there are various effects (e.g., van der Waals interactions, relativistic speed limits, and quantum exchange interactions) that make their speed distribution sometimes very different from the Maxwell–Boltzmann form."


Contrast this with (my previous comment about a derivation and) the passage found in Landau & Lifshitz, Statistical Physics (pp. 79-80): "Let us consider the probability distribution for the momenta, and once again emphasize the very important fact that in classical statistics this distribution does not depend on the nature of the interaction of particles within the system or on the nature of the external field, so can be expressed in a form applicable to all bodies. (In quantum statistics this statement is not quite true in general.)" After a derivation of the distribution, the book goes on to explain that it also works in molecular systems and in systems exhibiting Brownian motion. They also show the difference between quantum statistics and the classical description of a harmonic oscillator, and how one gets different velocity distributions for the particle (in the latter case one accordingly recovers the Maxwell-Boltzmann distribution).



Supposing that we can characterize everything by classical mechanics (a rather fair assumption given the application, certainly one made routinely in molecular dynamics simulations of biological matter), the velocity distributions ought to follow Maxwell-Boltzmann. Now, is this to say that Gillespie's formulation is without any fault? No. You are entirely correct to be suspicious of the assumption of no velocity correlation between collisions. However, due to the abundant aqueous solution (and relatively dilute reactants), the non-reactive collisions between the chemical compound and water quickly thermalize and destroy any previous velocity correlations between the two reagents (you might want to check whether hydrodynamic interactions matter; I doubt that they are of any relevance in 3D, but for diffusion on top of the cell membrane they might play a minor role. An in-depth analysis would probably even make a decent publication.). The spatial correlation, however, probably takes longer to vanish, and it is here that Gillespie's formulation is at its weakest (the Gillespie method can be supplanted by a true continuum correction, i.e. Green's Function Reaction Dynamics, or by dividing space into small boxes and having chemical rate constants for diffusing from one box to another). For a recent application where spatial heterogeneity is important see for example PNAS 110, 5927 (2013) and references contained therein.
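For reference, the direct method under discussion is only a few lines. A minimal sketch for a single bimolecular channel with a made-up rate constant (the well-mixedness assumption is exactly what licenses the propensity $a = c\,n_A n_B$):

```python
import math
import random

def gillespie_direct(n_A, n_B, c, t_end):
    """Direct-method SSA for the single channel A + B -> C.
    c is the stochastic rate constant; well-mixedness enters via a = c*n_A*n_B."""
    t, n_C = 0.0, 0
    while t < t_end and n_A > 0 and n_B > 0:
        a = c * n_A * n_B                           # propensity
        t += -math.log(1.0 - random.random()) / a   # exponential waiting time
        n_A, n_B, n_C = n_A - 1, n_B - 1, n_C + 1
        # with several channels one would also draw WHICH reaction fires here
    return n_A, n_B, n_C

print(gillespie_direct(n_A=100, n_B=80, c=1e-3, t_end=50.0))
```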


Textbook recommendation for computational physics



Can anyone recommend some good textbooks for an undergraduate-level computational physics course? I am using Numerical Recipes but find that it is not a very good textbook.




homework and exercises - How long does it take for an electric car to go from 0 to 60 mph?


I found the free-fall motion equation that describes the terminal velocity of a falling body, but I couldn't find a similar equation for a vehicle subject to a constant traction force, so I tried to derive it myself. The resulting equation is not plausible, though, as it gives dozens of seconds for a 1600 kg vehicle to go from 0 to 60 mph, so there must be something wrong. I'm using this equation: $$v(t) = v_f \cdot \tanh\left(\frac F {mv_f} \cdot t \right) = v_f \cdot\tanh\left(\frac {T/w } {m v_f} \cdot t \right)$$



  • $v_f$ = terminal velocity = $\sqrt {\frac F c} = \sqrt {\frac {T} {wc}}$

  • $c = \frac 1 2 \rho C_d A$


  • $\rho$ = air density = 1.225 $\frac {kg} {m^3}$

  • $C_d$ = air drag coefficient = 0.32

  • A = frontal area = 2.19 m$^2$

  • T = given torque = 220 Nm

  • w = wheel radius = 0.25 m

  • m = vehicle mass = 1762.5 kg


Freefall motion equation is:


$$ v(t) = v_f \tanh\left( {t\sqrt{\frac{gc}{m}}}\right)$$


with $v_f=\sqrt{\frac{mg}c}$




With the above data (1) for the car, I know (2) that I should get a time of around 10 s for 0-60 mph, but I get 63 seconds!


What am I doing wrong?
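For what it's worth, numerically integrating the constant-force-plus-drag model exactly as stated above reproduces the implausible result, so the discrepancy lies in the inputs (constant torque at the wheel) rather than in the integration. A minimal sketch with the parameters from the list above:

```python
T, w = 220.0, 0.25               # torque (Nm) and wheel radius (m), from above
m = 1762.5                       # vehicle mass (kg)
c = 0.5 * 1.225 * 0.32 * 2.19    # drag constant c = rho * Cd * A / 2
F = T / w                        # traction force, assuming 220 Nm at the wheel

v_target = 60 * 0.44704          # 60 mph in m/s
v, t, dt = 0.0, 0.0, 1e-3
while v < v_target:
    v += (F - c * v * v) / m * dt    # dv/dt = (F - c v^2) / m
    t += dt

print(round(t, 1))   # ~62 s: the stated inputs really do give about a minute
```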


Other literature data:



  • Fiat Stilo - 255 Nm, 1488 kg, 11.2 s

  • BMW M3 - 400 Nm, 1885 kg, 5.3 s

  • Citroen C3 - 133 Nm, 1126 kg, 14.5 s



Literature data for electric cars:



  • columns: mass (kg), power (kW), torque (Nm), time 0-60 mph (s)

  • Chevrolet Volt 1715 63 130 9.0

  • smart fortwo electric drive 900 55 130 12.9

  • Mitsubishi i-MiEV 1185 47 180 13.5

  • Citroen zEro 1185 49 180 13.5

  • Peugeot iOn 1185 47 180 13.5

  • Toyota Prius Plug-in 1500 60 207 10.7

  • Renault Zoe 1392 65 220 8.0

  • Renault Fluence Z.E. 1543 70 226 9.9

  • Nissan Leaf 1595 80 280 11.9

  • Toyota RAV4 EV (US only) 1560 115 296 8.0


(1) "Evaluation of 20000 km driven with a battery electric vehicle" - I.J.M. Besselink, J.A.J. Hereijgers, P.F. van Oorschot, H. Nijmeijer


(2) http://inhabitat.com/2015-volkswagen-e-golf-electric-car-arrives-in-the-u-s-next-fall/2015-vw-e-golf_0003-2/




homework and exercises - Bessel function representation of spacelike KG propagator


Preliminaries: In their QFT text, Peskin and Schroeder give the KG propagator (eq. 2.50)


$$ D(x-y)\equiv\langle 0|\phi(x)\phi(y)|0\rangle = \int\frac{d^3p}{(2\pi)^3}\frac{1}{2\omega_\vec{p}}e^{-ip\cdot(x-y)}, $$


where $\omega_\vec{p}\equiv\sqrt{|\vec{p}|^2+m^2}$. For timelike separations, we can choose a frame where $x-y$ is purely in the time direction, and the propagator can be put into the form (2.51)



$$ D(x-y)=\frac{1}{4\pi^2}\int^\infty_m d\omega\sqrt{\omega^2-m^2}e^{-i\omega (y^0-x^0)} \tag{1}\label{timelike_prop}, $$


where I use the $\text{diag }\eta=(-,+,+,+)$ convention.


Now, one has the following integral representation of the modified Bessel function (http://dlmf.nist.gov/10.32.8)


\begin{align} K_1(z) &= z\int^\infty_1 dt \sqrt{t^2-1} e^{-zt} \\ &= \frac{z}{m^2} \int^\infty_m dt \sqrt{t^2-m^2} e^{-zt/m}, \tag{2}\label{int_rep} \end{align}


where we go to the second line by rescaling the integration variable $t \to t/m$. Comparing \eqref{timelike_prop} with \eqref{int_rep} suggests


$$ D(x-y)=\frac{m}{(2\pi)^2|y-x|}K_1(m|y-x|), $$


where we have written the time separation in terms of the Lorentz invariant $i (y^0-x^0)=|y-x|$. (Note: there is an issue in what I've written here, in that the integral representation \eqref{int_rep} is only valid for $|\arg z|<\pi/2$ and $|y-x|$ is on the imaginary axis ($|\arg z|=\pi/2$), but I think one could infinitesimally displace $z$ off of the imaginary axis to get a convergent integral. Check me on that.)


Anyway, for spacelike separations, we can choose a frame where $y-x=\vec{y}-\vec{x}\equiv\vec{r}$. Performing the angular integrations yields


$$ D(x-y)=\frac{-i}{2(2\pi)^2 r}\int^\infty_{-\infty}dp\frac{p e^{ipr}}{\sqrt{p^2+m^2}}. $$


Finally, P&S claim that taking the contour integral in the upper half-plane (making sure to wrap around the branch cut starting at $p=+im$) gives



$$ D(x-y)= \frac{1}{(2\pi)^2r}\int^\infty_m d\rho \frac{\rho e^{-\rho r}}{\sqrt{\rho^2-m^2}}, \tag{3}\label{spacelike_prop} $$ where $\rho\equiv-ip$.


Question: I know from plugging into Mathematica that the spacelike propagator \eqref{spacelike_prop} can also be expressed as a modified Bessel function $K_1$. Moreover, the integration bounds of \eqref{spacelike_prop} and \eqref{int_rep} are even the same. However, I don't see how to transform the spacelike propagator integral \eqref{spacelike_prop} into the form of \eqref{int_rep}. Any ideas?


(I'd prefer, if at all possible, to use the integral representation that I've quoted \eqref{int_rep} and used for the timelike case rather than some other representation of the modified Bessel function.)



Answer



This can be seen via integration by parts, using


$$\frac{\partial}{\partial \rho}\sqrt{\rho^2-m^2}=\frac{\rho}{\sqrt{\rho^2-m^2}}$$


OP edit: More explicitly, we use this to write $(3)$ as


\begin{align} D(x-y) &= \frac{1}{(2\pi)^2r}\int^\infty_m d\rho \frac{\partial}{\partial \rho}\sqrt{\rho^2-m^2} e^{-\rho r} \\ &= \frac{1}{(2\pi)^2r}\left[\sqrt{\rho^2-m^2} e^{-\rho r}\right]^\infty_m-\frac{1}{(2\pi)^2r}\int^\infty_m d\rho \sqrt{\rho^2-m^2} \frac{\partial}{\partial \rho}e^{-\rho r}\\ &= \frac{1}{(2\pi)^2}\int^\infty_m d\rho \sqrt{\rho^2-m^2} e^{-\rho r}\\ &= \frac{m}{(2\pi)^2r}K_1(mr) \end{align}
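A quick numerical check of the final step (using SciPy's modified Bessel function; the values of $m$ and $r$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

m, r = 1.3, 0.8   # arbitrary test values

# Integral (3) without its 1/((2 pi)^2 r) prefactor; the 1/sqrt singularity
# at rho = m is integrable and quad copes with it.
lhs = quad(lambda rho: rho * np.exp(-rho * r) / np.sqrt(rho**2 - m**2),
           m, np.inf)[0]

# After the integration by parts the same integral equals m * K_1(m r):
rhs = m * kv(1.0, m * r)

print(lhs, rhs)   # agree to quadrature accuracy
```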


buoyancy - Why does an object when filled with water sink, but without water inside float (in a body of water)?


Why does an object sink when filled with water, even if the same object would float without water inside?


For example, put an empty glass cup into water, and it floats.



But if you put a plastic container filled with water in water, it'll sink. Why is that?



Answer



The cup will sink if and only if the total downward force pushing on all its upward-facing surfaces, plus its own weight, is stronger than the total upward force pushing on its downward-facing surfaces.


If the cup is full, then the weight of the water inside it pushes down on the upward facing surface of the inside of the bottom of the cup (and possibly one or more internal side surfaces, if they are at an angle), in addition to the weight of all the air above it.


If the cup is empty then only the weight of the air inside the cup is pushing down on it, in addition again to the weight of all the air above it.


Since water is heavier than air, there is more force pushing down on a cup with water in it than on a cup without water in it, and so it is more likely to sink. It's really the same reason that a cup of water feels heavier than a cup of air.
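Equivalently, in terms of Archimedes' principle: the container floats if its average density (total mass over enclosed volume) is below that of water. A minimal illustration with made-up numbers:

```python
RHO_WATER = 1000.0   # kg/m^3

def floats(mass_container, enclosed_volume, mass_contents):
    """True if the average density of container + contents is below water's."""
    return (mass_container + mass_contents) / enclosed_volume < RHO_WATER

# A 300 g cup enclosing half a litre (made-up numbers):
print(floats(0.3, 0.0005, mass_contents=0.0))   # empty: True, it floats
print(floats(0.3, 0.0005, mass_contents=0.4))   # ~0.4 kg of water: False, it sinks
```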


Sunday, 23 August 2020

differential geometry - How is Infinitesimal coordinate transformation related to Lie derivatives?


I am reading the book "Gravitation and Cosmology" by S. Weinberg. In section 10.9, while discussing Lie derivatives of tensors of different ranks, he makes a general comment:



The effect of an infinitesimal coordinate transformation on any tensor $T$ is that the new tensor equals the old tensor at the same coordinate point, plus the Lie derivative of the tensor.



Is there a straightforward way to see this?
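One way to see it (a sketch, with sign conventions chosen for convenience) is the scalar case. Under the infinitesimal transformation

$$x'^\mu=x^\mu-\epsilon\,\xi^\mu(x),\qquad \phi'(x')=\phi(x),$$

the new field compared with the old one at the same coordinate value is

$$\phi'(x)=\phi(x+\epsilon\,\xi)\approx\phi(x)+\epsilon\,\xi^\mu\partial_\mu\phi(x)=\phi(x)+\epsilon\,\mathcal L_\xi\phi(x).$$

For higher-rank tensors the Jacobian of the transformation supplies the extra $\partial\xi$ terms that complete the Lie derivative; e.g. for a vector, $V'^\mu(x)=V^\mu(x)+\epsilon\left(\xi^\nu\partial_\nu V^\mu-V^\nu\partial_\nu\xi^\mu\right)=V^\mu(x)+\epsilon\,(\mathcal L_\xi V)^\mu$.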




particle physics - Can mass develop without the Higgs mechanism?


In Vienna, about one year ago, researchers proposed that a previously-discovered meson is the glueball, a massive particle that consists of massless gluons (this is their published paper in Phys. Rev. Lett). Can't the same mechanism be responsible for the mass of quarks, leptons and other massive particles? If so, they have to be composites of massless particles, of course, so maybe this discovery is at the same time a hint that that's indeed the case (as in the Rishon theory of Harari).




quantum mechanics - Single particle operator in second quantization


I want to understand why we write in the formalism of second quantization for a single particle operator


\begin{equation} \hat H=\sum_i \varepsilon_i \hat a_i^{\dagger} \hat a_i \end{equation} where $\varepsilon_i$ is an eigenvalue of the solved Schroedinger equation. Is it just the fact that I know that $\hat a_i^{\dagger} \hat a_i=\hat n_i$ and I associate the Hamiltonian with the energy of the system, $H=E=\sum_i n_i \varepsilon_i$, or how else can one understand this?



Answer



In the second-quantization (many-body) language, the (physical) question is "How many particles are in each state?". Suppose that there are $n_\alpha$ particles in state $\alpha$, each particle in this state having energy $\epsilon_\alpha$; then the total energy in this state is $n_\alpha\epsilon_\alpha$. If we want the total energy of the system, we simply add the energies of all possible states: $$E = \sum_\alpha n_\alpha\epsilon_\alpha$$ We see that $E$ and $n_\alpha$ are physical observables. In QM, each physical observable corresponds to a Hermitian operator. Hence, naturally, $E$ corresponds to the Hamiltonian, which is the energy operator, and $n_\alpha$ corresponds to $\hat{n}_\alpha$, the occupation number operator. It turns out that $n_\alpha$ and $E$ are eigenvalues of $\hat{n}_\alpha$ and $H$, respectively. So we have: $$H = \sum_\alpha \hat{n}_\alpha\epsilon_\alpha$$


The question now is to find the number operator. We discuss here the bosonic case. In a many-body system, since the number of particles can change, we introduce the creation and annihilation operators $a^\dagger$ and $a$. Their definitions are: $$a^\dagger(k)|0\rangle = |k\rangle$$ $$a^\dagger(k_{n+1})|k_1,k_2,...,k_n\rangle = \textrm{(constant)}_1|k_1,k_2,...,k_n,k_{n+1}\rangle$$


$$a|0\rangle = 0$$ $$a(k)|k_1,k_2,...,k_n\rangle = \textrm{(constant)}_2\sum_i\delta(k,k_i)|k_1,k_2,...,k_{i-1},k_{i+1},...,k_n\rangle$$ $\textrm{(constant)}_2$ and $\textrm{(constant)}_1$ can be obtained by permutation: $\textrm{(constant)}_1 = \sqrt{n+1}$, $\textrm{(constant)}_2 = 1/\sqrt{n}$. When acting $a^\dagger(k)a(k)$ on a n-particle state $|k_1,k_2,...,k_n\rangle$, we get: $$a^\dagger(k)a(k)|k_1,k_2,...,k_n\rangle = n|k_1,k_2,...,k_n\rangle$$ which is exactly the same as the action of the occupation number operator on the state $|k_1,k_2,...,k_n\rangle$. Then, $\hat{n} \equiv a^\dagger a$.
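These relations are easy to check explicitly in a truncated occupation-number basis for a single mode, where $a$ is the matrix with $\sqrt{n}$ on the superdiagonal (a minimal sketch):

```python
import numpy as np

N = 6                                        # truncate the Fock space at n = N - 1
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a|n> = sqrt(n)|n-1>
adag = a.T                                   # real matrix, so dagger = transpose

print(np.diag(adag @ a))                 # [0. 1. 2. 3. 4. 5.]: eigenvalues of n-hat

# [a, a^dagger] = 1 holds too, except in the last diagonal entry,
# which is an artifact of truncating the infinite-dimensional space:
print(np.diag(a @ adag - adag @ a))      # [1. 1. 1. 1. 1. -5.]
```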


Is that what you want?



momentum - Intuitively Understanding Work and Energy


It is easy to understand the concepts of momentum and impulse. The formula $mv$ is simple, and easy to reason about. It has an obvious symmetry to it.


The same cannot be said for kinetic energy, work, and potential energy. I understand that a lightweight object moving at very high speed is going to do more damage than a heavy object moving at a slower speed (their momenta being equal) because $E_k=\frac{1}{2}mv^2$, but why is that? Most explanations I have read use circular logic to derive this equation, invoking the formula $W=Fd$. Even Salman Khan's videos on energy and work use circular definitions to explain these two terms. I have three key questions:



  • What is a definition of energy that doesn't use this circular logic?

  • How is kinetic energy different from momentum?

  • Why does energy change according to $Fd$ and not $Ft$?




Answer



You may want to see Why does kinetic energy increase quadratically, not linearly, with speed? as well, it's quite related.


Mainly the answer to your questions is "it just is". Sort of.



What is a definition of energy that doesn't use this circular logic?



Let's look at Newton's second law: $\vec F=\frac{d\vec p}{dt}$. Taking the dot product of both sides with $d\vec s$, we get $\vec F\cdot d\vec s=\frac{d\vec p}{dt}\cdot d\vec s $


$$\therefore \vec F\cdot d\vec s=\frac{d\vec s}{dt}\cdot d\vec p$$ $$\therefore \vec F\cdot d\vec s=m\vec v\cdot d\vec v$$ $$\therefore \int \vec F\cdot d\vec s=\int m\vec v\cdot d\vec v$$ $$\therefore \int\vec F\cdot d\vec s=\frac12 mv^2 +C$$


This is where you define the left hand side as work, and the right hand side (sans the C) as kinetic energy. So the logic seems circular, but the truth of it is that the two are defined simultaneously.
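The simultaneous definition can also be checked numerically: for any force profile, the accumulated $\int F\,ds$ tracks $\frac12 mv^2$ exactly. A minimal 1D sketch with an arbitrary position-dependent force:

```python
import numpy as np

m = 2.0                                  # mass (arbitrary)
F = lambda x: 3.0 * np.sin(x) + 1.0      # arbitrary position-dependent force

x, v, work = 0.0, 0.0, 0.0
dt = 1e-5
for _ in range(200_000):                 # integrate for 2 seconds
    work += F(x) * v * dt                # dW = F dx = F v dt
    v += F(x) / m * dt
    x += v * dt

print(work, 0.5 * m * v**2)              # agree to integration accuracy
```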




How is kinetic energy different from momentum?



It's just a different conserved quantity, that's all. Momentum is conserved as long as there are no external forces; kinetic energy is conserved as long as no work is being done.


Generally it's better to look at these two as mathematical tools, and not attach them too much to our notion of motion to prevent such confusions.



Why does energy change according to $Fd$ and not $Ft$?



See answer to first question. "It just happens to be", is one way of looking at it.


electromagnetism - Can the center of charge and center of mass of an electron differ in quantum mechanics?


Traditionally, for a free electron we presume that the expectation value of its position (the location of the center of mass) and its center of charge are at the same place. Although this seems reasonable as a classical approximation (see: Why isn't there a centre of charge? by Lagerbaer), I wasn't sure if it's appropriate for quantum models, especially for some extreme cases, such as high energies and quark models.



My questions are:




  1. Is there any experimental evidence to support or suspect that the center of mass and charge of an electron must coincide?




  2. Is there any mathematical proof that says the center of mass and the center of charge of an electron must coincide? Or are they permitted to be separated? (The electric-field equations of EM alone don't give enough evidence to separate the $E$ field from the $G$ field. But I don't think it's the same case in quantum mechanics or the Standard Model; i.e., although electrons are leptons, consider $uud$ with charges $2/3, 2/3, -1/3$.)




  3. What's the implication for dynamics if the expectation of centers does not coincide?






Answer




Can the center of charge and center of mass of an electron differ in quantum mechanics?



They can. Particle physics does allow for electrons (and other point particles) to have their centers of mass and charge in different locations, which would give them an intrinsic electric dipole moment. For the electron, this is unsurprisingly known as the electron electric dipole moment (eEDM), and it is an important parameter in various theories.


The basic picture to keep in mind is something like this: a small sphere whose center of charge is displaced from its center of mass along the spin axis (figure and image source in the original post).



Now, because of complicated reasons caused by quantum mechanics, this dipole moment (the vector between the center of mass and the center of charge) needs to be aligned with the spin, though the question of which point the dipole moment uses as a reference isn't all that trivial. (Apologies for how technical that second answer is - I raised a bounty to attract more accessible responses but none came.) Still, complications aside, it is a perfectly standard concept.


That said, the presence of a nonzero electron electric dipole moment does have some important consequences, because this eEDM marks a violation of both parity and time-reversal symmetries. This is because the dipole moment $\mathbf d_e$ must be parallel to the spin $\mathbf S$, but the two behave differently under the two symmetries (i.e. $\mathbf d_e$ is a vector while $\mathbf S$ is a pseudovector; $\mathbf d_e$ is time-even while $\mathbf S$ is time-odd) which means that their projection $\mathbf d_e\cdot\mathbf S$ changes sign under both $P$ and $T$ symmetries, and that is only possible if the theory contains those symmetry violations from the outset.


As luck would have it, the Standard Model of particle physics does contain violations of both of those symmetries, coming from the weak interaction, and this means that the SM does predict a nonzero value for the eEDM, which comes in at about $d_e \sim 10^{-40}\, e\cdot\mathrm m$. For comparison, the proton's size is about $10^{-15}\:\mathrm m$, a full 25 orders of magnitude bigger than that separation, which should be a hint at just how absolutely tiny the SM's prediction for the eEDM is. Because of this small size, the SM prediction has yet to be measured.


On the other hand, there's multiple theories that extend the Standard Model in various directions, particularly to deal with things like baryogenesis where we observe the universe to have much more asymmetry (say, having much more matter than antimatter) than what the Standard Model predicts. And because things have consequences, those theories $-$ the various variants of supersymmetry, and their competitors $-$ generally predict much larger values for the eEDM than what the SM does: more on the order of $d_e \sim 10^{-30} e\cdot\mathrm m$, which do fall within the range that we can measure.


How do you actually measure them? Basically, by forgetting about high-energy particle colliders (which would need much higher collision energies than they can currently achieve to detect those dipole moments), and turning instead to the precision spectroscopy of atoms and molecules, and how they respond to external electric fields. The main physics at play here is that an electric dipole $\mathbf d$ in the presence of an external electric field $\mathbf E$ acquires an energy $$ U = -\mathbf d\cdot \mathbf E, $$ and this produces a (minuscule) shift in the energies of the various quantum states of the electrons in atoms and molecules, which can then be detected using spectroscopy. (For a basic introduction, see this video; for more technical material see e.g. this talk or this one.)
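To get a feel for the numbers (a sketch; the dipole scale and effective field below are illustrative assumptions, the field being of the order quoted for polar molecules like ThO): a beyond-SM-scale eEDM sitting in a huge intramolecular field shifts energy levels by only millihertz.

```python
e = 1.602176634e-19   # elementary charge (C)
h = 6.62607015e-34    # Planck constant (J s)

d_e = 1e-30 * e       # an eEDM at the rough scale BSM models suggest (C m)
E_eff = 8e12          # V/m: order of the effective field quoted for ThO

shift_hz = d_e * E_eff / h    # |U| = |d . E| expressed as a frequency
print(shift_hz)               # ~2e-3 Hz: millihertz-level spectroscopy needed
```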


The bottom line, though, as regards this,



Is there any experimental evidence to support or suspect the center of mass and charge of an electron must coincide?



is that the current experimental results provide bounds for the eEDM, which has been demonstrated to be no larger than $|d_e|<8.7\times 10^{−31}\: e \cdot\mathrm{m}$ (i.e. the current experimental results are consistent with $d_e=0$), but the experimental search continues. We know that there must be some spatial separation between the electron's centers of mass and charge, and there are several huge experimental campaigns currently running to try and measure it, but (as is often the case) the only results so far are constraints on the values that it doesn't have.



Understanding Stagnation point in pitot fluid

What is a stagnation point in fluid mechanics? At the open end of the pitot tube the velocity of the fluid becomes zero. But that should result...