Thursday, 28 February 2019

How do atoms in a solid "communicate" force to each other?


What is the mechanism that carries and communicates force in a solid, on the atomic level?


Is there some other mechanism besides atomic deformation and proximity?


That is, if I had an infinitely incompressible substance and put it on top of my hand and hit it with a hammer, would my hand feel anything, if there is no difference whatever in the movement, shape or location of the substance?


(If you think the substance will move down in response to the hammer, remember that it's incompressible, so the top of the substance can't move down faster than the bottom, and the whole thing can't immediately move down, or we would have sent information to the bottom instantly, i.e. faster than the speed of light. How, then, would the information be communicated throughout the substance that it's time to gain downward momentum?)


If you argue that "infinitely incompressible" is a ridiculous scenario to assume, :) because it violates the compressibility of all matter, then is it that basic compressibility of all matter that provides the mechanism by which larger-scale-than-nuclear force is transmitted from atom to atom? In other words, is all force on the larger-than-atomic scale the result of inter-atomic and intra-atomic compression/deformation/shifts in density?




quantum mechanics - Shimmering from heated air and the speed of light


A few months back, I was using binoculars to check if my friend was on his boat, which was around 2 to 3 km out to sea from the shoreline where I was standing.


The images from, say, the sail of the boat were travelling at the speed of light into my eye; that's OK, no problem there.



On the direct line of sight between the boat and myself was a region of sun heated rising air, which made the image of the boat shimmer, again normal enough.


My question is based on my misunderstanding of light refraction I know, but it is as follows:


Say the column of heated air was rising at, arbitrarily, 2 metres per second, while the speed of light is vastly faster than that, about 300,000,000 metres per second.


In still air, I assume (and this is where I go wrong, I guess), where there is no shimmering effect, that photons travel either between the molecules of air straight to my eye, or sometimes some of them will be absorbed by the air molecules and then re-emitted, after a very short time, onwards to my eye.


So my question is, given the speed of light compared to the speed of the rising air, and given the short time interval between absorption and reemission of the photons, how does the shimmering effect occur?


Is it because of scattering, that is, the photons absorbed by the air molecules are deflected away from the centre of my eye, giving the impression that the boat is displaced from its true position? In other words, the photons come "into" the molecule at one angle and are re-emitted at another?


My reasoning is probably wrong, but I haven't covered optics, mirages and refraction for a long time, and I am trying to understand this on a micro level.


Anybody feel like giving a quick refresher explanation of basic refraction, or just pointing me at a source explaining shimmering effects at the atomic level?



Answer





In still air, I assume (and this is where I go wrong, I guess), where there is no shimmering effect, that photons travel either between the molecules of air straight to my eye, or sometimes some of them will be absorbed by the air molecules and then re-emitted, after a very short time, onwards to my eye.



The confusion comes from mixing two physics frames/models in one sentence.


Photons belong to the quantum mechanical framework. Refraction belongs to the classical electrodynamics of electromagnetic waves. It is true that the classical emerges smoothly from the underlying quantum mechanical framework, as a meta level, and this can be shown mathematically.


In classical electrodynamics, refraction happens because of the changes in the index of refraction, and the velocity of light changes in the medium according to that index.
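To get a feel for the size of this classical effect, here is a minimal Python sketch (my addition) of how much a ray is bent when it passes from cooler into hotter, less dense air; the refractive indices and the incidence angle are assumed, illustrative values, not data from the question.

# A rough Snell's-law estimate (classical picture). The indices for cool
# vs. hot air and the incidence angle are assumed, illustrative values.
import math

n_cool, n_hot = 1.000277, 1.000250   # ~15 C air vs. ~50 C air (assumed)
angle_in = math.radians(80.0)        # near-grazing incidence on the hot column

angle_out = math.asin(math.sin(angle_in) * n_cool / n_hot)
deflection_deg = math.degrees(angle_out - angle_in)
print(f"deflection ~ {deflection_deg:.3f} degrees")   # ~0.01 degrees

A deflection of order a hundredth of a degree, over a 2 to 3 km line of sight, shifts the apparent position of the boat by a few tens of centimetres, and because the hot air is turbulent the deflection fluctuates in time, which is what appears as shimmer.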


At the underlying quantum mechanical level, the popularized model of absorption and re-emission of photons is not what is happening in transparent materials. Let us take a transparent crystal, since its quantum mechanical structure is the clearest case.


A photon impinging on the crystal face sets up a quantum mechanical boundary value problem. The total state function of the crystal + photon has a probability for the photon to go through elastically, not losing energy, at all angles. In transparent materials, that probability is very large in the direction of refraction of the total wave built up by zillions of photons. In a sense, the classical wave behavior is the probability distribution of the solution of the underlying quantum mechanical boundary-condition problem, "measured" by the zillions of photons.


The same is true for air when the photon meets a varying index of refraction (density etc.). The state function describing the region + impinging photon has probabilities for the photon to go through, scattering elastically at various angles, and that probability is maximum in the direction of refraction.


quantum mechanics - Black body radiation curve


See images about black-body radiation.


How does Planck's quantum theory explain the low intensity of radiation for high frequencies?


I.e. why does the black-body curve fall off on the high-frequency side?




homework and exercises - Based on Newton's third law



If an elephant can apply a force of 250 N and a rabbit can apply 25 N, and they are pulling on each other, then the elephant will pull with 250 N, and according to Newton's third law the rabbit should also pull the elephant with 250 N. Where, then, does the rabbit get the extra 225 N (250 - 25 = 225) of force? Can anyone please explain?



Answer



To check your data, imagine that a spring balance was connected to a wall and the elephant and the rabbit were asked to pull on it as hard as they could, so that you could take a reading of their maximum pulling force.
What that reading will show is the maximum horizontal force that the elephant and the rabbit can exert on the ground with their feet.
This is because when the elephant pulls on the spring balance the spring balance pulls on her with an equal force. (N3L)

For the elephant not to move the ground must push the elephant with an equal force so that the net force on the elephant is zero. (Not N3L)
So the maximum pull is the maximum horizontal push that the feet can exert on the ground which is equal and opposite to the maximum push the ground can exert on the feet. (N3L)


So now we come to the tug of war.
The rabbit pulls with a force of 10 N and the elephant also pulls with a force of 10 N. There is not a lot of movement. The rabbit then pulls with his maximum force of 25 N and the elephant counters with a force of 25 N. Stalemate again.


The elephant now pulls with a force greater than 25 N, a force which is greater than that which the ground can exert on the rabbit.
There is thus a net force on the rabbit.
So the rabbit accelerates towards the elephant.
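To put rough numbers on that last step, here is a minimal Python sketch; the rope tension and the rabbit's mass below are assumed, illustrative values, not part of the original question.

# Assumed, illustrative numbers: the ground can give the rabbit at most 25 N
# of horizontal force, so any rope tension above that leaves a net force.
rope_tension = 100.0         # N, what the elephant now pulls with (assumed)
max_ground_on_rabbit = 25.0  # N, the rabbit's maximum "grip" on the ground
rabbit_mass = 2.0            # kg (assumed)

net_force = rope_tension - max_ground_on_rabbit
acceleration = net_force / rabbit_mass
print(f"net force {net_force:.0f} N -> acceleration {acceleration:.1f} m/s^2")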


quantum electrodynamics - Where and how exactly do string theory and QED use zeta function regularization?


In the video they mention it being used in many fields of physics, including string theory and QED.


https://www.youtube.com/watch?v=w-I6XTVZXww


But I remember reading somewhere that $1+2+3+\ldots=-1/12$ is obviously a "mathematical trick" (something about stupidly equating incompatible sums), and if so, how does this turn out to be true for things that are real (like QED)?



Answer



Zeta function regularization is used in other fields, and even in pure mathematics to obtain finite answers from otherwise divergent integrals. In bosonic string theory, the mass of states in lightcone gauge is,


$$M^2 = \frac{4}{\alpha'} \left[ \sum_{n>0} \alpha^{i}_{-n}\alpha^{i}_n + \frac{D-2}{2}\left( \sum_{n>0} n\right) \right]$$


where $\alpha'$ is the universal Regge slope, $D$ is the spacetime dimension, and $\alpha^{i}_n$ may be interpreted as Fourier coefficients of the expanded form of the embedding functions $X^{\mu}(\sigma)$ in the Polyakov action which provide a map from the worldsheet to the target space. We use the fact that


$$\sum_{n>0} n = 1+2+3+...=\zeta(-1)=-\frac{1}{12}$$



to write the expression for the mass of states as,


$$M^2 = \frac{4}{\alpha'} \left(N - \frac{D-2}{24} \right)$$


If we look at the ground state, corresponding to $N=0$, we see


$$M^2 = -\frac{1}{\alpha'}\frac{D-2}{6}$$


which corresponds to a particle with an imaginary mass, known as a tachyon. The demand that we preserve $SO(1,D-1)$ Lorentz symmetry forces us to choose that the first excited state $(N=1)$ be massless, and so we must choose the spacetime dimension to be $D=26$. In other string theories, the critical dimension of the string may be lower, e.g. $10$ or $11$. For further details, I recommend Prof. Tong's lecture notes on string theory available at: http://www.damtp.cam.ac.uk/user/tong/string.html.
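As a quick numerical sanity check (a Python sketch of my own, using the mpmath library as an assumption), one can confirm the value of $\zeta(-1)$ and the critical dimension that follows from demanding a massless first excited state:

from mpmath import zeta

print(zeta(-1))   # -0.0833333..., i.e. -1/12

# Massless first excited state, N = 1:  1 - (D - 2)/24 = 0  =>  D = 26
D = 2 + 24
print(D)          # 26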


quantum mechanics - How do many possible futures not mean many possible pasts?



The following link implies that quantum mechanics only violates the "one future" aspect of information conservation:
https://www.physicsforums.com/threads/is-information-always-conserved-in-quantum-mechanics.458985/
How is it possible that we can know there can only be one past, even though the state of a system isn't determined by its previous states?




homework and exercises - Is the solution to a question regarding four smaller masses on a rotating hollow sphere accurate?


I was doing the question here:




A uniform 8.40-kg spherical shell, 50.0 cm in diameter, has four small 2.00-kg masses attached to its outer surface and equally spaced around it. This combination is spinning about an axis running through the center of the sphere and two of the small masses (see figure). What friction torque is needed to reduce its angular speed from 75.0 rpm to 50.0 rpm in 30.0 s?



I understand how to get $\alpha$, and how to get $\tau$ from $I$ and $\alpha$. What I do not understand is their computation of the moment of inertia. The question did not provide any information regarding the shapes of the masses, but the solution (which writes $I=\frac{2}{3}MR^2+2mR^2$) seems to assume that the masses are point particles. Thus, the two masses along the rotational axis have a moment of inertia of 0 ($r=0$), and each of the other two masses has a moment of inertia of $I=mr^2 =mR^2$, where $R$ is the radius of the sphere and $r$ is the distance from the axis.


But why are the masses considered to be point particles? I wanted to use the parallel-axis theorem to determine the moment of inertia about the rotational axis; not knowing the shape of the masses made this impossible, since I couldn't find the moment of inertia about the CM. If I knew $I_{CM}$, then couldn't I use $I_{axis}=I_{CM}+mR^2$ for the two masses not on the rotational axis and $I_{CM}$ for the two that are? That would make $I_{total}=\frac{2}{3}MR^2+4I_{CM,\:mass}+2mR^2$. Would that not be a better way of solving this problem?




Wednesday, 27 February 2019

homework and exercises - What is the direction of the friction force on a rolling ball?


Suppose you have a solid ball on a horizontal table.




  1. What is the direction of the friction force when the ball is pushed horizontally and starts rolling?

  2. Why is the direction of friction as it is?


  3. Which forces act at the contact point between times t0 and t1? (If we divide the friction force into sub-forces)


    V=1Vx m/s


    Fx=?





Answer




First consider the case where there is no friction. The point of contact between the ball and the table moves in the direction of the global motion.


Now introduce friction: you have kinetic friction slowing down this point, thus making the ball roll due to the induced torque. You will have a motion in between the cases of pure sliding and pure rolling.


In this case the direction of the friction force is obvious (by definition of the friction).


Now if you take things to the limit, you will have pure rolling. In that case the point of contact has zero instantaneous velocity, and if the motion is horizontal, with constant angular and linear velocity, you don't need any friction; if you had friction, it would induce a torque and the angular momentum would change.


If you introduce acceleration or a non-horizontal surface, then you have static friction: the contact point cannot slip, the friction is directed opposite to the "accelerated" direction, and it introduces a torque.


homework and exercises - What would it take to cause lightning to jump between the Moon and the Earth?


This question comes from @Floris' speculation at the end of his excellent answer about what it would take to kill everyone on the Earth with electricity.



Doing all this in 1/10th of a second requires an instantaneous power of $7 \cdot 10^{27} W$, which is a bit larger than the power output of the sun (which is $4\cdot 10^{26}W$ according to Wolfram Alpha).


This being the case, I think we're pretty safe. The only way Dr Evil could get away with this plan is to do it in reverse: first pump the charge off the earth to the moon (slowly), then let it all flow back in a cosmic lightning strike. I am not absolutely sure that the moon would stay in orbit while we charge it up... electrostatic attraction would get pretty strong. But that might be the topic for another post.



Here's that other post!





  • Can you cause lightning to jump between the Earth and the Moon?




  • What scale of energy and charge would it take?




  • Before the lightning cancelled out the charge, how much would the electromagnetic attraction alter the orbit of the Moon?






Answer



Based on the calculation in my earlier answer, we were going to try to charge the earth with $10^{12}C$ and put that charge on the moon. Sending all the charge back in a giant lightning strike would then cause such a rapid change in electric field (not to mention that it dumps all the energy of twelve suns for a tenth of a second...) that it would electrocute every human being on the planet not in a well shielded cage (and those would probably just fry instead).


I then speculated about the forces and fields that would arise from that charge...


The force between two spheres each charged with $Q=10^{12}C$, distance $R = 4\cdot 10^8 m$ apart, is


$$F_e = \frac{Q^2}{4\pi \epsilon_0 R^2}\approx 5\cdot 10^{16}N$$


By comparison, the force of gravity is


$$F_g = \frac{GMm}{R^2} \approx 2\cdot 10^{20} N$$


So it won't make the moon crash - but it might speed up the orbit a little bit. More full moons. Werewolves, rejoice!
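For completeness, here is a short numerical check of the two forces quoted above, a Python sketch of my own; the masses and constants are standard SI figures.

# Back-of-the-envelope check of the electrostatic vs. gravitational force.
Q = 1e12                 # C, charge moved to the moon
R = 4e8                  # m, earth-moon distance
k_e = 8.99e9             # N m^2 / C^2, Coulomb constant
G = 6.67e-11             # N m^2 / kg^2
M_earth, m_moon = 5.97e24, 7.35e22   # kg

F_e = k_e * Q**2 / R**2
F_g = G * M_earth * m_moon / R**2
print(f"electrostatic ~ {F_e:.1e} N, gravity ~ {F_g:.1e} N")  # ~5e16 vs ~2e20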


Now as for the electrical discharge. Earlier, I calculated that the field strength of earth was about 200 MV/m. The dielectric breakdown of air occurs at about 3 MV/m - see this source. More precisely, if we look at the Paschen curve for air, it is given by



$$V_b = \frac{apd}{\ln(pd)+b}$$


Where for air, $a=4\cdot 10^7 V/(atm\cdot m)$, $b=12.8$, and $p= 1 atm$. For $d = 4\cdot 10^8 m$, the breakdown voltage (using the ridiculous assumption that the air pressure is the same all the way) would be $5\cdot 10^{14} V$ - and that was a very generous estimate. More realistic would be that the potential difference reached is such that the field reaches 3 MV/m - 1/70th of the desired potential difference.
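Plugging the quoted Paschen parameters back in reproduces that breakdown-voltage figure (a Python sketch of my own, with the same admittedly ridiculous constant-pressure assumption):

import math

a, b = 4e7, 12.8    # V/(atm*m) and dimensionless, the values quoted above
p, d = 1.0, 4e8     # atm and m
V_b = a * p * d / (math.log(p * d) + b)
print(f"breakdown voltage ~ {V_b:.1e} V")   # ~5e14 V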


What will happen long before we reach the desired potential difference is this - the atmosphere will be ionized on the side of the moon (where the field strength is greatest) and ions will start to be attracted to the moon (assuming that the potential difference was set up with the earth net positive, and the moon net negative). These ions will arrive at the moon with tremendous energy - enough to vaporize bits of moon on impact and create a plasma which in turn will be ripped apart by the electric field and run towards the earth.


Previously we calculated the energy associated with the full potential difference as $10^{26} J$ - but that was when the full charge was reached. At 1/70th of the voltage we will have about 1/5000th of the energy, so $2\cdot 10^{22}J$. If half of that is used to burn a hole in the moon, you can melt a big hole. How big?


Heat capacity of lava roughly 1 kJ/(kg K); latent heat of fusion of rock 400 kJ/kg (source), and boiling point around 2500 K (2230 C for quartz). I could not find the latent heat of vaporization of rock, but based on other silicon based compounds, I will put it at 8000 kJ/kg (somewhere between the value for iron and silicon).


So taking one kg of moon and vaporizing it takes roughly


8,000 (vaporization) + 2,000 * 1 (heating through ~2,000 K) + 400 (melting) ~ 10,000 kJ


Update:


According to this reference the specific energy of granite is 26 kJ/cm3, and the density of granite is about 2.6 g/cm3. That makes my estimate of the energy required to vaporize rock surprisingly accurate.


This means that this lightning strike will vaporize $10^{22-7}=10^{15} kg$ of moon. At a density of about $3.3 \cdot 10^3 kg/m^3$ that is a volume of 300 cubic kilometers of moon - a sphere of about 8 km diameter. And all that matter will be vaporized, ionized, and hurtling about in space. The most spectacular fireworks you will ever see - and not be able to tell the grandkids about.
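And a quick check of the vaporized-mass and crater-size arithmetic (a Python sketch of my own; the 50% split and the ~10,000 kJ/kg figure are the estimates from above):

import math

energy = 0.5 * 2e22     # J, half of the discharge energy
e_per_kg = 1e7          # J/kg, i.e. ~10,000 kJ/kg from the estimate above
density = 3.3e3         # kg/m^3, lunar rock

mass = energy / e_per_kg                    # ~1e15 kg
volume_km3 = mass / density / 1e9           # ~3e2 km^3
diameter_km = 2 * (3 * volume_km3 / (4 * math.pi)) ** (1 / 3)
print(f"{mass:.1e} kg, {volume_km3:.0f} km^3, sphere ~{diameter_km:.0f} km across")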



A similar hole will be made on earth, of course. I think the fact that we'll be getting electrocuted is dropping lower down on the list of causes of death - of the planet.


resource recommendations - Textbook on the Geometry of Special Relativity



I am looking for a textbook that treats the subject of Special Relativity from a geometric point of view, i.e. a textbook that introduces the theory right from the start in terms of 4-vectors and Minkowski tensors, instead of the more traditional "beginners" approach. Would anyone have a recommendation for such a textbook ?


I already have decent knowledge of the physics and maths of both SR and GR ( including vector and tensor calculus ), but would like to take a step back and expand and broaden my intuition of the geometry underlying SR, as described by 4-vectors and tensors. What I do not need is another "and here is the formula for time dilation..." type of text, of which there are thousands out there, but something much more geometric and in-depth.


Thanks in advance.




newtonian gravity - Is $W=\oint{\vec{F}\cdot d\vec{r}}=0$ a sufficient condition for a conservative force?


I learned from my Physics textbooks that there is zero net work (W) done by the force when moving a particle through a trajectory that starts and ends in the same place i.e.


$$W=\oint{\vec{F}\cdot d\vec{r}}=0$$


Now, I need to verify whether the force $$\vec{F}=\dfrac{x \hat{i} + y \hat{j}}{(x^2+y^2)^{\frac{3}{2}}}$$ is conservative or not.




I substituted $x=r\cos(\theta)$ and $y=r\sin(\theta)$, in order to prove that if I move a body by applying the given force through a complete circle then the work done will be zero.


Now, suppose I start from $\theta=0$ and move anticlockwise then my unit vector for displacement at any angle $\theta$ should be $-\sin(\theta) \hat{i} + \cos(\theta) \hat {j}$.


So total work done in traversing the circular path is


$$W'=\int_{0}^{2\pi}{\dfrac{(r\cos(\theta)\hat{i}+r\sin(\theta)\hat{j})\cdot(-\sin(\theta)\hat{i}+\cos(\theta)\hat{j})\,(r\,d\theta)}{[r^2\cos^2(\theta)+r^2\sin^2(\theta)]^{\frac{3}{2}}}}=0$$


Now, is showing $W'=0$ sufficient to prove that $\vec{F}$ is conservative? Also, is there any easier way?




Answer



Proving that


$\oint \vec{F}\cdot d\vec{r}=0$


is sufficient to establish that the force is conservative if it is true for all possible paths. You only proved it for a single path, that is the one on a circle of radius $r$ centered at the origin.


There can be a more efficient way to prove the same result, depending on the context, using the same equation in the differential form instead of the integral form. The idea is to use Stokes theorem to write


$\oint \vec{F}\cdot d\vec{r}=\int\limits_\Omega (\nabla\times\vec{F})\cdot d\vec{A},$


where $\Omega$ is the surface enclosed by the closed path of the left hand side. Now, observe that the right hand side equation will always be zero if


$\nabla\times\vec{F}=0$


everywhere. It is often much simpler to prove this instead.
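For the specific force in the question, a short symbolic check (a sketch of my own using sympy, which is an assumption) confirms that the curl vanishes everywhere away from the origin, where the field is singular:

import sympy as sp

x, y = sp.symbols('x y', real=True)
r3 = (x**2 + y**2)**sp.Rational(3, 2)
Fx, Fy = x / r3, y / r3

# z-component of curl F in the plane; zero away from the singular origin
curl_z = sp.simplify(sp.diff(Fy, x) - sp.diff(Fx, y))
print(curl_z)   # -> 0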


For the sake of completeness, there is also a third option. A conservative force can be written as the gradient of a potential $\phi$, that is



$\vec{F}=-\nabla\phi$.


This follows from the preceding condition as the curl of a gradient is always zero (as long as the function $\phi$ is well behaved, which you can suppose through much of an undergrad physics curriculum.)


Tuesday, 26 February 2019

electrostatics - Gauss's law for cylinder with infinite height with a spherical cavity



Imagine there is a cylinder with a charge density of +Q per unit volume and of infinite length. Now place a spherical cavity inside it with a diameter equal to the cross-section diameter of the cylinder. Is there an electric field inside the sphere? If so, is it possible to calculate the E-field with Gauss's Law?



a spherical cavity inside cylinder



Answer



Yes, you can use Gauss's law, but I will leave you to work out the details. You use the principle of superposition.


Use Gauss's law (cylindrical symmetry) to work out the E-field inside the uniform cylinder, without the spherical hole in it.


Use Gauss's law (spherical symmetry) to work out what the E- field would be due to a sphere with a negative charge density $-Q$, in the position you have shown the spherical cavity.


Your situation is equivalent to the sum of these two fields.


homework and exercises - How to derive the period of spring pendulum?



So I wanted to find out how to (simply, if that's possible) derive the formula for the period of a spring pendulum: $T=2\pi \sqrt{\frac{m}{k}}$. However, Google doesn't help me here, as all I see is the ready-to-bake formula. Could you please point me in some directions?



Answer



You need to know the equation of motion. The force for the pendulum is given by $F= - k x$. Newton's equation tells you $F=ma = m \ddot x$. So you need to solve $$\tag{1} m \ddot x = - k x.$$


You know that the solution will be of oscillatory form. So you set $x= A \cos(2\pi t/T)$ and you want to obtain $T$. Plugging this ansatz into the equation (1), you obtain $$ - m\frac{(2\pi)^2}{T^2} A \cos(2\pi t/T) = - k A \cos(2\pi t/T). $$ You see that the equation is fulfilled if $$ m\frac{(2\pi)^2}{T^2} = k.$$ Solving for $T$, you obtain the result.
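If you prefer to let a computer do the algebra, here is a minimal sketch of the same steps (using sympy, which is my assumption; any computer algebra system would do):

import sympy as sp

t, m, k, A, T = sp.symbols('t m k A T', positive=True)
x = A * sp.cos(2 * sp.pi * t / T)

# m x'' + k x = A cos(2 pi t / T) * (k - m (2 pi / T)^2), so the ansatz
# solves the equation of motion when the bracket vanishes.
bracket = sp.simplify((m * sp.diff(x, t, 2) + k * x) / x)
print(sp.solve(sp.Eq(bracket, 0), T))   # the positive root, 2*pi*sqrt(m/k)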


optics - Can I calculate the size of a real object by just looking at the picture taken by a Camera?


Can I calculate the size of a real object just by looking at a picture taken by a camera? (I think people do that.) I don't understand how, from a physics point of view.



Answer



If you know the specifics of the camera (lens system, aperture settings, etc.), then you can make a direct relationship between the size of the image and the angular size.



But without a distance measurement (something the camera does not do), you can't turn that into an absolute size.


If there is other information in the photograph that gives the distance, then the size can be calculated.


If all you have is the image (and not the information about the specifics of the camera), then even the angular size cannot be calculated.


Monday, 25 February 2019

astronomy - The reason(s) for the seasons on Earth


This may be a simple and common question, but there is a lot of confusion about it on the internet and even in some books, so I would like an astronomer/astrophysicist to fill in the gaps for me accurately.



Fixed axial tilt is the reason; this is the most common explanation, but with some research you will find that it is not the only factor!


Actually, Earth's distance from the Sun plays a role: considering that the received energy from the Sun is inversely proportional to the square of the distance, one can calculate that this contributes a change of 5-7% in the received energy. Moreover, its effect is opposite, in the sense that because we have summer when we are at the furthest point from the Sun, it actually reduces the summer temperature caused by the axial tilt.


I also read a claim mentioning a third reason, which most texts ignore: the angle of Earth's orbit to the Sun's equator. However, I found no quantitative analysis of it.


Your comments please.



Answer



Orbital eccentricity is unlikely to be a significant component of seasonal temperature variations, because it has the same effect in both the northern and southern hemispheres at the same time: the position on the orbit is the same for the whole Earth. If eccentricity were the primary cause of seasons, the seasons would be the same in both hemispheres. Axial tilt being the primary cause of seasons accounts for the opposite seasons in the northern and southern hemispheres.


The observable effect of eccentricity should be a difference between corresponding seasons in the two hemispheres, e.g. between northern-summer and southern-summer temperatures at corresponding latitudes. How the effect manifests also depends on the angle, in the orbital plane, between the direction of the axial tilt and the direction of maximum orbital distance.


probability - Problem with physical application of Dirac Delta


Consider the problem of projectile motion in 2 dimensions. Launch angle is constant. Range of projectile, $x$, then depends only on launch speed, $v$, and is given by \begin{equation} x=v^2, \quad v\in [0,1] \tag{1} \end{equation} Above equation has been non-dimensionalised (by taking maximum range as our length scale, and maximum launch speed as our velocity scale), so all quantities are dimensionless. Probability density function for launch speed is assumed uniform over the interval $[0,1]$: \begin{equation} f(v)=1, \quad \textrm{if}~v\in [0,1]\tag{2} \end{equation} and zero otherwise. I want to find p.d.f for range of projectile, $x$. An easy way of doing this \begin{equation} f(x)=\left| \frac{dv}{dx}\right|f(v)=\frac{1}{2\sqrt{x}}, \quad x\in [0,1]\tag{3} \end{equation}


However I wanted to solve the same problem using Dirac delta function: \begin{align} f(x) & =\int_0^1 dv~f(x|v)~f(v) \\ & = \int_0^1 dv~f(x|v) \\ & = \int_0^1 dv~\delta(v^2-x)\tag{4} \end{align} Here $f(~|~)$ denotes conditional p.d.f.. Last line was arrived at because for given value of $v$, it is certain that we shall obtain that value of $x$ that satisfies the equation $v^2-x=0$. Now I make use of the identity for delta function \begin{align} \delta(g(x))=\sum_i \frac{\delta(x-x_i)}{|g'(x_i)|}\tag{5} \end{align} Here $x_i$ are roots of function $g(x)$, and $g'\equiv \dfrac{dg}{dx}$. Now $g(v)=v^2-x$, whose roots are $\pm \sqrt{x}$. We reject the negative root because $v\geq 0$. $g'=2v$. Hence \begin{align} f(x) & =\int_0^1 dv~\delta(v^2-x) \\ & = \int_0^1 dv~\frac{1}{2\sqrt{x}}\delta(v-\sqrt{x}) \\ & = \frac{1}{2\sqrt{x}}\tag{6} \end{align} which is correct.
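As an independent check of that density, one can also sample the problem directly. This is a small Monte Carlo sketch of my own (numpy is an assumption here), not part of the original question:

import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(0.0, 1.0, 1_000_000)   # launch speeds, uniform on [0, 1]
x = v**2                               # corresponding ranges

hist, edges = np.histogram(x, bins=50, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
# agrees with 1/(2 sqrt(x)) bin by bin, apart from the first bin,
# where the density diverges
print(np.max(np.abs(hist - 1.0 / (2.0 * np.sqrt(centers)))[1:]))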


However instead of $f(x|v)=\delta(v^2-x)$, we could equally well have begun with the equation $f(x|v)=\delta(v-\sqrt{x})$, because at least according to me, physical content of both equations is identical. However the last choice yields a completely different p.d.f.: \begin{align} f(x) & =\int_0^1 dv~\delta(v-\sqrt{x})=1\tag{7} \end{align} I don't think I have done anything wrong mathematically (if I have, please point out). To a mathematician of course the two functions are different, and so the fact that they yielded different p.d.f.s is not surprising. But when the equations are put in their proper physical context, both have identical physical content (as far as I can see). This example makes me wonder if Dirac Delta function may be used unambiguously in solving physical problems. While this was a simple problem where a second method of solution was available and so we could compare, what does one do in more complicated situations where such a comparison is not possible?




cosmology - Redshift of the matter-dominated universe



What was the value of redshift $z$ when matter started to dominate the universe?


Is there any way to calculate it without knowing the time?




electromagnetism - Can quantum communication really replace electromagnetic waves for telecommunication medium in future?


Currently I am planning to get a master's degree, so I am thinking about which subject to pursue it in. Following are my questions to leading physicists.



  1. Which technology is the future of telecommunication? Currently electromagnetic waves rule the world of telecommunication, but nowadays there is a vast amount of research going on in quantum mechanics and particle physics. To my knowledge, quantum entanglement and delocalization are the basis for teleportation and the future of communication. I would like the experts to suggest a topic for a master's degree. Is quantum communication possible in the real world one day? Can it replace our way of telecommunication through electromagnetic waves?


If I do a master's in electromagnetism, will I be left behind the times?





Recombination rate/time calculation for plasma vs. solid state semiconductors


I've been able to find how to calculate the recombination rate for semiconductors as a function of the type of semiconductor (like silicon), the doping material, the concentration of the excess carrier and resistivity, and the model for recombination. However, I don't know about the recombination rate calculation for a gaseous plasma. Can a similar relationship be used to calculate the recombination rate of plasma, where the electrons would be more excited? I suppose that would have to assume that the plasma would be a semiconductor, while generally atmospheric gases are considered insulators, but otherwise I was wondering what calculation you apply for plasma of say, neon or nitrogen (not sure if it differs for thermally or electrically dominated plasma).




condensed matter - Vortex anti-vortex?


I'm studying the Kosterlitz-Thouless transition and I have a doubt: what is a vortex-antivortex configuration?


Is it this thing?


(first image)



or this one


(second image)


I think that they are quite different!



Answer



They are different (but see below $^*$). For a vortex of strength $+1$, if you walk around the defect clockwise (at a safe distance from the core, so that the local spin direction is always well defined) then the spins complete one full turn, also in a clockwise direction. You can see that this is true for both the vortices in your top diagram (which needs a bit of artistic licence, because it looks like it comes from a fluid flow system rather than a spin system $^*$). If, when you look at the right hand vortex, you prefer to take the walk around it in the anticlockwise direction, that's fine, and you will notice that the spins complete a full turn in the same (anticlockwise) direction. So, it is still a strength $+1$ vortex.


In the lower picture, the right one is also a vortex, but the left one is an antivortex, of strength $-1$. Taking a clockwise walk around it, the spins rotate by one full turn in the anticlockwise sense.


It is worth remembering that the hamiltonian for the XY model is invariant to a global rotation of the spins, by the same angle in the same direction. This can change the appearance of snapshots of spin configurations quite dramatically! But it has no effect on the energy, or indeed on the topology of the defects. There are some very nice animations on this blog describing the Kosterlitz-Thouless transition. Look particularly at the section which poses "Puzzle 1" and "Puzzle 2". You'll see an animation showing a left-hand vortex changing smoothly into a right-hand vortex. There's also some interesting stuff in the comments section at the bottom of that page.




$^*$ EDIT. I belatedly realized that the top picture of the OP is most likely the gradient of the angle field, rather than an imperfect representation of the spins themselves. So, the two pictures are most likely different representations of the same configuration of a vortex and an antivortex. In the continuum representation, the angle field around a strength $+1$ vortex at the origin is $\theta^+(x,y)=\tan^{-1}(y/x)$, and the gradient is $\nabla\theta^+=(-y,x)/r^2$, where $r^2=x^2+y^2$. The spins may be represented as unit vectors $(\cos\theta^+,\sin\theta^+) = (x/r,y/r)$. Around a strength $-1$ antivortex at the origin, $\theta^-(x,y)=-\tan^{-1}(y/x)$, $\nabla\theta^-=(y,-x)/r^2$, and $(\cos\theta^-,\sin\theta^-) = (x/r,-y/r)$. Here is the combined gradient field for an antivortex (blue) and vortex (red).


gradient field of antivortex+vortex



This is similar to the top picture in the OP. Now here is the spin vector diagram for the same configuration, with an arbitrary angle $\theta_0=\pi/2$ added throughout.


spin vector field of antivortex+vortex


This is similar to the bottom picture in the OP. The argument about following the rotation of the spins on taking a walk around the defect(s) applies to the spin picture, but clearly the gradient picture is consistent with it.


Calculations and plots created in Maple.


with(plots);                      # load the plotting package
# angle field of a vortex at (0.5, 0) and an antivortex at (-0.5, 0),
# with a global rotation theta_0 = Pi/2 added throughout
theta := arctan(y, x-.5)-arctan(y, x+.5)+(1/2)*Pi;
# spin (unit-vector) field (cos theta, sin theta)
f := fieldplot([cos(theta), sin(theta)], x = -1 .. 1, y = -1 .. 1, arrows = medium, axes = none);
# mark the defect cores: antivortex (blue) and vortex (red)
p := pointplot([-.5, 0, .5, 0], color = [blue, red], symbol = solidcircle, symbolsize = 30);
display(p, f);
# gradient field of the same angle, as shown in the first figure above
g := gradplot(theta, x = -1 .. 1, y = -1 .. 1, fieldstrength = maximal(1.5), arrows = medium, axes = none);

display(p, g);

classical mechanics - Does the variation of the Lagrangian satisfy the product rule and chain rule of the derivative?


I have seen Wikipedia use the product rule and maybe the chain rule for the variation of the Lagrangian as follows:


\begin{align} \dfrac{\delta [f(g(x,\dot{x}))h(x,\dot{x})] } {\delta x} = \left( \dfrac{\delta [f(g)] } {\delta g} \dfrac{\delta [g(x,\dot{x})] } {\delta x} \right) h(x,\dot{x}) + f(g(x,\dot{x})) \dfrac{\delta [h(x,\dot{x})] } {\delta x} \end{align} where the variation of the Lagrangian is defined \begin{align} \dfrac{\delta \mathcal{L} } {\delta x} = \dfrac{\partial \mathcal{L} } {\partial x} - \dfrac{d}{d \tau} \dfrac{\partial \mathcal{L} } {\partial \dot{x}} \end{align} and $\mathcal{L}=f(g(x,\dot{x}))h(x,\dot{x})$.


Does the variation of the Lagrangian satisfy the product rule and chain rule of the derivative?




Answer





  1. OP considers the 'same-time' functional derivative (FD) $$\tag{1} \frac{\delta f(t)}{\delta x(t)}~:=~\frac{\partial f(t)}{\partial x(t)} - \frac{d}{dt} \frac{\partial f(t)}{\partial \dot{x}(t)} +\ldots. $$ Here $f(t)$ is shorthand for the function $f(x(t), \dot{x}(t), \ldots;t)$. Although the 'same-time' FD (1) can be notationally useful, it has various fallacies, cf. my Phys.SE answer here.




  2. The Leibniz rule $$\tag{2} \frac{\delta (f(t)g(t))}{\delta x(t)} ~=~\frac{\delta f(t)}{\delta x(t)} g(t) +f(t)\frac{\delta g(t)}{\delta x(t)}\qquad(\leftarrow \text{Wrong!}) $$ for the 'same-time' FD (1) does not hold. Counterexample: Take $f(t)=g(t)=\dot{x}(t)$.




  3. The chain rule $$\tag{3} \frac{\delta f(t)}{\delta x(t)} ~=~\frac{\delta f(t)}{\delta y(t)}\frac{\delta y(t)}{\delta x(t)}\qquad\qquad(\leftarrow \text{Wrong!}) $$ for the 'same-time' FD (1) does not hold. Counterexample: Take $f(t)=y(t)^2$ and $y(t)=\dot{x}(t)$.





  4. However, the usual FD $\frac{\delta F}{\delta x(t)}$ (where $F[x]$ is a functional) does satisfy a Leibniz rule $$\tag{4} \frac{\delta (FG)}{\delta x(t)} ~=~\frac{\delta F}{\delta x(t)} G +F\frac{\delta G}{\delta x(t)}, $$ and a chain rule $$\tag{5} \frac{\delta F}{\delta x(t)}~=~ \int dt^{\prime} ~\frac{\delta F}{\delta y(t^{\prime})}\frac{\delta y(t^{\prime})}{\delta x(t)}.$$




resource recommendations - Need for a side book for E. Soper's Classical Theory Of Fields




I am now reading E. Soper, Classical Theory Of Fields, and sometimes it is very hard to follow the equations. So I need a side book on classical field theory in order to read it comfortably. Landau & Lifshitz's book is not helping, as its content and topics are very different.




homework and exercises - Why does the current density of a point charge satisfy $\vec{J}\,\rm{dV}=q\vec{v}$?


I read in a book that if a point charge $q$ at the position $\vec{x}$ is moving with the velocity $\vec{v}=\rm{d}\vec{x}/\rm{d}t$ and if the current density generated by the charge is $\vec{J}$, then the following relation holds \begin{align} \vec{J}\rm{dV}=q\vec{v} \end{align} why is it like this?


PS: it is from an exercise: if the electric dipole of a system consisting of electric charges is $\vec{p}$, prove $\rm{d}\vec{p} / \rm{d}t=\int _V \vec{J}(\vec{x}, t)\rm{dV}$. Solution: suppose the $i$-th charge is denoted by $q_i$ with the position $\vec{x}_i$, then $\vec{p}=\sum q_i \vec{x}_i$. the current element generated by every charge is $\vec{J}(\vec{x}, t)\rm{d}V=q_i \rm{d}\vec{x}_i/\rm{d}t$, so $\rm{d}\vec{p} / \rm{d}t=\sum q_i \rm{d}\vec{x}_i/\rm{d}t=\int _V \vec{J}(\vec{x}, t)\rm{dV}$




Sunday, 24 February 2019

statistical mechanics - Any open areas to work in non equilibrium thermodynamics for a Phd student?




I see that many papers written nowadays on the fundamentals of thermodynamics (theory) are by some old professors somewhere (there may be exceptions). Most active young faculty don't seem to be seriously interested in reinterpreting thermodynamics, e.g. non-equilibrium thermodynamics, i.e. continuing the work of Ilya Prigogine etc. So is it worthwhile for a graduate student starting his research career to work in this area? Are there any open problems a fresh graduate student could aim to solve theoretically?




Saturday, 23 February 2019

homework and exercises - What's the difference between shock waves and acoustic waves?


What's the difference between shock waves and acoustic waves? I tried searching around this subject, but I could not find any relevant article about it. Please help me find a proper answer.





quantum mechanics - Which side of wave-particle duality to choose in a given situation


How does one know whether, in treating a certain problem, one should consider particles as waves or as point-like objects? Are there certain guidelines regarding this?



Answer



It will depend on the results of your experiment, the results will tell you whether you are seeing the wave nature or the particle nature.


Take the scattering of an electron on a proton producing an electron, a proton and a pi0 meson. Your experiment measures "particle" interactions in the form of classical particles: you can see the trajectories of the individual particles with your instruments.


If you measure a lot of scatters and plot the cross-section versus energy, then the interpretation uses the quantum mechanical wavefunctions, which by construction carry the wave nature of the "particles".



This two-slit experiment, with electrons sent one at a time (image: the double-slit pattern building up over time), shows clearly both natures. Each individual electron is a dot, i.e. a particle interacting with the screen. The accumulation, though, shows the probability distribution due to the wave function of the electron with the boundary condition of two slits: the wave side of the duality.


The classical type particle nature appears at a specific (x,y,z). The probability of appearing at the specific (x,y,z) has a wave nature.


optics - Optical explanation of images of stars?


Very often when viewing pictures of the cosmos taken by telescopes, one can observe that larger/brighter stars do not appear precisely as points/circles on the image. Indeed, the brighter the light from star, the more we see this effect of four perpendicular rays "shooting" out from the star.


(image: star field with four-pointed diffraction spikes around the bright stars)


(Taken from this page.)


My question is: what are the optics responsible for this effect? I would suppose that both the hazy glow around the stars and the rays shooting outwards are optical effects created by the camera/imaging device, but really can't surmise any more than that. (The fact that the rays are all aligned supports this, for one.) A proper justification, in terms of (geometrical?) optics for both the glow and the rays would be much appreciated.


Here are a few other examples of such images:





Answer



This is, as Lubos mentioned, an effect of the wave nature of light, and cannot be explained using geometrical optics.


What you are seeing is called the Point Spread Function (PSF) of the imaging system. Because stars are so far away that they are effectively point sources of light (i.e. they are spatially coherent) their image will be the PSF of the imaging system. Up to a scale factor, the PSF is the Fourier transform of the pupil of the imaging system. For a lens system, the pupil is usually just a circle, so the PSF is the 2D Fourier transform of a circle:


$$ \frac{J_1(2 \pi \rho)} {2 \pi \rho} $$


Where $J_1$ is the order 1 bessel function of the first kind.


However, most modern telescopes are built with reflective optics, and there are various obscurations in the pupil due to the structures that support the secondary mirror. This more complicated pupil shape can produce a variety of artifacts in the PSF. The starburst pattern in your example images could be due to a simple "plus" shaped structure supporting the secondary mirror, but the effect is so strong that I suspect it was emphasized for creative effect. I'm not sure how the Hubble PSF looks, off the top of my head.
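Here is a minimal numerical sketch of that statement (my addition, with numpy assumed): take a circular pupil, block it with a thin plus-shaped spider, and the squared modulus of its Fourier transform, i.e. the PSF, develops exactly the four-armed starburst discussed above.

import numpy as np

n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

pupil = (x**2 + y**2 < 100**2).astype(float)   # open circular aperture
pupil[np.abs(x) < 2] = 0.0                     # vertical spider vane
pupil[np.abs(y) < 2] = 0.0                     # horizontal spider vane

psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.max()
# view the faint spikes on a log stretch, e.g. plt.imshow(np.log10(psf + 1e-8))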


In general, an image can be represented by the convolution of the ideal image $g(x,y)$ with the PSF, usually denoted $h(x,y)$. In the case of a point source (so $g(x,y)$ is a delta function, $\delta(x,y)$) it is trivial that the image is a copy of the PSF:


$$h(x,y)=\int_{-\infty}^{\infty} \delta(\xi,\eta) h(x-\xi, y-\eta) d\xi d\eta $$


But in the case of a more complicated object, the convolution by the PSF acts to smooth or blur the image. This is why an out of focus camera produces blurry images. Although aberrations also degrade the image under a geometrical approximation, this is more accurate. The geometrical case and the wave optics (diffraction) result will become closer as the aberrations become large.



Sometimes this effect is produced intentionally. You can actually buy filters for commercial cameras that have a fine grid of wires to produce this starburst effect for creative purposes.


NB: This answer ignores any discussion of phase effects in diffraction (because I'm short on time, I may update later). If you would like to learn about diffraction and the wave optics approach to imaging, the leading text on the planet is "Introduction to Fourier Optics" by J. Goodman. It is an absolutely spectacular book.


electrostatics - How do I integrate the Poisson equation to determine the electric potential along a particular direction (e.g., $z$)?


This question is a sequel of sorts to my earlier (resolved) question about a recent paper. In the paper, the authors performed molecular dynamics (MD) simulations of parallel-plate supercapacitors, in which liquid resides between the parallel-plate electrodes. The system has a "slab" geometry, so the authors are only interested in variations of the liquid structure along the $z$ direction.


In my previous question, I asked about how particle number density is computed. In this question, I would like to ask about how the electric potential is computed, given the charge density distribution.


Recall that in CGS (Gaussian) units, the Poisson equation is


$$\nabla^2 \Phi = -4\pi \rho$$


where $\Phi$ is the electric potential and $\rho$ is the charge density. So the charge density $\rho$ is proportional to the Laplacian of the potential.


Now suppose I want to find the potential $\Phi(z)$ along $z$, by integrating the Poisson equation. How can I do this?


In the paper, on page 254, the authors write down the average charge density $\bar{\rho}_{\alpha}(z)$ at $z$:


$$\bar{\rho}_{\alpha}(z) = A_0^{-1} \int_{-x_0}^{x_0} \int_{-y_0}^{y_0} dx^{\prime} \; dy^{\prime} \; \rho_{\alpha}(x^{\prime}, y^{\prime}, z)$$


where $\rho_{\alpha}(x, y, z)$ is the local charge density arising from the atomic charge distribution of ionic species $\alpha$, $\bar{\rho}_{\alpha}(z)$ is the average charge density at $z$ obtained by averaging $\rho_{\alpha}(x, y, z)$ over $x$ and $y$, and $\sum_{\alpha}$ denotes sum over ionic species.



The authors then integrate the Poisson equation to obtain $\Phi(z)$:


$$\Phi(z) = -4\pi \sum_{\alpha} \int_{-z_0}^z (z - z^{\prime}) \bar{\rho}_{\alpha}(z^{\prime}) \; dz^{\prime} \; \; \; \; \textbf{(eq. 2)}$$


My question is, how do I "integrate the Poisson equation" to obtain equation (2)? How do I go from $\nabla^2 \Phi = -4\pi \rho$ to equation (2)? In particular, where does the $(z - z^{\prime})$ factor come from?


Thanks for your time.



Answer



I don't know your level of knowledge, so let me start with the very basic fact that the electric field of a uniformly charged plate is $$ E=2\pi\sigma,\qquad\left( 1\right) $$ where $\sigma$ is the surface charge density. To derive this result you can utilize the Gauss formula: $$ \Phi=4\pi Q,\qquad\left( 2\right) $$ where $\Phi$ is the total flux of the electric field through a closed surface and $Q$ is the total charge in a space bounded by the surface. In the figure below I depicted charged plate as a blue plane and the closed surface as the box with green sides.


application of the Gauss formula


The flux is only nonzero through these green rectangles, $\Phi=2ES$, where $S$ is the area of the rectangles. The total charge inside the box is $Q=S\sigma$, hence $$ 2ES=4\pi S\sigma\quad\Rightarrow\quad E=2\pi\sigma. $$


Let's now approximate your system as a set of plates with surface charge density $\sigma=\rho\left( z\right) \,dz$, where $\rho\left( z\right) $ is the $xy$-averaged charge density. Therefore, the total electric field at a point $z$ is the difference of the contributions of the planes before $z$ and after $z$ (see figure below): $$ E\left( z\right) =E_{1}\left( z\right) -E_{2}\left( z\right) ,\qquad(3) $$ where $$ E_{1}\left( z\right) =2\pi\int_{-z_{0}}^{z}\rho\left( z^{\prime\prime }\right) \,dz^{\prime\prime},\qquad E_{2}\left( z\right) =2\pi\int _{z}^{z_{0}}\rho\left( z^{\prime\prime}\right) \,dz^{\prime\prime}. $$


the planes before and after $z$ contribute to the field with opposite signs



Thus, the potential $\phi\left( z\right) $ has the form: $$ \phi\left( z\right) =-\int_{-z_{0}}^{z}dz^{\prime}E\left( z^{\prime }\right) ,\qquad(4) $$ with the boundary value $\phi\left( -z_{0}\right) =0$. The expression (4) is the potential required. Let's now simplify it. First of all, I simplify the expression for the field: $$ E\left( z\right) =2\pi\int_{-z_{0}}^{z}\rho\left( z^{\prime\prime}\right) \,dz^{\prime\prime}-2\pi\int_{z}^{z_{0}}\rho\left( z^{\prime\prime}\right) \,dz^{\prime\prime}=4\pi\int_{-z_{0}}^{z}\rho\left( z^{\prime\prime}\right) \,dz^{\prime\prime}-2\pi\int_{-z_{0}}^{z_{0}}\rho\left( z^{\prime\prime}\right) \,dz^{\prime\prime}. $$ Therefore the potential takes the form: $$ \phi\left( z\right) =-\int_{-z_{0}}^{z}dz^{\prime}E\left( z^{\prime }\right) =-4\pi\int_{-z_{0}}^{z}dz^{\prime}\int_{-z_{0}}^{z^{\prime}} \rho\left( z^{\prime\prime}\right) \,dz^{\prime\prime}-2\pi\left( z+z_{0}\right) \int_{-z_{0}}^{z_{0}}\rho\left( z^{\prime}\right) \,dz^{\prime}. $$ To simplify the first term I change the order of integrations (integration domain is presented in the figure below): $$ \int_{-z_{0}}^{z}dz^{\prime}\int_{-z_{0}}^{z^{\prime}}\rho\left( z^{\prime\prime}\right) \,dz^{\prime\prime}=\int_{-z_{0}}^{z}dz^{\prime \prime}\int_{z^{\prime\prime}}^{z}\rho\left( z^{\prime\prime}\right) \,dz^{\prime}=\int_{-z_{0}}^{z}\left( z-z^{\prime\prime}\right) \rho\left( z^{\prime\prime}\right) dz^{\prime\prime}. $$


integration domain


Finally, we obtain the following result for the potential: $$ \phi\left( z\right) =-4\pi\int_{-z_{0}}^{z}\left( z-z^{\prime}\right) \rho\left( z^{\prime}\right) dz^{\prime}-2\pi\left( z+z_{0}\right) \int_{-z_{0}}^{z_{0}}\rho\left( z^{\prime}\right) \,dz^{\prime}. $$ One can see that the result you presented is valid only for a neutral liquid: $$ \int_{-z_{0}}^{z_{0}}\rho\left( z^{\prime}\right) \,dz^{\prime}=0. $$
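A quick numerical cross-check of eq. (2) is to compare the single $(z-z')$-weighted integral with a direct double integration of Poisson's equation. This is a sketch of my own (numpy/scipy assumed), using an arbitrary neutral charge profile chosen only for illustration:

import numpy as np
from scipy.integrate import cumulative_trapezoid

# Assumed, illustrative neutral charge profile on [-z0, z0] (Gaussian units)
z0 = 1.0
z = np.linspace(-z0, z0, 2001)
rho = np.sin(np.pi * z / z0)          # integrates to zero -> neutral liquid

# Eq. (2): single integral with the (z - z') weight
phi_single = np.array([
    -4 * np.pi * np.trapz((zi - z[:i + 1]) * rho[:i + 1], z[:i + 1])
    for i, zi in enumerate(z)
])

# Same potential from phi'' = -4*pi*rho with phi(-z0) = 0 and phi'(-z0) = 0
phi_prime = -4 * np.pi * cumulative_trapezoid(rho, z, initial=0.0)
phi_double = cumulative_trapezoid(phi_prime, z, initial=0.0)

print(np.max(np.abs(phi_single - phi_double)))   # ~0, up to discretization error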


everyday life - Explaining the diffraction pattern of a lightbulb filament


This really nice video from the Royal Institution channel on YouTube has a gorgeous shot of the diffraction pattern caused in laser-pointer light by the helically-coiled filament of a light bulb:






There's multiple interesting features going on here and I would be interested in a more in-depth explanation of what causes them, that goes beyond the simple "because diffraction". As explained in the video, a helical coil can be seen roughly as two collections of line segments, which explains the cross-like structure, but there is also



  • light on the outside of the cross but not inside the narrower angle,

  • a definite double-period structure on the lower-left arm of the cross,

  • a bi-periodic structure in the light on the obtuse angle of the cross,

  • an offset between the cross arms at the vertex, I should think, and

  • a global ring structure superposed on everything,


and that's just from a first glance. What causes these features, and what can they say about the structure of the filament?



(If that's too complicated, I would be happy with a good reference to a readable introduction to the diffraction patterns caused by helical structures.)




Friday, 22 February 2019

quantum mechanics - Evaluating propagator without the epsilon trick


Consider the Klein–Gordon equation and its propagator: $$G(x,y) = \frac{1}{(2\pi)^4}\int d^4 p \,\frac{e^{-i p\cdot(x-y)}}{p^2 - m^2} \; .$$


I'd like to see a method of evaluating explicit form of $G$ which does not involve avoiding singularities by the $\varepsilon$ trick. Can you provide such a method?



Answer



Before answering the question more or less directly, I'd like to point out that this is a good question that provides an object lesson and opens a foray into the topics of singular integral equations, analytic continuation and dispersion relations. Here are some references of these more advanced topics: Muskhelishvili, Singular Integral Equations; Courant & Hilbert, Methods of Mathematical Physics, Vol I, Ch 3; Dispersion Theory in High Energy Physics, Queen & Violini; Eden et.al., The Analytic S-matrix. There is also a condensed discussion of `invariant functions' in Schweber, An Intro to Relativistic QFT Ch13d.


The quick answer is that, for $m^2 \in\mathbb{R}$, there's no "shortcut." One must choose a path around the singularities in the denominator. The appropriate choice is governed by the boundary conditions of the problem at hand. The $+i\epsilon$ "trick" (it's not a "trick") simply encodes the boundary conditions relevant for causal propagation of particles and antiparticles in field theory.


We briefly study the analytic form of $G(x-y;m)$ to demonstrate some of these features.



Note, first, that for real values of $p^2$, the singularity in the denominator of the integrand signals the presence of (a) branch point(s). In fact, [Huang, Quantum Field Theory: From Operators to Path Integrals, p29] the Feynman propagator for the scalar field (your equation) may be explicitly evaluated: \begin{align} G(x-y;m) &= \lim_{\epsilon \to 0} \frac{1}{(2 \pi)^4} \int d^4p \, \frac{e^{-ip\cdot(x-y)}}{p^2 - m^2 + i\epsilon} \nonumber \\ &= \left \{ \begin{matrix} -\frac{1}{4 \pi} \delta(s) + \frac{m}{8 \pi \sqrt{s}} H_1^{(1)}(m \sqrt{s}) & \textrm{ if }\, s \geq 0 \\ -\frac{i m}{ 4 \pi^2 \sqrt{-s}} K_1(m \sqrt{-s}) & \textrm{if }\, s < 0. \end{matrix} \right. \end{align} where $s=(x-y)^2$.


The first-order Hankel function of the first kind $H^{(1)}_1$ has a logarithmic branch point at $x=0$; so does the modified Bessel function of the second kind, $K_1$. (Look at the small $x$ behavior of these functions to see this.)


A branch point indicates that the Cauchy-Riemann conditions have broken down at $x=0$ (or $z=x+iy=0$). And the fact that these singularities are logarithmic is an indication that we have an endpoint singularity [eg. Eden et. al., Ch 2.1]. (To see this, consider $m=0$, then the integrand, $p^{-2}$, has a zero at the lower limit of integration in $dp^2$.)


Coming back to the question of boundary conditions, there is a good discussion in Sakurai, Advanced Quantum Mechanics, Ch4.4 [NB: "East Coast" metric]. You can see that for large values of $s>0$ from the above expression that we have an outgoing wave from the asymptotic form of the Hankel function.


Connecting it back to the original references I cited above, the $+i\epsilon$ form is a version of the Plemelj formula [Muskhelishvili]. And the expression for the propagator is a type of Cauchy integral [Musk.; Eden et.al.]. And these notions lead quickly to the topics I mentioned above -- certainly a rich landscape for research.


Is quantum uncertainty principle related to thermodynamics?


I would like to ask a question, but first I would like to say Hello Everybody in a way that plays the system, since some geniuses decided that one should not be able to say hello in a question.


The uncertainty principle in quantum mechanics is well known and considered one of most basic properties of natural reality. The 2nd Law of thermodynamics is also well known and also considered one of the most basic processes of natural reality.


The uncertainty principle uses and is related to Planck's constant. Planck's constant has the dimensions of action and, in a statistical mechanics approach, also relates nicely to the partitioning of phase space, providing the basic measure for the entropy functional (this answer provides a nice outline of this).


Apart from that, there are relatively recent papers which relate the Heisenberg Uncertainty Principle in quantum mechanics directly and intuitively to the 2nd Law of Thermodynamics.


Is this relation correct? And if so can we derive one from the other?



Thank you


PS. One can also check this question, which although not the same, is related in an interesting way.


UPDATE:


anna's answer is accepted since, by mentioning the derivation of (part of) the 2nd law from unitary dynamics, it answers the question at least in one way. Please consider this as still open so you can add another answer. There are more alternatives (one of which is my stance, i.e. thermodynamics -> uncertainty).



Answer



You say yourself:



The uncertainty principle in quantum mechanics is well known and considered one of most basic properties of natural reality.



In fact quantum mechanics and its postulates and laws are the underlying framework on which any classical theory is built.



The "laws" of classical theories emerge from the underlying quantum mechanical framework. In the paper you quote they claim that :



More precisely, we show that violating the uncertainty relations in quantum mechanics leads to a thermodynamic cycle with positive net work gain, which is very unlikely to exist in nature.



As an experimentalist I am in no position to check whether their conclusion is correct; this is the work of peer review in journals, and it has been accepted in Nature and, I hope, peer reviewed. Well done if it is correct, because it is one more validation of the underlying quantum mechanical framework.


I do not know whether it is related to the statement in the wiki article :



In statistical thermodynamics, the second law is a consequence of unitarity in quantum mechanics



It seems from the references to be connected to the many-worlds interpretation, so this new derivation might be a more mainstream connection of the quantum mechanical framework to the second law.



fluid dynamics - Flames with no gravity?


I was watching "Solaris" (Tarkovsky) today, and noticed this: at some moment the space station changed orbit and the people inside experienced zero gravity. At that moment, a candlestick floated past, with the candles burning (see).



But in our environment the flames go upward because the hot air is less dense, and that effect should disappear when no gravity is present, am I right? How would a candle flame look then?



Answer



Indeed, without gravity, and thus without buoyancy, there is no preferred direction for the candle flame. With gravity, like you said, the products of the combustion are much lighter than the unburned air and go upwards. Fresh air is convected from the surroundings at the bottom of the flame to react with the fuel. That is why a longer wick will burn better than one that just peeks out: it's easier to bring in the fresh air (and thus oxygen). Without gravity, the immediate surrounding oxygen would burn initially, but there is no convection to get the products out of the way and fresh oxygen close to the wick where the fuel vapor is. Mass diffusion takes over, mixing the products with fresh oxygen coming from the outside, but it is a lot slower than convection. Since there is no preferential direction (i.e. no gravity field), the resulting flame is a weak, almost spherical flame.


Candle flame with or without gravity


If you want to know more about this, you can google "flame balls", which are NOT a candle flame without the wick: it is not a blob of fuel reacting with the surrounding oxygen, but instead how a combustible mixture of premixed fuel and air/oxygen reacts when ignited locally. The combustion products are inside the ball while the reactants diffuse to the flame. A good starting point is this old NASA page (the picture above comes from there).


gravity - Velocity required for Horizontal Rain



Related to my previous question - Change in appearance of liquid drop due to gravity...


Ok, I think we all have noticed this practical phenomenon (a kind of illusion though)... During rain (while we're at rest), raindrops appear to fall as more or less perfect vertical lines (assuming there's no wind). And when we're in motion, those "lines of rain" appear inclined at some angle $\theta$. That inclination depends on our velocity (what our eyes perceive at our speed). A friend of mine drove his car at some 120 km/h. Unfortunately, I didn't observe any horizontal lines.


So, is there any way we could observe the "rain lines" as horizontal? What would be the velocity required for it? I think we don't have to break any barriers for that perception :-)


I googled it, but I can't find anything regarding these lines. Any explanation or perhaps a link would be good.


(As the climate will be rainy for at least 2 months, I could test it out on some grounds.)



Answer



Let's say your eye integrates everything it sees over a time $\Delta t$. In that time, a raindrop will have fallen vertically a distance $v_\text{rain} \Delta t$. It will also appear, to you, to have moved horizontally a distance $v_\text{you} \Delta t$. You can draw a right triangle with these two distances as the legs. The hypotenuse is the perceived raindrop streak.


Let $\theta$ be the angle opposite the horizontal side. This measures deflection away from vertical. We have $$ \theta = \tan^{-1}\left(\frac{v_\text{you}}{v_\text{rain}}\right) $$ from basic trigonometry. This is independent of $\Delta t$. $\theta$ asymptotically approaches $90^\circ$ as your speed approaches infinity, while for low speeds, $\theta$ is approximately proportional to your velocity. The length $l$ of the streak follows from Pythagoras: $$ l = \Delta t \sqrt{v_\text{you}^2 + v_\text{rain}^2}.$$


Of course, both these formulas would need to be modified as your velocity approached the speed of light.
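If you want to plug in numbers, here is a minimal Python sketch of those two formulas (the ~9 m/s raindrop fall speed and the eye-integration time dt are assumed illustrative values, not measured ones):

import math

def streak(v_you, v_rain, dt=0.1):
    """Return (angle from vertical in degrees, perceived streak length in m)
    for observer speed v_you and raindrop fall speed v_rain (both in m/s),
    assuming the eye integrates over dt seconds."""
    theta = math.degrees(math.atan2(v_you, v_rain))
    length = dt * math.hypot(v_you, v_rain)
    return theta, length

# Assumed numbers: ~9 m/s terminal speed for large raindrops, speeds in km/h.
v_rain = 9.0
for kmh in (0, 60, 120, 300):
    v_you = kmh / 3.6
    theta, l = streak(v_you, v_rain)
    print(f"{kmh:4d} km/h -> {theta:5.1f} deg from vertical, streak ~ {l:.2f} m")

With these assumed numbers, 120 km/h already gives roughly 75 degrees from vertical, but the angle only approaches 90 degrees asymptotically, so the streaks never become exactly horizontal at any finite speed.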


Does quantum entanglement arise from quantum theory or is it merely an experimental observation?


I assume that entanglement emerges from quantum mechanics, because the idea was around before experimental verification (e.g. the EPR paper). How, then, does entanglement emerge from the theory? (Please provide a less technical answer if possible.)



Answer



Entanglement is simply a particular kind of quantum multiparticle state: it happens to be the "most common" kind of state in the sense that if you choose a random quantum superposition from a multiparticle state space, it will almost surely be (in the measure-theoretic sense) entangled, so it's a little curious why entanglement takes some effort to observe in the laboratory.


Let's look at the technical details through a simple example. We think of several quantum "particles", each with a three-dimensional quantum state space: let's take two of them. Let's number each individual particle's basis states $1,\,2,\,3$, so a general superposition for one particle is the vector $\alpha\left.\left|1\right.\right>+\beta\left.\left|2\right.\right>+\gamma\left.\left|3\right.\right>$.


The quantum state space of the combined system has nine, not six, basis states. Let $\left.\left|j,\,k\right.\right>$ stand for the basis state where the first particle is in basis state $j$, the second in basis state $k$. You should be able to see that there are nine such basis states: $\left.\left|1,\,1\right.\right>,\,\left.\left|1,\,2\right.\right>,\,\left.\left|1,\,3\right.\right>,\,\left.\left|2,\,1\right.\right>,\,\left.\left|2,\,2\right.\right>,\,\cdots,\,\left.\left|3,\,3\right.\right>$.


Some states are factorisable, that is they can be written in the form $\psi_1\otimes\psi_2$ where $\psi_1$ and $\psi_2$ are individual particle quantum states. So, let $\psi_1=\alpha\left.\left|1\right.\right>+\beta\left.\left|2\right.\right>+\gamma\left.\left|3\right.\right>$ and $\psi_2 = a\left.\left|1\right.\right>+b\left.\left|2\right.\right>+c\left.\left|3\right.\right>$. Then, on noting that in our notation above we have $\left.\left|j\right.\right>\otimes\left.\left|k\right.\right>\stackrel{def}{=}\left.\left|j,\,k\right.\right>$


$$\psi_1\otimes\psi_2 = \alpha\,a\,\left.\left|1,\,1\right.\right>+\alpha\,b\,\left.\left|1,\,2\right.\right>+\cdots+\gamma\,b\,\left.\left|3,\,2\right.\right>+\gamma\,c\,\left.\left|3,\,3\right.\right>\tag{1}$$


The point about this state is that if we measure particle 2 and force it into its basis state, say $\left.\left|2\right.\right>$, then we know that the state of particle 1 is determined by the part of the superposition in (1) that contains only basis vectors of the form $\left.\left|j,2\right.\right>$, because we know particle 2 is in state 2. So, from (1), the system must be in the state $\psi_1\otimes\left.\left|2\right.\right>$ (up to normalisation), i.e. our knowledge about particle 1 has not changed with our measurement. Our measurement tells us nothing about particle 1, so particle 1 is independent of particle 2.



Now let's choose any old superposition: let's choose:


$$\frac{1}{\sqrt{2}}\left.\left|1,\,1\right.\right>+\frac{1}{\sqrt{2}}\left.\left|2,\,2\right.\right>\tag{2}$$


and now let's measure particle 2. If our measurement forces particle 2 into state $\left.\left|2\right.\right>$, then from (2) we know particle 1 is in state $\left.\left|2\right.\right>$, because the only term in (2) with particle 2 in state 2 has particle 1 in state 2. Likewise, if our measurement forces particle 2 into state $\left.\left|1\right.\right>$, then we know for the same reasons that particle 1 must be in state $\left.\left|1\right.\right>$. Our measurement of particle 2 influences particle 1.


I would advise you to work through this example yourself in detail. You will then understand the following: measurement of particle 2 influences the state of particle 1 if and only if the initial state of the two particle system is not factorisable in the sense above.
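If you want a quick numerical check of that factorisability statement, here is a minimal sketch (NumPy assumed; the particular amplitudes are made up): arrange the nine amplitudes of $\left.\left|j,\,k\right.\right>$ into a $3\times 3$ matrix; the state is factorisable exactly when that matrix has rank one, i.e. only one nonzero singular value.

import numpy as np

def is_factorisable(c, tol=1e-12):
    """c[j, k] are the amplitudes of |j, k>. A product state psi1 (x) psi2
    has c = outer(psi1, psi2), i.e. a rank-one coefficient matrix."""
    s = np.linalg.svd(c, compute_uv=False)
    return np.sum(s > tol) <= 1

# Product state as in (1): c_jk = alpha_j * a_k  -> factorisable
psi1 = np.array([0.3, 0.5, 0.4 + 0.2j]); psi1 /= np.linalg.norm(psi1)
psi2 = np.array([0.1, 0.9, 0.2]);        psi2 /= np.linalg.norm(psi2)
print(is_factorisable(np.outer(psi1, psi2)))   # True

# State (2): (|1,1> + |2,2>)/sqrt(2)  -> entangled
c = np.zeros((3, 3), dtype=complex)
c[0, 0] = c[1, 1] = 1 / np.sqrt(2)
print(is_factorisable(c))                      # False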


So you can see that entanglement is a natural theoretical consequence of the tensor product, which in turn is really the only plausible way one would expect many particle systems to behave. Experiment has reproduced and confirmed this theoretical behaviour.


You are right insofar as entanglement was theoretically foretold and discussed in the EPR paper and also by Schrödinger shortly afterwards. Our word "entanglement" was Schrödinger's own translation of his name for the phenomenon, "Verschränkung".


NOTE: educators who use two two-dimensional spaces to illustrate the tensor product deserve to be cut up into teeny-tiny little bits and be forced to spend, in purgatory, the total time that all their students have wasted in dead-end understanding, or at least in some suitably bad place if you don't believe in purgatory.


'schrodinger' picture in measurement based topological quantum computation


I am looking at the measurement processes in topological quantum computation (TQC) as mentioned here http://arxiv.org/abs/1210.7929 and in other measurement based TQC papers. Let's say I start with pairs of Majorana fermions 1+2 and 3+4 and both pairs have zero topological charge to begin with such that I can write the state $\left|0\right\rangle _{12}\left|0\right\rangle _{34}$. Suppose now I want to write this in a different basis where 1 and 3 form one pair and 2 and 4 one pair. I think I could write this as $\alpha \left|0\right\rangle _{13}\left|0\right\rangle _{24} +\beta \left|1\right\rangle _{13}\left|1\right\rangle _{24}$ but how do I determine $\alpha$ and $\beta$ ? I want to work in this picture because it looks simpler instead of following anyonic rules.



Answer



For four Majorana zero modes, if the total topological charge is $1$ there are two states, $|0\rangle_{12}|0\rangle_{34}$ and $|1\rangle_{12}|1\rangle_{34}$ ($i\gamma_1\gamma_2\cdot i\gamma_3\gamma_4=1$). So this system can be mapped to a qubit, with $i\gamma_1\gamma_2=\sigma_z$, $i\gamma_1\gamma_3=\sigma_x$, $i\gamma_2\gamma_3=\sigma_y$ (I did not check the signs carefully). Now what you want is just to do a basis transformation and rewrite the state in a basis which diagonalizes $i\gamma_1\gamma_3=\sigma_x$, which should be rather straightforward.
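A minimal numerical sketch of that last step, assuming the $\sigma$ mapping quoted above and ignoring sign/phase conventions (NumPy assumed):

import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1,  0]], dtype=complex)

# |0>_12 |0>_34 : the +1 eigenstate of sigma_z = i*gamma_1*gamma_2
state = np.array([1, 0], dtype=complex)

# The eigenbasis of sigma_x = i*gamma_1*gamma_3 labels the (13)(24) pairing
vals, vecs = np.linalg.eigh(sigma_x)
coeffs = vecs.conj().T @ state          # amplitudes in the new basis
for v, c in zip(vals, coeffs):
    print(f"sigma_x eigenvalue {v:+.0f}: |amplitude|^2 = {abs(c)**2:.3f}")
# Both probabilities come out 1/2, i.e. alpha = beta = 1/sqrt(2) up to phases.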



More generally, this kind of basis transformation is encoded in the $F$ symbols of the anyon model.


Thursday, 21 February 2019

electric circuits - What causes a resistor to heat up?


In the following video clip at 2:10,


http://www.youtube.com/v/YslOUw5oueQ ,


Professor Walter Lewin talks about a misconception people have that the energy going through a wire to a resistor is in the form of kinetic energy of electrons. He proves this cannot be so as follows. The current density is $J = I/A = Vne$, where $V$ is the drift velocity (or average velocity), $n$ is the number of electrons per volume, $e$ is the charge of an electron, and $A$ is the cross-sectional area of the wire.


We can make $A$ as large as we want (keeping the current constant), and therefore $V$ will have to become very small, and the electrons will have very little kinetic energy. Yet the resistor (say a light bulb) dissipates the same amount of power $P = I^2 R$. Therefore, it must be that the form of energy is not the kinetic energy of electrons.


My first question is, if we make $A$ larger, why does it have to be that $V$ goes down? Perhaps $n$ goes down - we increased the volume (by increasing the cross-sectional area), so there should be fewer electrons per volume?


My second question, my main question is, if the energy is not the kinetic energy of the electrons, what does in fact bring energy to the resistor and how does it heat up?



Answer




my main question is, if the energy is not the kinetic energy of the electrons, what does in fact bring energy to the resistor and how does it heat up?




We assume steady state operation.


The drift velocity of the electrons entering the resistor must equal the drift velocity of the electrons leaving the resistor. This follows from the fact that the current into the resistor equals the current out of the resistor.


However, the electrons leaving the resistor have less potential energy than those entering the resistor. This follows from the fact that there is an electric field through the structure of the resistor and, thus, there is a potential difference between the ends of the resistor.


The electric field within the resistor structure accelerates (does work on) the electrons, increasing their kinetic energy; however, this energy is quickly given up to the structure of the resistor via collisions, and the resistor gets hotter.
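To put rough numbers on that point (a back-of-the-envelope sketch; the copper carrier density, the 1 mm^2 cross-section and the 1 A current are assumed illustrative values, not taken from the lecture):

# Rough estimate of drift speed and drift kinetic-energy flux in a copper wire.
# All numbers below are assumed illustrative values.
e = 1.602e-19          # electron charge, C
m_e = 9.109e-31        # electron mass, kg
n = 8.5e28             # conduction electrons per m^3 in copper (approx.)
I = 1.0                # current, A
A = 1.0e-6             # cross-section, m^2 (1 mm^2)

v_drift = I / (n * e * A)                  # from J = I/A = n e V
ke_per_electron = 0.5 * m_e * v_drift**2   # drift kinetic energy per electron
ke_flux = (I / e) * ke_per_electron        # electrons per second times drift KE

print(f"drift speed   ~ {v_drift:.2e} m/s")   # tens of micrometres per second
print(f"drift KE flux ~ {ke_flux:.2e} W")     # utterly negligible
print("compare with P = I^2 R, e.g. 1 W for R = 1 ohm")

The drift kinetic energy carried into the resistor is many orders of magnitude below the dissipated power, which is exactly Lewin's point.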


On a more fundamental level, the energy flows from the battery to the resistor through the space around the conductors via the electromagnetic field. See, for example, William Beaty's description of energy flow in a simple circuit here.


cosmology - Has the speed of light changed over time?


Could someone judge my (stoner) hypothesis that the speed of light has changed over time -- i.e. as the universe has expanded in volume, light has slowed down, perhaps going back as far as the big bang, when it was infinitely fast and there was no time because everything happened at once, etc. Thinking that the speed at which information can propagate through the universe is linked to the size of it seems intuitive to me. My question -- is there an easy disproof of this? Would Einstein have to be wrong? Does it violate anything supposedly more fundamental, such as quantum or string theories? Do any current experiments invalidate it? If not, can you show me in any case why you think it's unlikely?



I'm accepting Mark M.'s answer but will post this here because there is a character limit on comments


@Mark M thanks, good answer, but as someone who has only read some popular physics, and should leave this to the experts, I'm still muddled in my personal theory. I don't see why you should need two units to measure the speed of light. The thing I have a hard time wrapping my head around is the relation of time and distance. They seem like they could fundamentally be the same thing. If you say time is measured fundamentally by the vibration of some quantum object in space... why can't we just measure that vibration distance as the constant... I'll repeat myself to try to be clear... there is a certain minimum distance that particles have to go to interact with each other... if it wasn't vibrating there wouldn't be time, it's what creates the illusion of time... so instead of talking about speed or c as distance/time... can't we simply talk about that distance a quantum object vibrates... I'll lead up to my point... perhaps there need be only one constant here, and that is the physical size of the universe. A tiny metal tuning fork doesn't appear to be vibrating at all, but if you blew it up to the size of the Empire State Building the metal rods would move from window to window. Perhaps as our universe expanded in size the length of that minimum vibration (perhaps infinite at point zero) would have expanded, and therefore created the illusion of time and the speed of light, which, as the universe expands, will continue slowing down. Perhaps we are like a big balloon that has been blown up, and all the fields/particles-without-size are vibrating more and more in that space... am I missing something obvious here?




Answer



There is no meaningful way to test if the speed of light varies - that's because it's dimensionful, i.e. it's measured in units.


To see why, let's say we use units in which distance is measured in multiples of the circumference of the electron's orbit in the ground state of Bohr's hydrogen atom, and the unit of time is its orbital period. This will give you roughly 137, which is the inverse of the fine structure constant, defined as $e^2 \over \hbar c$. So, we can see that it isn't possible to determine whether the value of the speed of light was different, since one of the other constants in the FSC (the electron charge or the reduced Planck constant) could have changed.
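For concreteness, here is a minimal sketch of that dimensionless number (Python; in SI units the same fine structure constant is written $e^2/(4\pi\varepsilon_0\hbar c)$, and the constants below are approximate CODATA values):

import math

# Approximate SI values
e = 1.602176634e-19        # elementary charge, C
hbar = 1.054571817e-34     # reduced Planck constant, J s
c = 2.99792458e8           # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   ~ {alpha:.9f}")    # ~ 0.0072973...
print(f"1/alpha ~ {1/alpha:.3f}")  # ~ 137.036

# The point of the answer: only this dimensionless ratio is meaningful.
# Rescaling c while rescaling e and/or hbar appropriately leaves alpha unchanged.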


However, it is meaningful to ask whether a dimensionless constant has changed, one that isn't measured in units. Some examples are the above mentioned fine structure constant, and the cosmological constant. Also, particle masses are fundamental constants - changing another constant doesn't affect them.


So, rather than asking if the speed of light varies, a better question is to ask if the fine structure constant varies (since it is dimensionless, it has no units). There have been claims that the fine structure constant may vary (here and here, among many others). However, this certainly isn't an accepted result.


For more, see the Usenet FAQ on dimensionless constants:


http://math.ucr.edu/home/baez/constants.html


Addition


Rather than varying over time, let's think of the case in which c varies over space. So, a group of scientists ventures on a rocket to a distant part of the galaxy to determine if the speed of light is different there. They will need to use the same units that the earth scientists are using - we could use the above units, the vibrations of an atom for time, whatever you want. Let's say they measure a different value using the agreed units.


Now, imagine that a different group of scientists was going to test if the length of some particular rod was different in that same region of the galaxy. They decide to see how many vibrations of the cesium atom it takes light to travel the rod. Based on their experiment, they come to the conclusion that the length of the rod is larger in this other region, or that the cesium atom vibrates slightly faster.



When both groups publish their findings, they disagree - the first group tells the second group they're wrong because they based their measurements on the speed of light, which they found varies. However, group two asserts that the first group is mistaken, since they found that the length of the measuring rod and frequency of the vibrations of the cesium atom were both different.


So, you can see that asserting that a dimensionful constant has varied is meaningless - since dimensionful constants are ratios of other constants, it is equally valid to say those other constants varied. Not only is it impossible to determine whether they have changed, but the question itself doesn't have an answer. Finding different values for dimensionful constants can be interpreted in a variety of ways. For example, you can claim that the other constants in the fine structure constant had varied, not the speed of light.


newtonian gravity - Why is the gravitational constant so difficult to measure?


The gravitational constant seems to be known only to very low precision. For example, in the Wikipedia article recent measurements are given as having significands of 6.67 and 6.69, a difference of about 3 parts in 1000. I don't understand why astronomical measurements cannot be used to obtain a much more accurate value. The explanation in Wikipedia, that the force is "weak", seems like a vague answer to me.


This imprecision is a problem for me because I would like to make a simulation model of the solar system based on gravitational attraction, but with such an imprecise constant, I don't see how I can do this to any degree of useful accuracy.




classical mechanics - D'Alembert's Principle and the term containing the reversed effective force



For our Classical Mechanics class, I'm reading Chapter 1 of Goldstein, et al. Now I come across Eq. (1.50). To put it in context:



$$\begin{align*} \sum_i{\dot{\mathbf{p}_i} \cdot \delta\mathbf{r}_i}&=\sum_i{m_i\ddot{\mathbf{r}}_i \cdot \delta{\mathbf{r}_i}}\\ &=\sum_{i,j}{m_i\ddot{\mathbf{r}}_i} \cdot \frac{\partial\mathbf{r}_i}{\partial q_j} \delta q_j \end{align*}$$


Consider now the relation Eq. (1.50): $$\begin{align*} \sum_{i,j}{m_i\ddot{\mathbf{r}}_i} \cdot \frac{\partial\mathbf{r}_i}{\partial q_j}&= \sum_i{\left[ \frac{d}{dt} \left( m_i\dot{\mathbf{r}}_i \cdot \frac{\partial\mathbf{r}_i}{\partial q_j} \right) - m_i\dot{\mathbf{r}}_i \cdot \frac{d}{dt} \left( \frac{\partial \mathbf{r}_i}{\partial q_j} \right) \right]} \end{align*}$$



I'm at a loss for how he resolved it that way. He goes on to explain that we can interchange the differentiation with respect to $t$ and $q_j$. My question is: Why is there a subtraction in Eq. (1.50)?



Answer




Why is there a subtraction in Eq. (1.50)?




Goldstein is using the Leibniz rule for differentiation of a product


$$ \frac{d (fg)}{dt}~=~\frac{d f}{dt}g + f\frac{d g}{dt} $$


with


$$f=m_i\dot{\mathbf{r}}_i $$


and


$$g=\frac{\partial \mathbf{r}_i}{\partial q_j}. $$


The minus is caused by moving a term to the other side of the equation.
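If it helps, here is a minimal symbolic check of that rearrangement for a single term, using SymPy with a one-dimensional illustrative choice of $r(q,t)$ (the function and symbol names here are mine, not Goldstein's):

import sympy as sp

t = sp.symbols('t')
m, qs = sp.symbols('m q_s', positive=True)   # qs is a placeholder symbol for q
q = sp.Function('q')(t)                      # one generalized coordinate q(t)

# An arbitrary smooth example r(q, t); any other choice works the same way.
r_of = lambda Q: Q**2 * sp.sin(t) + sp.exp(t) * Q
r = r_of(q)

drdq = r_of(qs).diff(qs).subs(qs, q)   # partial derivative dr/dq, then q -> q(t)
r_dot = r.diff(t)                      # total time derivative of r

lhs = m * r.diff(t, 2) * drdq                                   # m r_ddot dr/dq
rhs = (m * r_dot * drdq).diff(t) - m * r_dot * drdq.diff(t)     # Eq. (1.50) form

print(sp.simplify(lhs - rhs))          # 0: the two sides agree, by the product rule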


Wednesday, 20 February 2019

general relativity - What is a manifold?


For complete dummies when it comes to space-time, what is a manifold and how can space-time be modelled using these concepts?




quantum field theory - Do virtual particles actually physically exist?


I have heard that virtual particles pop in and out of existence all the time, the most notable example being the pairs that pop up beside black holes, one of which gets pulled away. But wouldn't this actually violate the conservation of energy?



Answer



Ever since Newton and the use of mathematics in physics, physics can be defined as a discipline where nature is modeled by mathematics. One should have clear in mind what nature means and what mathematics is.


Nature we know by measurements and observations. Mathematics is a self consistent discipline with axioms, theorems and statements having absolute proofs, mathematically deduced from the axioms. "Existence" for physics means "measurable"; for mathematics, "possible to be included in the self consistent theory".


Modern physics has used mathematical models to describe the measurements and observations in the microcosm of atoms, molecules and elementary particles, adding postulates that connect the mathematical calculations with the physical observables.


The dominant mathematical model is the field theoretical model, which simplifies the mathematics using Feynman diagrams.


These diagrams represent terms in an expansion of the desired solution, each term has a diminishing contribution to the cross section of the interaction. The diagram below would be the dominant term, as the next one would be more complicated and therefore smaller by orders of magnitude.


feynman diagram



To each component of the diagram there corresponds, one to one, a mathematical formula which, integrated properly, will give a prediction for a measurable quantity. In this case, the probability of repulsion when one electron scatters off another.


This diagram, for example, has as measurable quantities the incoming energy and momentum of the electrons (four vectors) and the outgoing four vectors. The line in between is not measurable, because it represents a mathematical term that is integrated over the limits of integration, and within the integral energy and momentum are independent variables. The line has the quantum numbers of the photon, though not its mass, and so it is called a "virtual photon". It does not obey the energy-momentum rule, which says that:


$$\sqrt{P\cdot P} = \sqrt{E^2 - (pc)^2} = m_0 c^2$$


The photon has mass zero.


Through the above relation which connects energy and momentum through the rest mass, the un-physical mass of the virtual line depends on one variable, which will be integrated over the diagram; it is often taken as the momentum transfer.
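As a concrete illustration of how an internal line fails the mass-shell rule, here is a minimal kinematic sketch for elastic electron-electron scattering in the centre-of-momentum frame (natural units, $c=1$; the momentum and scattering angle are made-up illustrative values):

import numpy as np

def minkowski_sq(p):
    """p = (E, px, py, pz); returns E^2 - |p|^2 (natural units, c = 1)."""
    return p[0]**2 - np.dot(p[1:], p[1:])

m_e = 0.000511      # electron mass, GeV
p_mag = 0.5         # each electron's momentum in the CM frame, GeV (illustrative)
E = np.sqrt(p_mag**2 + m_e**2)

p_in = np.array([E, 0.0, 0.0, p_mag])              # incoming electron
theta = np.radians(30)                             # illustrative scattering angle
p_out = np.array([E, p_mag*np.sin(theta), 0.0, p_mag*np.cos(theta)])  # elastic: same |p|

q = p_in - p_out                                   # four-momentum of the exchanged line
print(f"electron mass^2 check: {minkowski_sq(p_in):.6e} GeV^2")  # ~ m_e^2, on shell
print(f"virtual photon q^2   : {minkowski_sq(q):.6e} GeV^2")     # negative, not 0

The external electrons satisfy $P\cdot P = m_e^2$, but the exchanged line has $q^2 \neq 0$, i.e. it is off the photon mass shell, which is exactly what "virtual" means here.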


Quantum number conservation is a strong rule and is the only rule that virtual particles have to obey.


There are innumerable Feynman diagrams one can write, and the internal lines, considered as particles, would not conserve energy and momentum if they were on mass shell. These diagrams include the vacuum fluctuations you are asking about, where by construction there are no outgoing measurable lines in the Feynman diagrams describing them. They are useful/necessary in summing up higher order calculations in order to get the final numbers that will predict a measurable value for some interaction.


Thus virtual particles exist only in the mathematics of the model used to describe the measurements of real particles. To coin a word, virtual particles are "particlemorphic" ( :) ), having a form like a particle but not being a particle.


cosmology - What is the theoretical limit for farthest we can see back in time and distance?


13.2 billion years ago the universe was rather small, having started only half a billion years earlier. Today, with the help of the Hubble Space Telescope, we are able to capture the light of galaxies emitted at that time.


The point at which Earth exists now must have been quite close to those galaxies back then. If so, why is it only now, 13.2 billion years later, that the light from those galaxies has reached us? Or in other words, are we sure that the light we are seeing from those galaxies indeed travelled for 13.2 billion years?


It looks as if there was a race between our point running away from those galaxies (with the expansion of the universe and space) and the light that was emitted at that time, and only now has that light reached and overtaken us. But if that is so, then wouldn't it put a limit on the oldest light we can see, no matter how powerful the telescope (even one more powerful than the James Webb Space Telescope)? This should be expected, because just after the Big Bang the light emitted by all objects must have already overtaken all other objects, including the location of Earth. Therefore we will never see light that old (from close to the time of the Big Bang), no matter how powerful the telescope. If this is so, what is the theoretical limit on how far back in the past we can see?




quantum field theory - What is $\phi(x)|0\rangle$?


Suppose for instance that $\phi$ is the real Klein-Gordon field. As I understand it, $a^\dagger(k)|0\rangle=|k\rangle$ represents the state of a particle with momentum $k\,.$ I also learned that $\phi^\dagger(x)$ acts on the vacuum, $\phi^\dagger(x)|0\rangle\,,$ creating a particle at $x\,.$ But it seems that $\phi^\dagger(x)|0\rangle$ and $\phi^\dagger(y)|0\rangle$ are not even orthogonal at equal times, so I don't see how this is possible. So what is it exactly? And what about fields that aren't Klein-Gordon, i.e. the electromagnetic potential?



Edit: As I now understand it, $\phi(x)|0\rangle$ doesn't represent a particle at $x$, but can be interpreted as a particle most likely to be found at $x$ upon measurement, and unlikely to be found outside a radius of one Compton wavelength (by analyzing $\langle 0|\phi(y)\phi(x)|0\rangle$). So taking $c\to\infty\,,$ $\phi(x)|0\rangle$ represents a particle located at $x\,,$ and I suppose experiments are generally carried out over distances much longer than the Compton wavelength, so for experimental purposes we can regard $\phi(x)|0\rangle$ as a particle located at $x\,.$ Is this the case? If so, it's interesting that this doesn't seem to be explained in any QFT books I've seen.
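If it helps to see the Compton-wavelength statement quantitatively: for a free real scalar of mass $m$, the equal-time amplitude $\langle 0|\phi(y)\phi(x)|0\rangle$ at spatial separation $r$ is the standard free-field result $\frac{m}{4\pi^2 r}K_1(mr)$, which falls off like $e^{-mr}$ beyond about one Compton wavelength. A minimal numerical sketch (SciPy assumed; treat the exact prefactor as quoted, not derived here):

import numpy as np
from scipy.special import k1   # modified Bessel function K_1

def equal_time_amplitude(r, m=1.0):
    """<0|phi(y)phi(x)|0> at equal times and spatial separation r for a free
    real scalar of mass m (natural units): m/(4 pi^2 r) K_1(m r).
    Standard free-field result, quoted as an assumption of this sketch."""
    return m / (4 * np.pi**2 * r) * k1(m * r)

for r in [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]:   # r in units of 1/m (Compton wavelength)
    print(f"r = {r:5.1f}/m : amplitude ~ {equal_time_amplitude(r):.3e}")
# The amplitude is sizeable inside a Compton wavelength and dies off ~ e^{-m r} outside.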




electromagnetism - Electromagnetic tensor in cylindrical coordinates from scratch


I want to calculate the electromagnetic tensor components in cylindrical coordinates. Suppose I did not know that those components are given in Cartesian coordinates by $$(F^{\mu \nu})= \begin{pmatrix} 0 & E_x & E_y & E_z \\ -E_x & 0 & B_z & -B_y \\ -E_y & -B_z & 0 & B_x \\ -E_z & B_y & -B_x & 0 \end{pmatrix}.$$


I want to derive the result in the same manner I did in the Cartesian coordinates case, i.e., using that $F^{ \mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$, where $A^\alpha=(V,\vec{A})$, $\vec{B} = \nabla \times \vec{A}$ and $\vec{E} = -\nabla V - \partial \vec{A} / \partial t$. Using the formulas for curl and gradient in cylindrical coordinates, we find $$ \vec{E} = - \left( \frac{\partial V}{\partial r} + \frac{\partial A_r}{\partial t} \right)\hat{r} \ - \left( \frac{1}{r}\frac{\partial V}{\partial \phi} + \frac{\partial A_\phi}{\partial t} \right)\hat{\phi} - \left( \frac{\partial V}{\partial z} + \frac{\partial A_z}{\partial t} \right)\hat{z} $$ and $$ \vec{B} = \left( \frac{1}{r}\frac{\partial A_z}{\partial \phi} - \frac{\partial A_\phi}{\partial z} \right)\hat{r} \ +\left(\frac{\partial A_r}{\partial z} - \frac{\partial A_z}{\partial r} \right)\hat{\phi} \ +\frac{1}{r}\left(\frac{\partial (r A_\phi)}{\partial r} - \frac{\partial A_r}{\partial \phi} \right)\hat{z}. \ $$ The invariant interval is given by $ds^2 = -dt^2 + dr^2 + r^2 d\phi^2 + dz^2$ (with $c=1$). Therefore, the metric tensor reads $$(g_{\mu \nu})= \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & r^2 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix},$$ and its inverse is $$(g^{\mu \nu})= \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1/r^2 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}.$$


This implies that $\partial^0 = -\partial_0$, $\partial^1 = \partial_1$, $\partial^2 = \frac{1}{r^2}\partial_2$ and $\partial^3 = \partial_3$.


So, for example, $$ F^{ 01} = \partial^0 A^1 - \partial^1 A^0 = -\partial_0 A^1 - \partial_1 A^0 = -\frac{\partial A_r}{\partial t}-\frac{\partial V}{\partial r} = E_r, $$ which is reassuring. Now, $$ F^{02} = \partial^0 A^2 - \partial^2 A^0 = -\partial_0 A^2 - \frac{1}{r^2}\partial_2 A^0 = -\frac{\partial A_\phi}{\partial t}-\frac{1}{r^2}\frac{\partial V}{\partial \phi}. $$ However, I cannot identify this quantity with any component of the electric field. This last expression looks almost like $E_\phi$, except for an extra $\frac{1}{r}$ multiplying $\partial V / \partial \phi$. What went wrong here?



Answer



The problem is that there is a mismatch between the vector basis you are using to write the 4-vector potential (the orthonormal basis $\hat{r}$, $\hat{\phi}$, $\hat{z}$) and the coordinate basis associated with your metric, whose basis vectors are not unit vectors. In the orthonormal basis the metric is just the Minkowski one, since there is no curvature here; the only difference between the two bases is a factor of $1/r$ in the $\phi$ component.


Therefore, in order to be consistent, you need to replace $A^2 \rightarrow \frac{A_{\phi}}{r}$. Then, you will see that $F^{02}=\frac{E_{\phi}}{r}$, which, again, is consistent with your metric.
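A minimal SymPy sketch of that fix (the function names are illustrative):

import sympy as sp

t, r, phi, z = sp.symbols('t r phi z')
V     = sp.Function('V')(t, r, phi, z)
A_phi = sp.Function('A_phi')(t, r, phi, z)   # physical (orthonormal-basis) component

# Physical E_phi from the cylindrical gradient, as in the question:
E_phi = -(1/r) * sp.diff(V, phi) - sp.diff(A_phi, t)

# Coordinate components of the 4-potential: A^0 = V, A^2 = A_phi / r
A0, A2 = V, A_phi / r

# F^{02} = -d_0 A^2 - (1/r^2) d_2 A^0   (using d^0 = -d_0, d^2 = d_2 / r^2)
F02 = -sp.diff(A2, t) - (1/r**2) * sp.diff(A0, phi)

print(sp.simplify(F02 - E_phi / r))   # 0: F^{02} = E_phi / r, as stated above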


Tuesday, 19 February 2019

quantum field theory - The physicality of the photon propagator


The equation for the photon propagator is straightforward $$ D_{ij} = \langle 0 |T \{ A_{i}(x')A_{j}(x) \}|0 \rangle $$ However, $A_{i}(x)$ is gauge-dependent and therefore unphysical (in the arguable sense). Then, since the propagator is dependent on the vector potential, the propagator is unphysical. Sadly, my whole understanding of what amplitudes mean may be skewed, but I would assume the probability amplitude for a photon to propagate between $x$ and $x'$ is something we would want to be gauge-independent.


Edit:


I guess I wasn't clear enough. By computing the probability amplitude for a process, we obtain a complex number that, when multiplied by its complex conjugate, gives a probability for that process to occur (when normalized). Here, the physical process is propagation, and the probability is $|\langle 0 |T \{ A_{i}(x')A_{j}(x) \}|0 \rangle|^2$. However, this probability is gauge dependent, and hence the usual physical interpretation of $|\langle 0 |T \{ A_{i}(x')A_{j}(x) \}|0 \rangle|^2$ is questionable to me. Where has my interpretation gone astray?



Answer



The photon propagator $D_{\mu\nu}(x,y) = \langle 0 | A_\mu(x) A_\nu(y)|0\rangle$ is a building block for amplitudes, but it isn't necessarily an amplitude itself. The source for an electromagnetic field has to be a conserved current, which basically means that you create states from the vacuum using linear combinations of $A_\mu(x)$ operators whose coefficients are conserved currents. $$ |J\rangle = \int J^\mu(x) A_\mu(x) dx |0\rangle $$ where $\partial_\mu J^\mu = 0$.



You can show by direct computation that the amplitude $\langle J_1 | J_2 \rangle = \int\int J_1^\mu(x) D_{\mu\nu}(x,y)J_2^\nu(y)dxdy$ is gauge invariant if the currents $J_1$ and $J_2$ are conserved.
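Here is a minimal momentum-space sketch of that check, assuming the usual covariant-gauge form of the photon propagator, $-i\left[g_{\mu\nu}-(1-\xi)k_\mu k_\nu/k^2\right]/k^2$; the momentum and currents are random test data, and current conservation becomes $k_\mu J^\mu = 0$ in momentum space:

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric, signature (+,-,-,-)

def propagator(k, xi):
    """Covariant-gauge numerator g^{mu nu} - (1 - xi) k^mu k^nu / k^2;
    the common overall -i/k^2 factor is dropped since it is xi-independent."""
    k2 = k @ eta @ k
    return eta - (1 - xi) * np.outer(k, k) / k2

rng = np.random.default_rng(0)
k = rng.normal(size=4)                        # some off-shell momentum transfer

def conserved_current(k):
    """Random 4-vector J with k_mu J^mu = 0 (momentum-space current conservation)."""
    J = rng.normal(size=4)
    return J - (k @ eta @ J) / (k @ eta @ k) * k   # project out the part along k

J1, J2 = conserved_current(k), conserved_current(k)

for xi in (0.0, 1.0, 3.0, 100.0):             # Landau, Feynman, arbitrary gauges
    amp = (eta @ J1) @ propagator(k, xi) @ (eta @ J2)   # J1_mu D^{mu nu} J2_nu
    print(f"xi = {xi:6.1f} : J1.D.J2 = {amp:+.10f}")
# The same number (up to rounding) for every xi: the gauge-dependent
# k_mu k_nu piece drops out because k.J1 = k.J2 = 0.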


Understanding Stagnation point in pitot fluid

What is a stagnation point in fluid mechanics? At the open end of the pitot tube the velocity of the fluid becomes zero. But that should result...