Tuesday 30 April 2019

The 1st law of thermodynamics


If a gas expands adiabatically will the work done be positive or negative?


I think it will be positive, since $\delta W=p\, dV$ and the work done by the gas is positive as it pushes outward to expand. But this site has the negative of the answer I obtain.



http://physics.bu.edu/~duffy/py105/Firstlaw.html (See adiabatic processes section)



Answer



Generations of physics students, including me, have got mixed up about the sign of work done. That's because the phrase work done can mean work done on the gas or work done by the gas, and these are equal but with opposite signs. I don't think there is any perfect way to deal with this except by using your common sense. If an expanding gas does work then that work must come from the internal energy of the gas, and therefore the internal energy of the gas must decrease.


Having said all this, I think you've misinterpreted the article you quote. In the Adiabatic processes section it starts from the equation:


$$ W = -\Delta U $$


But in the expansion $\Delta U$ is a negative quantity because the internal energy decreases. That means $-\Delta U$ is a positive number and therefore that the work is positive.
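As a numerical sanity check, here is a minimal sketch computing the work done by an ideal gas in a reversible adiabatic expansion; the gas parameters (diatomic $\gamma = 1.4$, the initial state, the final volume) are illustrative assumptions, not taken from the article:

```python
# Work done BY an ideal gas in a reversible adiabatic expansion,
# W = (p1*V1 - p2*V2)/(gamma - 1).  All numbers are illustrative.
gamma = 1.4               # diatomic ideal gas (assumption)
p1, V1 = 1.0e5, 1.0e-3    # initial state: 100 kPa, 1 litre
V2 = 2.0e-3               # expanded to 2 litres
p2 = p1 * (V1 / V2) ** gamma   # along the adiabat p V^gamma = const

W_by_gas = (p1 * V1 - p2 * V2) / (gamma - 1)
delta_U = -W_by_gas       # first law with Q = 0

print(W_by_gas > 0)   # expansion: the gas does positive work
print(delta_U < 0)    # so its internal energy decreases
```

Running this gives a positive $W$ and a negative $\Delta U$, matching the sign argument above.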


material science - Glass Hardness and Pressure


How is glass hardness defined? I understand that ordinary kitchen knives cannot scratch most toughened glass, such as the glass found on cell phones. However, with enough pressure one can actually scratch the glass (I once tried).


So when testing the hardness of glass, is pressure standardized?



Answer



Hardness is the resistance to plastic deformation. What you're observing is brittle fracture.


For example lead is softer than glass, but if I hit a piece of glass with a lead hammer the glass will break. This is because even the relatively soft lead is able to increase the stress on the glass to the point where brittle fracture occurs.


When you press the kitchen knife onto the glass the question is whether the steel will deform before the glass does, which indeed it will. Even so it's possible to raise the local stress under the knife to the point where the glass fails by brittle fracture. If you looked at the resulting scratch with an SEM you'd see it was a trail of microfractures not a plastic flow.


particle physics - Feynman diagram for $\overline{K}\,\!^0$ antimeson production on the quark level


I've recently stumbled upon a physics problem concerning $\overline{K}\,\!^0$ antimeson production. In this particular example, colliding a $\pi^-$ meson with a stationary proton yields a $K^0$ meson and a $\Lambda^0$ hyperon:


$$\pi^-\,[\overline{u}d] + p\,[uud]\rightarrow K^0\,[d\overline{s}] + \Lambda^0\,[uds]$$


This can be expressed in a Feynman diagram by letting the $u$ and $\overline{u}$ quarks annihilate to a gluon, out of which a pair of $s$-$\overline{s}$-quarks is generated.


However, if a $\overline{K}\,\!^0$ particle were generated by the same method, then in order to conserve baryon number and strangeness, more than one particle must be produced. For example, the following reaction could take place, so that every quantum number is conserved:


$$\pi^-\,[\overline{u}d] + p\,[uud]\rightarrow \overline{K}\,\!^0\,[s\overline{d}] + K^0\,[d\overline{s}] + n\,[udd]$$


However, I can't seem to find a corresponding Feynman diagram for the reaction. I am guessing that the $\Lambda^0$ hyperon decays weakly and somehow yields the antikaon and the neutron, but I can't figure out how... Does anyone have a clue what the Feynman diagram could be?



Answer



There are no very simple diagrams. You need at least one pair production and some kind of flavor changing reaction.


One such diagram (image not reproduced here) includes one pair production and a Drell-Yan flavor change.


There will be others, but they will presumably all be equally complicated and therefore unlikely. This will be a low-rate event in such systems even when the energy is available.


quantum mechanics - Proof that if expectation of an operator is zero for all vectors, then the operator itself must be zero


I was attending a Quantum Mechanics lecture when the instructor casually mentioned the following theorem:



$\langle \alpha \rvert A \rvert \alpha \rangle = 0 ~\forall \alpha \implies A=0$, where $A$ is an operator and $\rvert\alpha\rangle$ is an arbitrary ket in the complex Hilbert space.



I have always assumed that the above theorem was 'obvious', but on second thought, it doesn't seem to be easy or trivial to prove. I tried looking at various sources for the theorem, but it seems to be surprisingly difficult to find this theorem or proof anywhere.


I would be very glad if someone would point me towards the proof of the theorem, and provide a small outline of it if possible.



Answer




Pick any orthonormal basis $\lvert \psi_i\rangle$ of our Hilbert space. Then $\langle \psi_i\vert A \vert \psi_i \rangle = 0$ for all $i$ by assumption, and for $\lvert \phi_{ij}(a,b)\rangle := a\lvert \psi_i\rangle + b\lvert \psi_j\rangle$ the diagonal terms vanish and we find $$ \langle \phi_{ij} \vert A \vert \phi_{ij}\rangle = a^\ast b \langle \psi_i \vert A \vert \psi_j\rangle + ab^\ast \langle \psi_j \vert A \vert \psi_i\rangle = 0.$$ Choosing $a = b = 1$ yields $$ \langle \psi_i \vert A \vert \psi_j \rangle = - \langle \psi_j \vert A \vert \psi_i\rangle, $$ while choosing $a = 1,\ b = \mathrm{i}$ yields $$ \langle \psi_i \vert A \vert \psi_j \rangle = \langle \psi_j \vert A \vert \psi_i\rangle. $$ Together these force $\langle \psi_i \vert A \vert \psi_j \rangle = 0$ for all $i,j$, i.e. every matrix element of $A$ in this basis vanishes, and therefore $A = 0$.


Note that the argument relies on the space being a complex vector space, and that the assertion would be false over a real vector space - $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is a counterexample on $\mathbb{R}^2$ (but not on $\mathbb{C}^2$, since its expectation values do not vanish for all vectors there).
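The $\mathbb{R}^2$ versus $\mathbb{C}^2$ statement is easy to verify numerically; a quick sketch using NumPy (the test vectors are arbitrary choices):

```python
import numpy as np

# The counterexample A = [[0,-1],[1,0]]: x^T A x = 0 for every real
# vector (A is antisymmetric), but <z|A|z> need not vanish on C^2.
A = np.array([[0.0, -1.0], [1.0, 0.0]])

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=2)           # random real vector
    assert abs(x @ A @ x) < 1e-12    # quadratic form vanishes on R^2

z = np.array([1.0, 1j])              # a complex vector
print(np.vdot(z, A @ z))             # nonzero expectation value
```

Here `np.vdot` conjugates its first argument, so it computes the Hermitian inner product $\langle z \vert A z\rangle$.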


quantum field theory - Is microcausality *necessary* for no-signaling?


There are proofs in the literature that QFT including microcausality is sufficient for it not to be possible to send signals by making quantum mechanical measurements associated with regions of space-time that are space-like separated, but is there a proof that microcausality is necessary for no-signaling?


I take microcausality to be trivial commutativity of measurement operators associated with space-like separated regions of space-time. In terms of operator-valued distributions $\hat\phi(x)$ and $\hat\phi(y)$ at space-like separated points $x$ and $y$, $\left[\hat\phi(x),\hat\phi(y)\right]=0$.


An elementary observation is that classical ideal measurements satisfy microcausality just insofar as all ideal measurements are mutually commutative, irrespective of space-like, light-like, or time-like separation. For a classical theory, ideal measurements have no effect either on the measured system or on other ideal measurements, so ideal measurements cannot be used to send messages to other ideal experimenters. Whether a classical theory admits space-like signaling within the measured system, in contrast to signaling by the use of ideal measurements applied by observers who are essentially outside the measured system, is determined by the dynamics. Applying this observation to QFT, I note that microcausality is not sufficient on its own —without any dynamics, which I take to be provided in QFT by the spectrum condition— to prove no-signaling at space-like separation.


As an auxiliary question, what proofs of sufficiency do people most often cite and/or think most well-stated and/or robust?



Answer



I will try to give a series of counterexamples, of decreasing triviality:


No signalling plus small external agents implies microcausality


As you point out in the question, when you have arbitrarily tiny external agents capable of measuring any bosonic field in an arbitrarily tiny region, then microcausality is obviously necessary for no-signalling: if you have two noncommuting operators A and B associated with two tiny spacelike separated regions, and an external agent wants to transmit information from A's region to B's, the agent can either measure A repeatedly or not, while another agent measures B a few times to see whether A is being measured. The B measurements will then have some probability of giving different answers, which informs the B agent about the A measurement.



This is the motivation for microcausality, and for the purposes of physics, the existence of semiclassical black holes means that you have classical point probes at any distance significantly larger than the Planck length, so microcausality is necessary at least on these scales.


This point is addressed in your question. From now on, I will ask the intrinsic question--- can observers in the theory signal using devices built up out of the fields in the theory, not using external probes?--- so that the question is nontrivial.


Two space no-gravity QFT


Consider a quantum field theory with a bad localization. The theory is defined using a spacelike shift vector $\Delta$, and the Lagrangian gets a displaced interaction


$$ S= \int d^4x \,\bigl[ L_1(\phi) + L_2(\chi) + \phi(x)\chi(x+\Delta)\bigr]$$


where the $L$'s are some translationally invariant local Lagrangians for $\phi$ and $\chi$, and the interaction mixes $\phi$ and $\chi$ at displaced points. This is clearly ridiculous-- the field $\chi$ has been misplaced; the correct local field associated with a given point $x$ is $\chi(x+\Delta)$, not $\chi(x)$. But the point is that you can define an algebra of observables using this completely wrong localization, and then microcausality obviously fails, because $\chi$ and $\phi$ are at the wrong point. Yet because there is a change of variables which makes microcausality work, there is no signalling for objects in the theory, intrinsically (although for an external agent capable of making local measurements of $\phi(x)+\chi(x)$, no-signalling would fail). So the question would be better stated: "Does there have to be some collection of field variables which obey microcausality for no-signalling to work?"


Curved Extra dimensions


Suppose you have a warped extra dimension, so that in 5+1 dimensions you have fields which are local, but the background is not a product. Then you can consider the theory as a 4-dimensional quantum field theory, and in this framework try to identify mutually local four-dimensional fields. This doesn't work in a way consistent with Lorentz invariance, because time ticks at a different rate at different positions in the extra dimension, so fields chosen to obey 4-dimensional Lorentz invariance will fail to be mutually local.


But if the shortest distance between two points on your brane-world is a straight line on the brane-world, then no-signalling still holds. Gubser examines this situation in a recent preprint (http://arxiv.org/PS_cache/arxiv/pdf/1109/1109.5687v2.pdf), with an eye to reproducing the claimed OPERA neutrino violations of no-signalling, and he says that the effective 4d theory only violates no-signalling when the 5d theory violates the weak energy condition. But generically the 5d theory will violate any attempted identification of a 4d microcausality.


Emergent dimensions



I think that the best examples of where microcausality can fail, and still there is no intrinsic signalling, are within string theory. This is not quantum field theory, so it might not be included, but it is the starkest example of a nonlocal theory where no-signalling (presumably) works, but there is no microcausality, because there are no local fields in the bulk.


In AdS/CFT the bulk theory is defined by a holographic projection of the boundary fields, and if you have N=4 gauge theory on the boundary, you only have boundary microcausality. You can define effective local fields in the bulk, which create a string excitation, but these fields will not commute at the string scale, since the strings are extended; they are not fundamental objects anyway, and their localization is only at the level of a center of mass.


So in my opinion, the best answer is no, although the answer might as well be yes outside of the quantum gravity regime, because at larger scales, tiny black holes can be used as point probes to make local measurements of fields.


Monday 29 April 2019

How does the force of tension really work?


I am currently studying high school physics (I'm in the first year of high school).


The force of tension initially seemed to be a simple concept, but unfortunately has proved rather challenging to fully understand, impairing my ability to understand problems such as the one I will discuss here.


My question here revolves more around the "why" than the "how"; that is, I could probably solve tension problems on a test, but that doesn't mean I'd understand why things worked the way they did.


I hope that someone may be able to provide me with an explanation of the following problem that goes back to the basics of tension.


Here's a diagram of the problem. It's based on an experiment we did in class:



[diagram: block A on a horizontal frictionless track, connected by a rope over a pulley to hanging block B, with free-body force arrows for each mass]


Note that the surface is assumed to be frictionless. Also note that I have assigned a positive and negative direction using that arrow and the plus sign. There are two masses, A and B (A on the track, B hanging). There is a rope tying them together that passes over a pulley. I have added in the free-body force diagrams for each mass that I was told were correct.


When we did the experiment in class, the blocks were only motionless when someone was holding block A back. But when it was let go (and that is the situation represented by the diagram I displayed) the whole system accelerated in the positive direction (I have assigned that positive and negative direction to make discussing the problem easier).


The question is, "What is the net force in the entire system (both blocks)?" In other words, what is causing the movement?


(I have also been told that the blocks will have the same magnitude of acceleration. Why is this?)


Anyway, I've been told that the net force is the force of gravity on mass B (the hanging one). But I don't really understand why, even after extensive discussion with various people.


Looking back at those force diagrams: I understand that the force of normal and weight of mass A cancel out, so the only remaining force is tension. And I understand that mass B has tension and weight acting on it.


But here's where it gets tricky:


The tension on A is apparently pulling it in the positive direction, while the tension on B is pulling it in the negative direction. Do those opposite tensions cancel one another, causing force of gravity to become the net force? There must be some sort of cancellation in play, I figure, because I've been told that when all the forces are summed, you just end up with force of gravity propelling the system.


Some questions I've been asking myself about this have been:





  • How does the relationship between the force of gravity on mass B and the tension in the rope play into this? Isn't the tension caused by that force of gravity? Doesn't that mean that if tensions cancel, the force of gravity's effect is canceled as well?




  • Does the pulley affect tensions? For example, we know that there's a positive tension affecting mass A. Is there still a positive tension in existence on the other side of the pulley, or just the negative tension that's acting on B? Might there be some sort of effect whereby two sets of opposite tensions, one set on each side of the pulley, cancel each other out?




  • If you pick any and all points on the rope, would there be two opposing tensions at every one of those points?





  • Is tension uniform throughout the rope?




  • How might differences in mass between object A and object B (which, sorry if the diagram was misleading in the sizes, can have any mass) play into the tension?




Etc.


As you can see, my basic tension understanding is really rather weak. I've been told that ropes can only pull, not push, etc., and simple things like that have guided me this far, but I've run into some roadblocks in my understanding.


I know this is a very long question but any help would be greatly appreciated. Thank you very much.





quantum field theory - Significance of total divergence anomaly term


What is the significance of the fact that the anomaly term (calculated from the triangle diagram) is a total divergence? Or, in other words, what is the significance of $$\partial_\mu j^\mu_A\sim \mathrm{Tr}(W\tilde{W}) =\text{a total divergence}$$ for global anomalies? I think this fact is related to why baryon-number violation in the Standard Model cannot be a perturbative process. Perhaps someone can illuminate this.




Sunday 28 April 2019

quantum field theory - Sign in front of QFT kinetic terms


I'd like to know if the sign in front of a kinetic term in QFT is important. For the scalar field we conventionally write (in the $ + --- $ metric), \begin{equation} {\cal L} _{ kin} = \frac{1}{2} \partial _\mu \phi \partial ^\mu \phi \end{equation} Based on the answer given here, this makes perfect sense, since we want to have positive kinetic energy $\propto \dot{\phi}^2$. So would the Hamiltonian with a negative sign in front of the kinetic term be unbounded?


Does this logic extend to the Dirac Lagrangian typically given by, \begin{equation} \bar{\psi} i \partial _\mu \gamma ^\mu \psi \quad ? \end{equation} i.e., would having a negative in front of the Dirac Lagrangian make the Hamiltonian unbounded?



Answer




Yes. Though the energy will not be unbounded; it will be bounded from above, if my calculation is correct.


For a real scalar field under the $(+---)$ metric, besides the negative classical kinetic energy of the Lagrangian $$\mathcal{L}=-\frac{1}{2} \partial^{\mu} \phi \partial_{\mu} \phi - \frac{1}{2} m^2 \phi^2, \tag{1} $$ the classical equation of motion will be $$ (\square - m^2 )\phi=0 . \tag{2}$$ For a plane wave $\phi \sim e^{ipx} $, this gives $p^2+m^2 = (p^0)^2 - \mathbf{p}^2+m^2=0$, which is inconsistent with the relativistic energy-momentum relation. I am not sure it is even necessary to quantize it.
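A tiny numerical illustration of this pathology (the mass and momenta are arbitrary assumptions): solving $(p^0)^2 = \mathbf{p}^2 - m^2$ gives an imaginary frequency for $|\mathbf{p}| < m$, i.e. exponentially growing modes.

```python
import numpy as np

# Dispersion relation from the wrong-sign Lagrangian:
# (p0)^2 = |p|^2 - m^2, instead of the usual |p|^2 + m^2.
m = 1.0
p_soft, p_hard = 0.5, 2.0    # illustrative momenta (assumptions)

p0_soft = np.emath.sqrt(p_soft**2 - m**2)   # complex-aware square root
p0_hard = np.emath.sqrt(p_hard**2 - m**2)

print(p0_soft)   # purely imaginary: the mode grows/decays exponentially
print(p0_hard)   # real, but on the wrong mass shell
```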


Though the energy-momentum-relation argument will not work for the Dirac field, we can quantize it to see the energy will be negative definite. $$\mathcal{L} = \bar{\psi}( -i \gamma^{\mu} \partial_{\mu} - m ) \psi \tag{3}$$


The classical equation of motion is $$ (i \gamma^{\mu} \partial_{\mu} +m) \psi=0 \tag{4}$$


To preserve all properties of $u(\mathbf{p})$ and $v(\mathbf{p})$, we define the plane-wave solutions by $$ \psi =: u(\mathbf{p}) e^{ipx}, \qquad \psi =: v(\mathbf{p}) e^{-ipx}. $$ Thus we can replace $u(\mathbf{p})$ by $v(\mathbf{p})$ and $v(\mathbf{p})$ by $u(\mathbf{p})$ in the expansions of $\psi$ and $\bar{\psi}$. With $$\pi = -i \bar{\psi} \gamma^0, $$ the Hamiltonian becomes $$H= \int d^3 x\, \bar{\psi} ( i \gamma^i \partial_i + m ) \psi. $$


Plug in expansions of spinors in the Schrodinger picture $$ \psi = \int \frac{ d^3 p }{ (2\pi)^3} \frac{1}{ \sqrt{2 E_{\mathbf{p}}}} \sum_s \left( a_{\mathbf{p}}^s v^s (\mathbf{p}) e^{-i\mathbf{p} \cdot \mathbf{x} } + b_{\mathbf{p}}^{s\dagger} u^s(\mathbf{p}) e^{i \mathbf{p} \cdot \mathbf{x} } \right) $$ $$ \bar{\psi} = \int \frac{ d^3 p }{ (2\pi)^3} \frac{1}{ \sqrt{2 E_{\mathbf{p}}}} \sum_s \left( b_{\mathbf{p}}^s \bar{u}^s (\mathbf{p}) e^{-i\mathbf{p} \cdot \mathbf{x}} + a_{\mathbf{p}}^{s\dagger} \bar{v}^s(\mathbf{p}) e^{i\mathbf{p} \cdot \mathbf{x}} \right) $$ we have


$$ H = \sum_{ss'} \int \frac{d^3p}{ (2\pi)^3 2E_{\mathbf{p}} } b_{\mathbf{p}}^{s'} b_{\mathbf{p}}^{s\dagger} \bar{u}^{s'}(\mathbf{p}) ( - \gamma^i p_i +m) u^s(\mathbf{p}) + a_{\mathbf{p}}^{s'\dagger} a_{\mathbf{p}}^{s} \bar{v}^{s'}(\mathbf{p}) ( \gamma^i p_i +m) v^s(\mathbf{p}) $$ $$ = \sum_{ss'} \int \frac{d^3p}{ (2\pi)^3 2E_{\mathbf{p}} } b_{\mathbf{p}}^{s'} b_{\mathbf{p}}^{s\dagger} \bar{u}^{s'}(\mathbf{p}) ( \gamma^0 p_0 ) u^s(\mathbf{p}) + a_{\mathbf{p}}^{s'\dagger} a_{\mathbf{p}}^{s} \bar{v}^{s'}(\mathbf{p}) ( - \gamma^0 p_0 ) v^s(\mathbf{p}) $$ $$ = \sum_s \int \frac{ d^3p}{ (2\pi)^3} E_{\mathbf{p}} ( b_{\mathbf{p}}^{s} b_{\mathbf{p}}^{s\dagger} - a_{\mathbf{p}}^{s\dagger} a_{\mathbf{p}}^{s} ) $$ $$ = \sum_s \int \frac{ d^3p}{ (2\pi)^3} - E_{\mathbf{p}} (b_{\mathbf{p}}^{s\dagger} b_{\mathbf{p}}^{s} + a_{\mathbf{p}}^{s\dagger} a_{\mathbf{p}}^{s} ) - \infty $$


Changing anticommutator into commutator will make the spectrum unbounded.


classical mechanics - Why closed in the definition of a symplectic structure?


Why do we want the 2-form $\omega $ to be closed? What if it is not?



Answer



First some terminology:





  1. A non-degenerate 2-form $\omega$ is called an almost symplectic structure.




  2. A closed 2-form $\omega$ is often called a presymplectic structure.




  3. If the 2-form $\omega$ is both non-degenerate and closed, it becomes a symplectic structure.




In the non-degenerate case, the closedness condition $$\mathrm{d}\omega~=~0\tag{C}$$ is equivalent to the Jacobi identity (JI) for the corresponding Poisson bracket (PB). In other words, conversely, a violation of the closedness condition (C) would mean a violation of the JI.



Moreover in the non-degenerate case, the closedness condition (C) (or equivalently, the JI) is the integrability condition that ensures the local existence of Darboux coordinates (aka. canonical coordinates), cf. Darboux' theorem. Conversely, the existence of Darboux coordinates in a local neighborhood $U$ implies the closedness condition (C) in that neighborhood.
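For concreteness, in local Darboux coordinates the closedness is manifest (a standard computation, using only $\mathrm{d}^2 = 0$):

```latex
% In Darboux (canonical) coordinates (q^1,\dots,q^n,p_1,\dots,p_n):
\omega \;=\; \sum_{i=1}^{n} \mathrm{d}p_i \wedge \mathrm{d}q^i,
\qquad
\mathrm{d}\omega
\;=\; \sum_{i=1}^{n}\Bigl(\mathrm{d}^2 p_i \wedge \mathrm{d}q^i
      \;-\; \mathrm{d}p_i \wedge \mathrm{d}^2 q^i\Bigr)
\;=\; 0.
```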


For further information, see also e.g. Wikipedia$^1$; this, this, and this related SE posts; and links therein.


--


$^1$ Wikipedia (August, 2015) has a concise section about motivations arising from Hamiltonian mechanics, cf. above comment by ACuriousMind. Wikipedia argues that $$\mathrm{d}H(V_H)~\equiv~\omega(V_H,V_H)~=~0\qquad\text{and}\qquad {\cal L}_{V_H}\omega~\equiv~i_{V_H}\mathrm{d}\omega~=~0 .$$ To complete Wikipedia's argument and deduce (pointwise) that $\omega$ is (i) alternating and (ii) closed, note that the Hamiltonian vector field $V_H$ needs to probe all directions in the tangent space of the point. This can be achieved by choosing the Hamiltonian generator $H$ in $2n$ different ways (because $\omega$ is non-degenerate).


electromagnetism - Does special relativity explain the working of an electromagnet?


I heard that special relativity could be used to explain the working of an electromagnet, but I couldn't dig anything up on it. Can somebody give some explanation of the above?


I also heard that it is based on the principle that an electric field in one frame of reference is a magnetic field in another.


How is it? (I am still in high school, so I don't know much advanced maths and physics.)



Answer



The laws of the EM (electromagnetic) fields contradict classical Newtonian mechanics. For example, switching reference frames would change the speed of EM waves, while Maxwell's equations (and the experimental evidence) imply that the speed of EM waves is the constant $c$.


This was one of the reasons of the development of the Special Relativity (SR).


In CM, there are different laws to describe the interaction of moving things with electric/magnetic forces (e.g. Faraday's law of induction). In the SR framework, the Lorentz transformation transforms electric fields into magnetic fields and vice versa.



Electrical engineers study the theory of electromagnetic fields in the non-relativistic approximation appropriate to practical scenarios (the EM field around a high-voltage wire, or in an electric motor, transformer, or electromagnet). But they can do this efficiently only after they have learned its SR background.


In practical electromagnet design, engineers use the classical EM laws and classical mechanics - and a great deal of highly complex technical experience collected over centuries.


Thus, SR explains better how an electromagnet works, but the CM version is used in daily design practice.


curvature - Simple check for the global shape of the Earth


I was on a date recently, and everything went fine until the moment the girl told me that the Earth is flat. After realizing she was not trolling me, and after trying to offer a couple of reasons why that may not be the case, I faced arguments like "well, you have not been to space yourself".


That made me think of the following: I myself am certain that the Earth is ball-shaped, and I trust the school physics, but being a kind of a scientist, I could not help but agree with her that some of the arguments that I had in mind were taken by me for granted. Hence, I have asked myself - how can I prove to myself that the earth is indeed ball-shaped, as opposed to being a flat circle (around which the moon and the sun rotate in a convenient for this girl manner).


Question: Ideally I want a proof that would not require travelling more than a couple of kilometres, but I am fine with using any convenient day (if e.g. we need to wait for some eclipse or a Moon phase). For example, "jump on a plane and fly around the Earth" would not work for me, whereas "look at the Moon when it is in phase X, and check the shape of the shadow" would.


Trick is, I know that it is rather easy to verify the local curvature of the Earth by moving away from a tall object in a field, or by sitting on the beach and watching some big ship go over the horizon. However, to me that does not immediately prove that globally the Earth has the same or similar curvature. For example, maybe it's just the shape of a hemisphere. So, I want to prove to myself that the Earth is ball-shaped globally, and I don't want to move much to do this. Help me, or tell me that this is not possible and why, please. As an example, most of the answers in this popular thread only focus on showing the local curvature.


P.S. I think, asking how to use physics to derive global characteristics of an object from observing things only locally (with the help of the Sun and the Moon, of course) is a valid question, but if something can be improved in it, feel free to tell me. Thanks.


Update: I did not expect such great and strong feedback when asking this question, even though it is indeed different from the linked ones. They are still very similar, which was not grasped by all those who replied. I will thoroughly go over all the answers to make sure which one fits best, but in the meantime, if you would like to contribute, let me clarify a couple of things regarding this question: they were in the OP, but perhaps can be made more obvious.




  1. I do not have a goal of proving something to this date. I see that mentioning her might have been confusing. Yet, before this meeting I was certain about the shape of the Earth - but her words (even though I think she's incorrect in her beliefs) made me realize that my certainty was based on assumptions I had not really questioned. So, sitting on a beach with another friend of mine (both of us being ball-believers), we thought of a simple check to confirm our certainty, rather than to convince anyone else that we are right.





  2. I am only looking for the check that would confirm the GLOBAL shape of the earth being ball-like. There were several brilliant answers to another question that worked as a local curvature proof, and I am not interested in them.




  3. I am looking for the answer that will show that the Earth is ball-shaped (or rather an ellipsoid), not merely that it is not flat. There are many other great shapes that are neither ball/ellipsoid nor flat. I do still assume that the shape is convex; otherwise things can go too wild and e.g. projections on the Moon would not help us.




I think point 1 shows why this is a valid physics/astronomy question, rather than playing devil's advocate defending the flat-Earth hypothesis. I would also happily accept an answer like "you cannot show this without moving 20k kilometres, because A, B, C" if there's indeed no simple proof. At the same time, points 2 and 3 should distinguish this question from the linked ones.



Answer




Look at the Moon during a lunar eclipse. Or during any other Moon phase for that matter, where part of the Moon is shaded.


As ironically exemplified by Neil deGrasse Tyson in a tweet:



Neil deGrasse Tyson @neiltyson
November 26, 2017


A Lunar Eclipse flat-Earther's have never seen.


[image: the Moon during a lunar eclipse with a thin, non-circular shadow - the kind a flat Earth could cast]



We always see only a circular shadow, as if the Earth really is a sphere. Sure, it is possible even with a flat Earth that the arrangement of Sun-Earth-Moon just happens to give a circular shadow by coincidence. But then we would see a circular shadow only sometimes - at other times we should see something like the image above.


Just wait for the next lunar eclipse or Moon phase. Or for the next 100 of them. Surely, at some point you must see the shadow from another angle, giving an elliptical shadow, or a thin shape as in the picture.



These observations can be done with our own naked eyes. And we have never, ever observed any sign of a flat-Earth shadow.


fluid dynamics - Pressure vs wind speed, on a rectangular surface


How do I go about finding the pressure exerted on a rectangular surface in a free flowing air stream?



I wouldn't imagine that this is directly related to the airspeed / surface area, but have no idea where to start. Is there even an equation, or does one need to do some kind of FEA?


For instance a 1.2m x 2.4m metal sheet suspended some distance above ground level, if I have a gust of wind at 8m/s (directly perpendicular to the sheet), what is the average pressure across the face of sheet?



Answer



Wind Load Formula:


$F_d = \frac{1}{2} \rho v^2 A C_d$


where
$F_d$ is the force of drag (or in this case Force Against the flat plate)
$\rho$ is the density of the air
$v$ is the speed of the air against the object
$A$ is the area of the object which the air is blowing against

$C_d$ is the drag coefficient
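Applied to the numbers in the question, a minimal sketch; the drag coefficient $C_d \approx 1.2$ (a typical value for a flat rectangular plate normal to the flow) and the sea-level air density are assumptions, not given in the question:

```python
# Wind load on the question's sheet: 1.2 m x 2.4 m, wind at 8 m/s
# perpendicular to the face.  Cd and rho are assumed typical values.
rho = 1.225        # air density, kg/m^3 (sea level, ~15 C)
v = 8.0            # wind speed, m/s
A = 1.2 * 2.4      # plate area, m^2
Cd = 1.2           # flat-plate drag coefficient (assumption)

F = 0.5 * rho * v**2 * A * Cd   # total force on the plate, N
pressure = F / A                # average pressure over the face, Pa

print(round(F, 1), "N")
print(round(pressure, 1), "Pa")
```

Note that the average pressure $F/A = \tfrac{1}{2}\rho v^2 C_d$ is independent of the plate area; only the total force scales with $A$.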


fluid dynamics - Stokes law in 2-dimensions


Stokes' law states that the force on a slowly moving sphere (i.e. $Re\ll1$) in a liquid is $$ F_d = 6 \pi \mu R V $$



In two dimensions we are in trouble (flow around a disk in 2D, or around a cylinder in 3D), because there is no solution to the Stokes problem (known as the Stokes paradox), but from dimensional analysis we can still conclude that


$$ F_d = C \mu V $$


I did some numerical tests of the Navier-Stokes equations for small Reynolds numbers and found that $F_d$ really does not depend on $R$, and that $C\approx 4\pi$.


I find it quite counter-intuitive that the force in 2D does not depend on the disk radius. Have I done something wrong? Or it really does not depend on radius of the disk?


The only thing which depends on the disk radius is the admissible range of input velocities: if you increase $R$, then you have to lower the maximum $V$ to ensure the condition $Re \ll 1$.
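This last point can be made concrete with a quick sketch; the water-like fluid properties, the threshold for "$Re \ll 1$", and the convention $Re = \rho V R/\mu$ are all assumptions (conventions differ by factors of 2):

```python
# Maximum velocity keeping Re << 1 for a disk of radius R.
# Fluid properties are illustrative (water-like), not from the question.
rho, mu = 1000.0, 1.0e-3   # density (kg/m^3), dynamic viscosity (Pa s)
Re_max = 0.01              # what we accept as "Re << 1" (assumption)

for R in [1e-6, 1e-5, 1e-4]:          # disk radii in metres
    v_max = Re_max * mu / (rho * R)   # from Re = rho*V*R/mu
    print(R, v_max)
# Doubling R halves the admissible velocity, as stated above.
```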




homework and exercises - Moment of Inertia of a Ring about an axis inclined at $\frac{\pi}{4}$ radians with normal to plane of ring



I have a thin Ring of mass $M$ and radius $R$, I have to find it's moment of Inertia about an axis passing through it's centre and at an angle of $\frac{\pi}{4}$ radians with the normal to the plane of the ring.




I am trying to use the perpendicular axis theorem. Suppose I place three mutually perpendicular axes at the centre, so that one is a diameter, another is perpendicular to it (also a diameter) in the plane of the ring, while the third is parallel to the normal. Now I rotate the axes so that one of them remains a diameter while the other two are each inclined at $\frac{\pi}{4}$ to the normal.


Now, as I know the moment of Inertia about a diameter ($\frac{MR^2}{2}$) So the required moment of Inertia (say $I$) must be :


$ I+I = \frac{MR^2}{2}$ from the perpendicular axes theorem, so $I = \frac{MR^2}{4}$.


Is my way of thinking correct in this case?



Answer



I don't think it works that way unfortunately.


Let me propose a more general approach: let $\hat{x}$, $\hat{y}$, $\hat{z}$ be the normal axes to the ring (two diameters and normal respectively), now let $\hat{x}'$, $\hat{y}'$, $\hat{z}'$ be the rotated axes. We want to compute $I_{z'}$. Now write down the change of co-ordinates matrix, which is simply a rotation about the $\hat{y}$ axis:


$$ \Lambda = \left[ \begin{matrix} \cos\theta && 0 && -\sin\theta \\ 0 && 1 && 0 \\ \sin\theta && 0 && \cos\theta \end{matrix} \right] $$


This matrix is such that $\vec{x}' = \Lambda \vec{x}$


Now we can easily build the inertia tensor in the normal co-ordinates because it is diagonal:



$$ I = \left[ \begin{matrix} \frac{1}{2}MR^2 && 0 && 0 \\ 0 && \frac{1}{2}MR^2 && 0 \\ 0 && 0 && MR^2 \end{matrix} \right] $$


Now since I is a tensor, it transforms as a tensor, so in the new co-ordinates (the "prime" ones) it is given by $I'_{ij} = \Lambda_{ik}\Lambda_{jl} I_{kl} \rightarrow I' = \Lambda I \Lambda^T$. A simple calculation shows:


$$ I' = \left[ \begin{matrix} MR^2(\frac{\cos^2{\theta}}{2}+\sin^2{\theta}) && 0 && -\frac{MR^2}{2}\cos{\theta}\sin{\theta} \\ 0 && \frac{MR^2}{2} && 0 \\ -\frac{MR^2}{2}\cos{\theta}\sin{\theta} && 0 && MR^2(\frac{\sin^2{\theta}}{2}+\cos^2{\theta}) \end{matrix} \right] $$


So as you can see, when $\theta=\pi/4$ you have $I_{x'x'} = I_{z'z'} = \frac{3}{4}MR^2$.


Simple rule The trace of the inertial tensor is invariant under change of co-ordinates. In normal co-ordinates it is $2MR^2$ (just the sum of the three diagonal components). In our rotated co-ordinates, since when $\theta = \pi/4$ symmetries suggest $I_{x'x'}=I_{z'z'}$ and $I_{y'y'}=I_{yy}=MR^2/2$ (the $y$ axis is the rotation axis and so it doesn't change), you can impose trace invariance and get the result.
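The tensor transformation above is easy to check numerically. A sketch using the matrices as written, working in units of $MR^2$:

```python
import numpy as np

theta = np.pi / 4
c, s = np.cos(theta), np.sin(theta)

# Rotation about the y axis, as in the answer (units of M*R^2 throughout).
L = np.array([[c, 0, -s],
              [0, 1, 0],
              [s, 0, c]])

# Inertia tensor of the ring in its principal axes:
# two diameters (MR^2/2 each) and the normal (MR^2).
I = np.diag([0.5, 0.5, 1.0])

I_prime = L @ I @ L.T

print(np.round(I_prime, 4))
# Diagonal at theta = pi/4: I_x'x' = I_z'z' = 3/4 and I_y'y' = 1/2,
# and the trace 2 is preserved by the rotation.
```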


fourier transform - Interacting Fields in QFT


I am trying to work through Peskin and Schroeder and am a little stuck in Chapter 4 [section 4.2, p. 83, below eq. (4.13)], where they first treat interacting fields. The subject is the quartic interaction in Klein-Gordon theory. They claim that:



"At any fixed time, we can of course expand the field (in the interacting theory) as before (in the free theory) in terms of ladder operators."



I don't see why this should be possible in general. Their argument for the ladder operators and the expansion in plane waves in the case of the free theory was that from the Klein-Gordon equation we get Fourier modes independently satisfying harmonic oscillator equations. However, as far as I can see, once I add an interaction term I don't get these equations anymore.



Answer



You are always free to define $$ a_{\boldsymbol k}\equiv \int\mathrm d\boldsymbol x\ \mathrm e^{ikx}(\omega_{\boldsymbol k}\phi(x)+i\pi(x)) \tag{1} $$ where $\pi=\dot\phi(x)$. If you take the time derivative of this definition, you get $$ \dot a_{\boldsymbol k}= i\int\mathrm d\boldsymbol x\ \mathrm e^{ikx}(\partial^2+m^2)\phi(x) \tag{2} $$ which is non-zero for an interacting field. Therefore, $a=a(t)$, where $t$ is the time slice you chose in $(1)$; in other words, our definition of $a$ is not in general independent of the time slice, so $a$ depends parametrically on the value of $t$ at that slice.


Now, inverting the Fourier transform in $(1)$, you get $$ \phi(x)=\int\frac{\mathrm d\boldsymbol k}{(2\pi)^32\omega_{\boldsymbol k}}\ \mathrm e^{-ikx}a_{\boldsymbol k}(t)+\text{h.c.} $$ which is essentially P&S's statement. Note that, in general, this statement is mostly devoid of any practical meaning; it's just a trivial consequence of the inversion theorem of the Fourier transform.



Saturday 27 April 2019

definition - What is meant by potential energy for a particle in a field?


Potential energy is usually defined using a field and a particle that experiences the field force, as the work done in moving a unit particle from infinity to a position in that field.


But some physics textbooks describe the particle placed there as possessing potential energy, while others say the potential energy is "stored" in the field itself, and these pictures appear to conflict with one another. So what is the modern meaning of potential energy for a particle in a field?




thermodynamics - Is it possible to start fire using moonlight?


You can start a fire by focusing sunlight using a magnifying glass.


I searched the web for whether you can do the same using moonlight, and found this and this - the first two Google search results.



What I found is a thermodynamic argument: you cannot heat anything to a temperature higher than that of a black body using its radiation, and the Moon isn't hot enough.


It may be true, but my gut feeling protests... The larger your aperture is, the more light you collect, and you also get better focus because the Airy disk is smaller. So if you have a really huge lens with a really short focal length (to keep the Moon's image small), or, in the extreme case, you build a Dyson sphere around the Moon (leaving a small hole to let the sunlight enter) and focus all the reflected light onto a point, it should be more than enough to ignite a piece of paper, shouldn't it?


I'm confused. So can you start fires using the Moon?



Answer



Moonlight has a spectral peak around $650\ \mathrm{nm}$ (the sun peaks at around $550\ \mathrm{nm}$). Ordinary solar cells will work just fine to convert it into electricity. The power of moonlight is about $500\,000$ times less than that of sunlight, which for a solar constant of $1000\ \mathrm{W/m^2}$ leaves us with about $2\ \mathrm{mW/m^2}$. After accounting for optical losses and a typical solar cell efficiency of roughly $20\ \%$, we can probably hope to extract approx. $0.1\ \mathrm{mW}$ with a fairly simple foil mirror of $1\ \mathrm{m^2}$ surface area. Accumulated over the course of a whole night with a full moon, this leaves us with around $6\ \mathrm h\times3600\ \mathrm{s/h}\times0.1\ \mathrm{mW}\approx2\ \mathrm J$ of energy. That's plenty of energy to ignite a fire using the right chemicals and a thin filament as a heater.
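The arithmetic in the estimate above can be laid out explicitly. A sketch with the same assumed numbers (1/500 000 of a 1000 W/m² solar constant, 20 % cell efficiency); the 0.25 "optical losses" factor is my assumption, chosen only to reproduce the round 0.1 mW figure:

```python
solar_constant = 1000.0      # W/m^2, round value used in the answer
moon_factor = 1 / 500_000    # moonlight vs sunlight power ratio
cell_efficiency = 0.20       # typical solar cell
optical_losses = 0.25        # assumed factor taking 0.4 mW down to ~0.1 mW
area = 1.0                   # m^2 of foil mirror

power = solar_constant * moon_factor * cell_efficiency * optical_losses * area
print(f"{power * 1e3:.2f} mW")   # prints "0.10 mW"

night = 6 * 3600             # one full-moon night, in seconds
energy = power * night
print(f"{energy:.1f} J")     # prints "2.2 J", the ~2 J quoted above
```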


How does Higgs Boson get the rest mass?



The Higgs boson detected at the LHC is massive. Its high relativistic mass means it has a non-zero rest mass.


The Higgs boson gives other things rest mass. But how does it get rest mass itself?



Answer



Forget about relativistic mass; it's an outdated and, in this case, irrelevant concept. The Higgs boson has a rest mass of about $125\ \mathrm{GeV}/c^2$ assuming it is in fact what the LHC has found.


Anyway, I would say that the Higgs boson does not actually give other particles mass directly; instead, it's a side effect of the mechanism by which those other particles become massive. It just naturally turns out that the particle produced by this mechanism has to be a massive particle itself.



Or to put it another way, the Higgs field would not be able to give other particles mass if it were not itself massive. Take a look at the "Mexican hat" potential shown in this site's logo. The bump in the middle arises because the Higgs field has an associated mass, the mass of the Higgs boson. That bump pushes the "natural" state of the Higgs field off center, which means the field has a nonzero "default" value, called the vacuum expectation value. It's that vacuum expectation value that gives other particles mass. Without the bump, the minimum of the potential would be in the center, which means the vacuum expectation value of the Higgs field would be zero, which in turn would render it incapable of giving other particles mass.


I'll refer you to another answer of mine for some of the mathematical detail.


Where is the potential energy saved?



If you increase the height $h$, the potential energy increases according to $U=mgh$.


Where does the energy go, into atoms?




From Number Theory to Physics.


I have asked a question here:




  • I want to see an example which is related to (integral) quadratic forms or theta series.








@Kiryl Pesotski answered me in some comments as follows:




  • For example, you may want to compute the partition function for the pure point spectrum of the Hamiltonian where the eigenenergies are quadratic in the quantum numbers. This is e.g. particle in a box or the leading order correction due to the $x^3$ and $x^4$ powers to the spectrum of the harmonic oscillator. These series can be written in terms of the Jacobi theta functions.




  • I mean you have the energies being something like $E_{n}=\alpha+\beta{n}+\gamma{n^{2}}$, where $n \in \mathbb{Z}$ or $\mathbb{N}$. The partition function is given by the series $Z=\sum_{\forall{n}}e^{-\beta{E_{n}}}$; such series are used to represent Jacobi theta functions.





  • The other example is the heat equation, which the Jacobi theta function solves. E.g. $\partial_{t}u=\frac{1}{4\pi}\partial^{2}_{x}u$ is solved by $u(x, t)=\theta(x, it)$, where $\theta(x, \tau)$ is the Jacobi theta function.




I don't have any physics background;



Can anyone explain his answer for me in more detail?








Also, I have listed other related questions from the highest vote to the lowest:
[None of them answers my question, except the $6^{\text{th}}$ one, with which I feel a connection.]




  1. Number theory in Physics




  2. Examples of number theory showing up in physics





  3. Why is there a deep mysterious relation between string theory and number theory, elliptic curves, $E_8$ and the Monster group




  4. $p$-Adic String Theory and the String-orientation of Topological Modular Forms (tmf)




  5. Where do theta functions and canonical Green functions appear in physics





  6. Number of dimensions in string theory and possible link with number theory




  7. Are there any applications of elementary number theory to science?




  8. Algebraic number theory and physics






condensed matter - A physical understanding of fractionalization


Is there a physical understanding of fractionalization in condensed matter physics? The textbook approach is theoretical, not physical. I'm thinking of spin-charge separation for electrons, the fractional quantum Hall effect, and things like that. The theoretical approach is to introduce an auxiliary gauge field with no kinetic term at the bare level, so that it is apparently confining and nondynamical there; but somehow dynamics intervenes and it becomes deconfining, somehow there is some mixture between the emergent gauge symmetry and the original symmetries, and somehow fractionalization comes in through the diagonalization.


What is the physical interpretation, without introducing theoretical auxiliary gauge symmetries right from the outset?




kinematics - Why the photon can't produce electron and positron in space or in vacuum?


$$\frac{hc}{\lambda} = K_e + K_p + 2m_e c^2$$ could be the energy conservation equation for a photon of wavelength $\lambda$ decaying into a electron and positron with kinetic energies $K_e$ and $K_p$ and rest mass energy $m_e c^2$.


Why does this decay not occur in space or vacuum?



Answer



You can't simultaneously conserve energy and linear momentum.


Let the photon have energy $E_{\gamma} = p_{\gamma} c$ and the electron have energy $E_{-}^{2} = p_{e}^{2}c^2 + m_{e}^{2}c^4$ and an analogous expression for the positron. Suppose the electron and positron depart from the interaction site with an angle $2\theta$ between them.


Conservation of energy.


$$ p_{\gamma} c = \sqrt{p_{e}^{2}c^2 +m_e^{2}c^4} + \sqrt{p_{p}^{2}c^2 +m_e^{2}c^4},$$ but we know that $p_{p} = p_{e}$ from conservation of momentum perpendicular to the original photon direction. So $$ p_{\gamma} = 2\sqrt{p_{e}^2 + m_e^{2}c^2}$$


Now conserving linear momentum in the original direction of the photon. $$p_{\gamma} = p_e \cos{\theta} + p_p \cos\theta = 2p_e \cos\theta$$


Equating these two expressions for the photon momentum we have $$p_e \cos{\theta} = \sqrt{p_{e}^2 + m_e^{2}c^2}$$ $$\cos \theta = \sqrt{1 + m_e^{2}c^2/p_e^{2}}$$ As $\cos \theta$ cannot exceed 1, we see that this is impossible.
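The impossibility can be made concrete numerically: for any electron momentum, the $\cos\theta$ required by the two conservation equations exceeds 1. A sketch in units where $c = 1$ and $m_e = 1$ (the trial momenta are arbitrary):

```python
import math

# Units: c = 1, m_e = 1. For each trial electron momentum p_e, the
# conservation equations derived above require
#   cos(theta) = sqrt(1 + m_e^2 / p_e^2),
# which is always > 1, so no real angle theta exists.
for p_e in [0.01, 0.1, 1.0, 10.0, 1000.0]:
    required = math.sqrt(1 + 1 / p_e**2)
    print(p_e, required, required > 1)
```

Even at ultrarelativistic momenta the required value only approaches 1 from above, so the decay is forbidden at any energy (a third body, such as a nucleus, is needed to absorb momentum).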



Friday 26 April 2019

cosmology - How precisely can we date the recombination?



The early universe was hot and opaque. Once it cooled enough, protons and electrons were able to form hydrogen atoms. This made the universe transparent, and was known as recombination. We can see the effects of this as the cosmic microwave background radiation.


How precisely can we date the recombination? Wikipedia tells me that it was 377,000 years after the Big Bang, and that the universe is now 13.7 billion years old. A little math tells me that it was 13,699,623,000 years ago. But how precisely can we measure that? Can we say it was 13,699,623,003 years ago? Can I get that in seconds?



Answer



The first thing to say is that the recombination didn't happen at an instant of time. Before recombination most hydrogen atoms were ionised but there were a few neutral atoms. After recombination most hydrogen atoms were neutral but there were a few that were ionised. During recombination the ratio of ionised to neutral hydrogen atoms changed smoothly. I'd guess the figure of 377,000 years corresponds to a temperature of around 3740K when 50% of the hydrogen atoms are neutral, but the temperature had to fall to around 3100K to get 99% recombination.


The calculation is described in detail in this document. In brief, the evolution of the early universe is described by a solution of the equations of General Relativity called the FLRW metric. Using this solution, and observations of the current universe, we can calculate the properties of the early universe and in particular its temperature.


The reason that temperature matters is because it's the temperature that determines whether a hydrogen gas is ionised. If you take hydrogen at room temperature it's obviously neutral, and as you heat it the collisions between hydrogen atoms get increasingly energetic until around 3100K they get energetic enough to start ionising atoms and form a plasma. So as we look back in time, and the universe gets hotter, there's a point where the temperature reaches 3100K and the hydrogen starts being ionised.


The calculation of the ionisation as a function of temperature is complicated because the early universe wasn't in equilibrium, but physicists are good at this sort of thing (not me - I have no idea how to calculate it! :-)


Re your last question, the age of the universe is 13.75 ± 0.11 billion years, i.e. the error in the calculated age is 110,000,000 years. So you would have to say the time since recombination is 13,699,623,003 ± 110,000,000 years, and obviously it's silly to give the number to more than 4 digits.
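The significant-figures point can be made explicit with a short calculation. A sketch using the quoted 13.75 ± 0.11 billion years; the conversion to seconds assumes a Julian year of 365.25 days:

```python
age = 13.75e9            # years since the Big Bang (quoted value)
age_err = 0.11e9         # quoted uncertainty, years
recombination = 377_000  # years after the Big Bang

since = age - recombination
print(f"{since:,.0f} +/- {age_err:,.0f} years")
# The 110,000,000-year uncertainty swamps the 377,000-year offset, so only
# the first three or four digits of 'since' carry any meaning.

SECONDS_PER_YEAR = 365.25 * 24 * 3600    # Julian year, an assumption
print(f"{since * SECONDS_PER_YEAR:.3g} s")
```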


relativity - What is the speed of time?



When we measure the speed of a moving element we do it with the help of a reference frame. Now if we need to measure the speed of time, is it possible? Does time actually have a speed?



Answer



I'm going to dare to give a very brief answer that's likely not what most folks would expect, but is deeply rooted in experiment:


The speed of time is just the speed of a clock -- that is, of how fast some kind of a repeated cycle can be done.


Clocks thus only have meaning relative to each other. You can set one as a standard, then measure others by it, but you can never really define "the" time standard.



That is actually a very Einstein way of defining time -- which is to say, it's a very Mach way of defining time, since Einstein got much of his insistence on hyper-realism in defining physics quantities from Mach.


Now, most likely you thought I was going to answer that there is some kind of velocity of an object along a time axis $t$ that has "length" in much the same fashion as X or Y or Z, not in terms of cycles. That is certainly what comes to mind for me, in fact!


While viewing $t$ as having ordinary XYZ-style length turns out to be an incredibly useful abstraction, it's difficult experimentally to make $t$ behave fully like a length. The main reason is that the clock with its cycles keeps sticking its nose in and requiring that at some point you sort of "borrow" a space-like axis from XYZ space and use it to write out a sequence of clock cycles (called proper time, or $\tau$) on paper. As a result, it's not really $t$ you are drawing in those diagrams. You are instead borrowing a bit of ordinary space and mapping clock cycles onto it, making them seem like a length more through the way you represent and order them than in how they actually work.


Fortunately, there is a different and more satisfying approach to the question of whether time has length, one that is suggested by special relativity, or SR. SR says in effect that XYZ space and $t$ are interchangeable, and in a very specific way. So, even though there's always a need to write out some cycles in diagrams -- proper time happens! -- you can argue that there is nonetheless a limit at which objects traveling closer and closer to the speed of light look more and more as if their time axis has been changed into a static length along some regular XYZ direction of travel.


So, by this take-it-to-the-limit kind of thinking, you can construct a more explicit concept of $t$ as an axis with XYZ-style length.


It also provides a pretty good answer to your question. Since proper time comes to an almost complete stop as an object nears the speed of light, you can say that you have in effect "stolen" the velocity of that object or spaceship through time (from your perspective or frame, not hers!) and converted it fully into a velocity through space (from your perspective).


So there is your answer: That "stolen" velocity along $t$ appears to correspond most closely with the velocity of light $c$ in ordinary space, since that is the real-space velocity at which proper time $\tau$ comes (at the limit) to a complete halt. This idea that objects "move" at the speed of light along the $t$ axis is in fact a very common assumption in relativity diagrams. It shows up for example whenever you see a light-cone diagram whose cone angle is $45^\circ$. Why $45^\circ$? Because that's the angle you get if you assume that the "velocity" of light along the $t$ axis is identical to its velocity $c$ in ordinary XYZ space.


Now, is there some slop in how that could be interpreted? You bet there is! The idea of a "velocity" in time is for example problematic in a number of ways -- just try to write it out as a derivative and you'll see what I mean. But taking such a perspective at least in terms of how to think of the issue gives a really nice simplicity to the units involved, as well as that conceptual simplicity in how to think of it. More importantly, where such simplicity keeps popping up in the representations of something in physics, it's almost certainly reflecting some kind of deeper reality that really is there.


homework and exercises - Electric field of a finite, conducting plate



Let us assume a finite, conducting plate of dimensions $10\mathrm{m} \times 10\mathrm{m} \times 1\mathrm{m}$. I want to determine the electric field at the middle of one of the plate's $10\mathrm{m} \times 10\mathrm{m}$ surfaces. Using Gauss's law one finds the electric field to be:
$$E= \frac{\rho}{\epsilon_0}$$ and we see that the electric field does not depend on the distance from the surface. I know that's the solution for an infinite plate. For a finite plate that doesn't seem very realistic. I assume the electric field is not orthogonal to the surface but diverges; am I right with that assumption? Somehow the field has to decrease with distance. How do I modify my approach with Gauss's law so that I find the right solution for a FINITE plate?



Answer



You can't do this problem with Gauss's law, because you don't have the symmetry needed to assume the direction of the electric field. You have to break the square down into differential bits with area $\mathrm{d}x\,\mathrm{d}y$, and then integrate Coulomb's law.
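The suggested integration can be sketched numerically with a midpoint Riemann sum, here on the symmetry axis above the centre of the plate (illustrative grid size; units chosen so $\sigma/4\pi\epsilon_0 = 1$, in which the infinite-plate value $\sigma/2\epsilon_0$ becomes $2\pi$):

```python
import numpy as np

def field_on_axis(z, L=10.0, n=400):
    """E_z at height z above the centre of a uniformly charged L x L plate,
    by a midpoint Riemann sum of Coulomb's law (units: sigma/(4 pi eps0) = 1)."""
    xs = (np.arange(n) + 0.5) / n * L - L / 2
    X, Y = np.meshgrid(xs, xs)
    dA = (L / n) ** 2
    return np.sum(z / (X**2 + Y**2 + z**2) ** 1.5) * dA

infinite_plate = 2 * np.pi  # sigma / (2 eps0) in these units

for z in [0.1, 1.0, 5.0, 20.0]:
    print(z, field_on_axis(z) / infinite_plate)
# Close to the plate the ratio approaches 1 (the infinite-plate result);
# far away the field falls off roughly like a point charge, unlike the
# distance-independent Gauss's-law answer.
```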


thermodynamics - Why are most distribution curves bell-shaped? Is there any physical law that leads the curves to take that shape?



All the graphs shown below come from completely different fields of study and still they share a similar distribution pattern.




  1. Why are most distribution curves bell-shaped? Is there any physical law that leads the curve to take that shape?




  2. Is there any explanation in quantum mechanics for these various graphs taking that shape?




  3. Is there any intuitive explanation for why these graphs are bell-shaped?





Following is Maxwell’s Distribution of Velocity Curve, in Kinetic Theory of Gases.




Following is Wien's Displacement Law, in Thermal Radiation.




Following is the Distribution of Kinetic Energy of Beta Particles in Radioactive Decays.





Answer



First, distributions are not always bell-shaped. A very important set of distributions decrease from a maximum at $x=0$, such as the exponential distribution (delay times until a random event such as a radioactive decay) or power-laws (size distributions of randomly fragmenting objects, earthquakes, ore grade, and many other things).



Stable distributions


Still, there is a suspicious similarity between many distributions. These come about because of statistical laws that make them "attractors": various very different random processes go on, but their results tend to combine to form similar distributions. As Bob mentioned, the central limit theorem makes addition of independent random factors (of finite variance!) approach a Gaussian distribution (since it is so common it is called the normal distribution). Strictly speaking, there are a few other possibilities. If random factors are instead multiplied, the result is the log-normal distribution. If we take the maximum of some random things, the distribution will approach a Weibull distribution (or, a few others). Basically, many repeated or complex processes tend to produce the same distributions over and over again, and many of those look like bell-shapes.
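The central-limit behaviour mentioned above is easy to demonstrate: sums of uniform random numbers, which individually look nothing like a bell curve, rapidly approach a Gaussian (a sketch with arbitrary sample sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row: the sum of 50 independent uniform [0, 1) draws.
sums = rng.random((100_000, 50)).sum(axis=1)

# The CLT predicts approximately Normal(mean = 50*0.5, var = 50/12).
print(sums.mean())   # ~ 25.0
print(sums.std())    # ~ sqrt(50/12) ~ 2.04

# Fraction of samples within one predicted standard deviation of the mean:
sigma = np.sqrt(50 / 12)
frac = np.mean(np.abs(sums - 25.0) < sigma)
print(frac)          # ~ 0.68, as for a Gaussian
```

Replacing the sum with a product would instead drive the histogram toward the log-normal shape mentioned in the next sentence.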


Maximum entropy distributions


Why is that? The deep answer is entropy maximization. These stable distributions tend to maximize the entropy of the random values they produce, subject to some constraint. If you have something positive and with a specified mean, you get the exponential distribution. If it is positive but there is no preferred scale, you get a power-law. Specified mean and variance: Gaussian. Maximal entropy in phase space for given mean energy: Maxwell-Boltzmann.


Statistical mechanics


This is where we get back to physics. A lot of physical processes obey statistical mechanics, which runs by the equal a priori probability postulate:



For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.



If we know the energy and number of particles exactly each allowed microstate is equally likely (maximizes entropy), but anything macroscopic we calculate or measure will be a function of these random microstates - so its distribution will be bunched up if there are a lot of microstates that can generate that macrostate. If it has fixed particles but we only know the average energy, each state has probability $(1/Z)e^{-E/k_B T}$ where $E$ is their energy, $Z$ is a normalizing constant and $T$ the temperature: this distribution, the Boltzmann distribution, maximizes entropy with the constraint that the average energy is fixed. Similar distributions work when the number of particles can change.



Quantum mechanics


Finally, this links to quantum mechanics: QM describes the set of possible microstates, and from that plus statistical mechanics one can calculate the statistical distributions of macroscopic things like emitted photons of different wavelengths, gas molecule speeds, or kinetic energy distributions. The number of states available affect what curves we get, and the constraints of the experiment fix parameters like energy or temperature, but since nature is entropy-maximizing we get the entropy-maximizing distributions that fit these inputs.


They are often loosely bell-shaped since there are more states available for high energies (the curve grows from low values at low energy) but the system cannot put all particles into high energy states while keeping the (average) energy constant (the curve has to decline beyond a certain point). But this is the average of a myriad micro-events that all have more complex or discrete distributions.


thermodynamics - Should I heat my room when I'm not here, energy-efficiently speaking?



I was wondering, as it's getting cold: is it better for my electricity bill to shut my (electric) heater down completely during the day and turn it on again when I come home (it will then have to heat the room from something like 5°C to around 20°C), or should I keep the temperature at, say, 15°C while I'm away?


I would naively guess turning it off is better, but I'm also wondering about the thermal inertia of the walls, for instance...



Answer



This problem is very simple, but it's easy to overcomplicate by looking at too small a scale. At every second – no matter what the heater does – you waste money by heating the outside of your house. The rate of heating – and thus the rate at which you waste money – is given by Newton's Law of Cooling. So


$$ \text{Wasted money} \propto \int (T_\text{in}-T_\text{out})\;dt $$


The lower your house's temperature, the less money you waste – no matter what. So set the thermostat to the lowest practical temperature when you're away.
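The integral argument can be checked with a toy simulation of Newton's law of cooling. A sketch; the cooling constant, temperatures, and absence duration are arbitrary assumptions:

```python
T_OUT = 5.0      # outside temperature, deg C
K = 0.2          # cooling constant, 1/hour (arbitrary)
DT = 0.01        # time step, hours
AWAY = 8.0       # hours spent away

def heat_loss(t_hold=None):
    """Integral of (T_in - T_out) dt over the absence, which the answer
    argues is proportional to the wasted money. With t_hold=None the
    heater is off and the house cools freely from 20 deg C."""
    T, total = 20.0, 0.0
    for _ in range(int(AWAY / DT)):
        T -= K * (T - T_OUT) * DT            # Newton's law of cooling
        if t_hold is not None and T < t_hold:
            T = t_hold                       # thermostat holds the setpoint
        total += (T - T_OUT) * DT
    return total

loss_20 = heat_loss(20.0)   # keep the room at 20 C all day
loss_15 = heat_loss(15.0)   # hold 15 C while away
loss_off = heat_loss()      # heater completely off

print(loss_20, loss_15, loss_off)
# The loss integral (and so the bill) strictly decreases as the holding
# temperature drops; the evening reheat only restores the heat that these
# losses drained from the walls and air.
```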


Thursday 25 April 2019

electrostatics - Is charge quantization correctly interpreted in classical electrodynamics?


As I tried to answer the SE question Electric fields in continuous charge distribution, I faced many ambiguities regarding this matter. In classical electrodynamics, it is claimed that the electric field near a vast plate is constant (uniform) considering that the charges are continuously distributed (uniformly) all over an infinite plane. The electric field is thus calculated to be:


$$E=\frac{\sigma}{2\epsilon_0} \space ,$$



where $\sigma$ is the surface charge density. However, in reality, charge is quantized, and the electric field at about one particle radius from such a particle (a proton, for instance) is of the order of $E\approx 10^{22} \space V/m$, which is extremely high. When I considered $\frac{L}{d}×\frac{L}{d}=\frac{L^2}{d^2}$ such particles forming an $L×L$ mesh of charges, so that the distance between every two successive charges is $d$ along both the $x$ and $y$ axes, I deduced:


$$E=\frac{e}{\pi\epsilon_0} \sum_{n=0}^{\frac{L/2}{d}}\sum_{m=0}^{\frac{L/2}{d}}\space \frac{z}{(n^2d^2+m^2d^2+z^2)^{3/2}} \space$$


where $e$ is the charge of each particle (proton). Moreover, I assumed $L\gg d \gg z$. The above equation gives the field of the mesh at a distance $z$ directly above the charge located at the center of the plate. However, as $z$ approaches zero, I found numerically that $E$ tends to infinity. This obviously violates the classical result $E=\sigma/(2\epsilon_0)$, which indicates a constant field for a large, uniformly charged plane. Does my calculation show that the electric field very close to, say, a plate capacitor is no longer constant but rather extremely large?


If this deduction is correct, the field component of a single particle (proton) parallel to the plate can also get high values very close to each proton, whereas we know that the E-field is zero inside the plate, otherwise we would have current. Where is the problem?
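The crossover the question describes can be reproduced by evaluating the lattice sum directly. A sketch in units $d = 1$ and $e/4\pi\epsilon_0 = 1$, in which the continuum value $\sigma/2\epsilon_0$ (with $\sigma = e/d^2$) becomes $2\pi$; the lattice size is an arbitrary truncation:

```python
import numpy as np

def mesh_field(z, N=500):
    """E_z at height z above one charge of a (2N+1) x (2N+1) square lattice
    with spacing d = 1, in units where e/(4 pi eps0) = 1."""
    n = np.arange(-N, N + 1)
    X, Y = np.meshgrid(n, n)
    return np.sum(z / (X**2 + Y**2 + z**2) ** 1.5)

continuum = 2 * np.pi  # sigma/(2 eps0) with sigma = e/d^2, same units

print(mesh_field(5.0) / continuum)   # z >> d: close to 1, the uniform-plate value
print(mesh_field(0.01) / continuum)  # z << d: dominated by the nearest charge, huge
```

So the classical result survives for $z \gg d$, while very close to an individual charge the field does blow up, exactly as the question suspects.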




newtonian mechanics - How can the contact point of rolling body have zero velocity?


They say that for a rolling body, the velocity of the contact point is zero. I'm not getting this. How can it be zero when it's in continuous motion?




group theory - How to get result $3 otimes 3 = 6 oplus bar{3}$ for $SU(3)$ irreducible representations?


Let's have the $SU(3)$ irreducible representations $3, \bar{3}$. How do we get the result that $$ 3\otimes 3 =6 \oplus \bar{3}~? $$ I'm interested in the $\bar{3}$ part. It's clear that for $3 \otimes 3$ we can use tensor rules, expanding the corresponding matrix into a symmetric part $6$ and an antisymmetric part. But why do we have $\bar{3}$, not $3$, for the antisymmetric part?




particle physics - Photocurrent's dependence on frequency



Sounds like a rookie question, this, but could someone please explain to me why photocurrent doesn't increase when we increase the frequency of the incident radiation? I mean, an increase in frequency means the photons have higher energy ($E=hf$), and this increased energy should be passed on to the emitted photoelectrons. Since the photoelectrons then have higher kinetic energy, they obviously have higher velocity, and according to the formula $I = neAv_d$ the current should increase, but it doesn't (at least, that's what's written in my school textbook). It'd be great if someone could please explain this to me!



Edit: I didn't assume that a higher frequency would knock off multiple electrons. What I am asking can be explained like this. Let's say, for simplicity, that the photodetector is a kilometer away from the photoelectrons, and it shows the current as the number of electrons which reach it each second. Now, let's take a case in which, say, the radiated light emits a total of 10 electrons from a given photosensitive material at a given frequency and intensity, 5 with a velocity of 1 km/s and the other 5 with a velocity of 500 m/s. Obviously, after a second, only 5 electrons would have reached the photodetector and it'd show the current as 5 electrons per second. Now, in another case with the same apparatus, let's increase the frequency of the light without changing its intensity such that the velocities of the emitted electrons roughly double. Even though the emitted electrons are still the same, the velocities of the electrons would now be 2 km/s for 5 electrons and 1 km/s for the other 5 electrons. Now, obviously after a second, all 10 electrons would have reached the photodetector as compared to only 5 when the frequency was low, and the photodetector would show the current as 10 electrons per second. This certainly contradicts the fact that photocurrent is independent of frequency (given the photon energy is above the work function of the photosensitive material), so what I'm asking is simply how to explain this contradiction.


Thank you!




Definitions in thermodynamics: temperature, thermal equilibrium, heat


I'm currently reading Fermi's "Thermodynamics" and I'm trying to grasp the (possibly different) right definitions for temperature, thermal equilibrium, heat.


To clarify, I'm looking for definitions from a purely thermodynamical point of view, which is also the line followed by the book.




Let's start with the latter. We can define heat through the first principle of thermodynamics:$$Q=\Delta U + L ,$$that is, "Heat is the quantity of energy that a system absorbs from the environment in a form that is not mechanical work." OK, I see no problem with this, apart from the fact that we need to define the energy $U$ of a thermodynamical system. Let's ignore it.




Now thermal equilibrium. This is the one I'm finding the most trouble with, because in all the definitions I've come across (maybe not very good ones, or maybe it's my interpretation) there's some reference to temperature, while in the definition of the latter there's reference to the concept of thermal equilibrium; but one has to start somewhere. For example, from (IT) Wikipedia, I read:



Thermal equilibrium: there is no flux of heat, temperature is constant in time and is the same in every point of the system.






The way that temperature is defined in the book is, first of all, the empirical (operational) one:



Temperature can be measured by putting a thermometer in contact with the system, for a sufficient time interval so that thermal equilibrium is established.



Some pages later there's also mention of the gas thermometer. Finally, in the “Second principle of Thermodynamics” chapter, it's said:



Until now, we've only made use of an empirical scale of temperature. [...]If we put in thermal contact two bodies at a different temperature, heat will flow spontaneously by conduction from one body to the other. Now, by definition, we will say that the body from which the heat flows is the one with the higher temperature.




Now, clearly the definition in the first blockquote requires thermal equilibrium (within a body itself and between two bodies, I suppose) to be independently defined. Regarding the second, how can one tell the direction of the heat flow? Also, the second definition doesn't give a method to measure temperature, but only a way to tell which body is hotter, right?






As I put them above, those definitions seem to me like random pieces of a puzzle; I need to get a clearer picture. So any help is appreciated.



Answer



I agree with you that most books do not follow a logical path when defining thermodynamics terms. Even great books such as Fermi's and Pauli's.


The first thing you need to define is the concept of thermodynamic variables.



Thermodynamic variables are macroscopic quantities whose values depend only on the current state of thermodynamic equilibrium of the system.




By thermodynamic equilibrium we mean that those variables do not change with time. Their values at equilibrium cannot depend on the process by which the system reached equilibrium. Examples of thermodynamic variables are: volume, pressure, surface tension, magnetization... The equilibrium values of these quantities define the thermodynamic state of a system.


When a thermodynamic system is not isolated, its thermodynamic variables can change under the influence of the surroundings. We say the system and the surroundings are in thermal contact. When the system is not in thermal contact with the surroundings we say the system is adiabatically isolated. We can then define:



Two bodies are in thermal equilibrium when they - in thermal contact with each other - have constant thermodynamic variables.



Now we are able to define temperature. From a purely thermodynamic point of view this is done through the Zeroth Law. A detailed explanation can be found in this post. Basically,



We say that two bodies have the same temperature if and only if they are in thermal equilibrium.



Borrowing the mechanical definition of work, one can - by way of experiments - observe that the work needed to achieve a given change in the thermodynamic state of an adiabatically isolated system is always the same. This allows us to define this value as a change of internal energy, $$W=-\Delta U.$$



By removing the adiabatic isolation we notice that the equation above is no longer valid and we correct it by adding a new term, $$\Delta U=Q-W,$$ so



The heat $Q$ is the energy the system exchanges with the surroundings in a form that is not work.
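A quick worked example of the sign convention (illustrative numbers): if a gas absorbs $Q = 100\ \mathrm{J}$ of heat while doing $W = 40\ \mathrm{J}$ of work on its surroundings, then

```latex
\Delta U = Q - W = 100\,\mathrm{J} - 40\,\mathrm{J} = 60\,\mathrm{J},
```

so the internal energy rises by the energy absorbed that was not spent as work.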



Notice that I have skipped more basic definitions such as thermodynamic system and isolated system, but these can easily and logically be defined within this construction.


thermodynamics - Is osmosis a kind of thermal machine?


The hyperphysics article defines osmosis as a process driven by internal energy, and internal energy is defined as energy associated with the random, disordered motion of molecules. Osmosis develops pressure and lifts weight, so it has the output of a machine.


But what drives osmosis? Is osmosis driven at the expense of thermal energy although there is no temperature gradient?




Answer



Perhaps surprisingly, the osmotic pressure is related to the vapour pressure of the solvent. To see why, consider this thought experiment:


Vapour pressure


The vapour pressure of the pure solvent, $P$, is greater than the vapour pressure of the solvent with some solute added, $P'$. This is simply because the mole fraction of the solvent is higher when it's pure than when something else is present - see Raoult's Law for more details. So if we put both beakers inside a sealed box there will be a net transfer of solvent from the pure beaker into the beaker with the solute.


Obviously there is no vapour present in an osmosis cell, but the thermodynamics are the same, i.e. the change in free energy of the solvent moving from the pure side to the solution side is the same as if we evaporated some of the pure solvent then condensed it into the solution.


The molar change in free energy in evaporating then condensing the solvent is:


$$ \Delta G = -RT \ln\frac{P'}{P} $$


and the work done in moving one mole of solvent against an osmotic pressure $\Pi$ is:


$$ W = \int_0^\Pi VdP = V\Pi $$


where $V$ is the molar volume and we assume $V$ is constant, i.e. not dependent on the pressure. At equilibrium the free energy change will be equal to the work done, so we equate the two equations above to get:



$$ \Pi = \frac{RT}{V} \ln \frac{P}{P'} $$


and since $P \gt P'$ that means $\Pi \gt 0$.
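To get a feel for the magnitudes, here is a rough numerical sketch of the final formula (assumed values: water at room temperature, a dilute solution with solvent mole fraction 0.99, and Raoult's law $P'/P = x_{\text{solvent}}$):

```python
import math

R = 8.314     # gas constant, J/(mol*K)
T = 298.15    # room temperature, K
V_m = 1.8e-5  # molar volume of water, m^3/mol

# Raoult's law: P'/P equals the solvent mole fraction (assumed 0.99 here)
x_solvent = 0.99
Pi = (R * T / V_m) * math.log(1.0 / x_solvent)  # osmotic pressure, Pa
print(Pi / 1e5)  # ≈ 13.8 bar - a surprisingly large pressure for a 1% solute
```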


All very well, but none of this has addressed the microscopic origin of the pressure, and I would guess that's what you're really after, i.e. how do we explain the pressure by considering single solvent molecules? Well, the simplest case is when the solvent and solution are ideal fluids, so the only effects are due to entropy. There are more solvent molecules per cubic metre in the pure solvent than in the solution, so assuming the solvent molecules move randomly, it is more probable that a molecule will move from the pure side to the solution side than vice versa.


Wednesday 24 April 2019

lagrangian formalism - Boundary terms and Symmetries


Consider Maxwell-Chern-Simons theory in 2+1 dimensions, with Lagrangian $$L = -(1/4)F_{\mu\nu}F^{\mu\nu} + (m^2/4)\,\epsilon_{\mu\nu\rho}A^\mu F^{\nu\rho}.$$ When I make a gauge transformation $A_\mu \to A_\mu + \partial_\mu\lambda$, the Lagrangian changes by a total derivative, which can be converted to a surface integral. We usually assume this surface term vanishes at large distances. If we don't assume this, does it change the symmetries, or does it change the conserved quantities (i.e. the momentum, angular momentum operators, etc.)?




supersymmetry - How can the mass of Higgs give preference to SUSY vs multiverse?


According to the documentary Particle Fever, the precise value of the Higgs boson's mass could give more credence to either SUSY or multiverse theories. If the mass had been 115 GeV or below, SUSY would have been favored, whereas a mass above 140 GeV would have given preference to the multiverse.


Is there a way to understand this connection? How can the mass of a particle give input to the likelihood of a particular physical theory (and in particular the two discussed here)?



I'm especially interested in a graduate-level but qualitative explanation, though any level would be great.


Note: a Physics.SE question discusses Higgs, SUSY, and multiverse, but does not give an explanatory answer for the connection.



Answer



I have not seen the film. But this was not "supersymmetry versus multiverse". It was "supersymmetry without multiverse" versus "supersymmetry with multiverse".


According to quantum field theory, a light Higgs boson (light compared to "grand unification" energies) should still look heavy because of virtual particle effects, unless these effects mostly cancel each other out. This is a feature of traditional supersymmetric models, and 115 GeV was a value coming from that sort of theory.


However, it can be difficult to make such a model that gets everything right, experimentally. You may have to suppose that the various parameters of the theory assume values that are "just right", e.g. that one parameter is very small, or that two parameters are almost the same - and there will be nothing in your theory which implies this. You will just be "fine-tuning" it, in order to have certain undesirable effects not show up.


In the past, the need for such parametric fine-tuning might be regarded as reason to reject a theory, if a causal mechanism for the fine-tuning couldn't be found. But "the anthropic principle" or "environmental selection" gives us a potential new reason why physics might look fine-tuned: perhaps other values of the parameters are inconsistent with the existence of life/atoms/etc. There might be a "multiverse" in which the parameters take different values in different places, but life is only possible in those places where the parameters take values which allow, e.g., something like complex stable chemistry to develop.


140 GeV was a prediction coming from one of those arguments. Here is the paper. But you'll see that this is still a theory with supersymmetry! It's just that it's a supersymmetric model which contains some anthropic fine-tuning too.


I want to very strongly emphasize that 115 GeV and 140 GeV are in no way the predictions coming from these two approaches - they are just examples. They may have been discussed in the film, because there were some experimental false alarms (in the search for the Higgs boson) at those energies. But we are talking about two types of theory - a supersymmetric theory with untuned parameters, and a supersymmetric theory with parameters tuned by anthropic selection - and if the details are different, the predictions are different.


Indeed, go to pages 25-26 of the multiverse paper, and you will see no less than four special values of the Higgs boson mass listed, each of which they think might be indicative of anthropic selection within a multiverse. The reason is that they don't have an exact model of how physics works throughout their multiverse - they are just guessing at the principles governing what variations are allowed from place to place. In the paper they favor 140 GeV, but here they are saying that if the truly fundamental physics works in some other way, then maybe one of these other values would be favored.



They list 128 GeV, which is sort of close to the value that was ultimately found, and say (page 26) "a Higgs mass near 128 GeV would provide strong evidence for the multiverse, although not quite as strong as might occur for a value near 141 GeV". In this regard, one should consider a "secretly famous" paper by Shaposhnikov and Wetterich, which actually did predict the right value - 126 GeV - several years in advance, and which didn't use the multiverse or supersymmetry. Instead, they assumed that quantum gravity has the property of "asymptotic safety" at high energies. This is an unfashionable assumption because it seems to contradict standard ideas about black hole entropy... However, my real point is that the right mass for the Higgs boson can possibly be obtained without the use of anthropic effects. And indeed, there are now some string-theory models in which the right value is produced by a physical cause rather than an anthropic tuning.


optics - Why is the sun brighter in Australia compared to parts of Asia?


Background:



I've lived in Philippines for several years, and visited other parts of Asia occasionally (Singapore, Indonesia, Hongkong).


I just moved to Western Australia a few months ago and I noticed that the sun is brighter here, in the sense that just after sunrise and just before sunset the sun shines so brightly that it is blinding. This happens almost every day, so this isn't just some one-off thing.


In Asia, this never occurred to me. The sun was always bearable to the eyes.


Why is this so?



Answer



Clean, dry air lets sunlight through; dirty, moist air scatters it. Aerosols (small airborne particulate contamination) are more prominent near areas of dense population - due to power plants, cars, fires, ... These particles form nucleation sites for moisture - and the resulting small water drops become very effective scatterers of sunlight.


The humidity is high in the Philippines, and it's low in Western Australia (Perth).


A map of the nitrogen dioxide concentrations in the earth's atmosphere (a proxy for 'man made pollution') shows that the region around Western Australia is quite low in pollution, while a lot of South East Asia is quite high (map from http://www.esa.int - European Space Agency):


enter image description here


A map of the particulate pollution (PM2.5 - particulate matter less than 2.5 micron) confirms the picture (credit: Aaron van Donkelaar, Dalhousie University. Source at http://www.nasa.gov/images/content/483910main1_Global-PM2.5-map-670.jpg):



enter image description here


Although it's not terribly easy to see on this map, the air in Western Australia is quite clear - so there will be less "stuff" for light to travel through / scatter off.


This is especially noticeable near sunrise/sunset, when the length of the path through the atmosphere is longest. This amplifies the difference.


A bit more data to back this up:


Map of typical humidity distribution in Manila (source: http://weatherspark.com/averages/33313/Metro-Manila-Philippines):


enter image description here


And for Perth (source: http://weatherspark.com/averages/34080/Redcliffe-Western-Australia):


enter image description here


These plots show the distribution of the "average daily high and low" values of humidity as a function of date, for both locations. Thus, you can see that the average high for humidity is lowest on April 23 - at which point it's still 89%. The inner (darker colored) band represents the 25 - 75 percentile of the distribution, and the outer (lighter colored) band represents the 10 - 90 percentile. In other words - on April 23, maximum humidity in Manila might be at or below 82% one day in four; but on August 17 it is above 95% more than half the time.


Note that the vertical scale on the two plots is different - the minimum values in Perth are considerably lower than for Manila...



Here is a link to a very interesting and unusual photo sequence of a setting sun showing the phenomenon of the "green flash". This particular sequence was taken in Libya, and the photographer states:



The air was so clean and dry that it was difficult to look directly at the Sun even when it was only a sliver above the horizon. I have never seen the sky quite like this before. As the sun was going down, you could not look at it at all naked-eye; even to the very last moment it was too bright.



That supports my understanding that dry, clean air == bright sunsets.


UPDATE: in the comments, somebody asked the question "what is this stuff that is doing the absorbing?". As was pointed out, water vapor is not a very good absorber of light in the optical regime - the vibration modes of water molecules are excited in the infrared. However, on page 12 of http://www.learner.org/courses/envsci/unit/pdfs/unit11.pdf we read:



Air molecules are inefficient scatterers because their sizes are orders of magnitude smaller than the wavelengths of visible radiation (0.4 to 0.7 micrometers). Aerosol particles, by contrast, are efficient scatterers. When relative humidity is high, aerosols absorb water, which causes them to swell and increases their cross-sectional area for scattering, creating haze. Without aerosol pollution our visual range would typically be about 200 miles, but haze can reduce visibility significantly



This agrees with @WhatRoughBeast's observation that haze aerosols are ultimately the "stuff" that scatters the light - a combination of particles in the air (many of which are man made, and will be present in higher concentrations near densely populated regions - especially ones where coal fired power plants operate) and humidity which causes the aerosols to increase in size, making them more effective scatterers.



Tuesday 23 April 2019

special relativity - Is $E^2=(mc^2)^2+(pc)^2$ correct, or is $E=mc^2$ the correct one?


I have been having trouble distinguishing these two equations and figuring out which one is correct. I have watched a video that says that $E^2=(mc^2)^2+(pc)^2$ is correct, but I do not know why. It says that $E=mc^2$ is the equation for objects that are not moving and that $E^2=(mc^2)^2+(pc)^2$ is for objects that are moving. Here is the link to the video: http://www.youtube.com/watch?v=NnMIhxWRGNw
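A small numerical check of how the two formulas relate (using an electron as an illustrative particle): at $p = 0$ the general relation $E^2=(mc^2)^2+(pc)^2$ reduces exactly to $E=mc^2$, and for a moving particle it exceeds the rest energy.

```python
import math

c = 2.998e8    # speed of light, m/s
m = 9.109e-31  # electron rest mass, kg

def total_energy(p):
    """Energy-momentum relation E^2 = (m c^2)^2 + (p c)^2."""
    return math.sqrt((m * c**2)**2 + (p * c)**2)

rest = m * c**2
# at p = 0 the general formula reduces to E = mc^2
assert math.isclose(total_energy(0.0), rest)

# an electron moving with momentum p = mc has E = sqrt(2) * mc^2
E = total_energy(m * c)
print(E / rest)  # ≈ 1.414
```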




homework and exercises - Effective resistance of a weird looking electric circuit



enter image description here


An electric circuit (in the picture) is given where all the resistances are 1 ohm. I have to find its equivalent resistance. (Points 9, 10 and 11 are connections.)


My attempt: I think the electron flow will follow 2 paths: 1 2 9 3 10 4 11 5 6 and 1 8 7 6. So the resistances along each of these 2 paths are in series combination. So, the equivalent resistances of paths 1 and 2 are 2 and 3 ohm respectively. As these 2 paths are in parallel, the equivalent resistance would be 6/5 ohm.


Am I right? I am assuming that 3 short-circuits are present (2 9 3, 3 10 4, 4 11 5) in those 3 subcycles of the whole circuit.


I think my attempt is wrong. Any hint?



Answer



Current will take all possible paths. There are more than 2 possible paths, and they are not connected in series or parallel.


First try simplifying the circuit. The resistors on the top row are all shorted out, so they can be removed without affecting the circuit. The diagonal resistors are connected at the top and bottom to the vertical resistors, so these are in parallel; there are 2 resistors in parallel at the LH and RH branches, and 3 in the middle two branches.



You can then apply Kirchhoff's Rules to the simplified circuit, making use of symmetry. Alternatively, a combination of 3 resistors connected to the same node can be replaced by 3 resistors arranged in a triangle, using the $Y-\Delta$ Transformation. All of the resistors are then in series or parallel.
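As an illustration of the $Y-\Delta$ transformation mentioned above, here is a minimal sketch (the function name is mine, not a standard library routine): a star of three resistors meeting at a node is replaced by a triangle with no internal node.

```python
def y_to_delta(ra, rb, rc):
    """Y (star) resistors ra, rb, rc meeting at a common node are
    equivalent to a Delta (triangle) with resistances r_ab, r_bc, r_ca,
    where r_ab = (ra*rb + rb*rc + rc*ra) / rc, and cyclically."""
    s = ra * rb + rb * rc + rc * ra
    return s / rc, s / ra, s / rb  # (r_ab, r_bc, r_ca)

# three equal 1-ohm resistors in a Y become a Delta of 3-ohm resistors
print(y_to_delta(1.0, 1.0, 1.0))  # (3.0, 3.0, 3.0)
```

After the transformation each Delta edge sits directly between circuit nodes, so everything reduces by ordinary series/parallel combination.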


electromagnetism - Time dilation only on electromagnetic force?


We've seen by experiment that the speed of light c appears to be constant for each observer (leading to all well-known consequences of relativity).


I'm wondering if this appearance of constancy of c might be due to the observer's way of measuring it: all observers are bound to compare c to something else which itself is also based on c. A clock based on a photon bouncing between two mirrors (timing how long each bounce takes), for instance, uses that photon's speed to measure everything. A clock like a watch based on springs uses tension forces buried in the spring material (electromagnetic forces are based on c). Quartz crystal oscillators, sand clocks (hourglasses), water clocks - all rely on some mechanism like friction or piezoelectricity which is fundamentally electromagnetic.
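The light-clock idea in the paragraph above can be made quantitative. A minimal sketch, in units where c = 1 (the function name is mine): seen from a frame in which the clock moves, the photon traces a longer diagonal path, so each tick is stretched by the Lorentz factor.

```python
import math

L = 1.0  # mirror separation, in light-seconds; c = 1 in these units

def tick_time(v):
    """Duration of one up-and-down bounce of a light clock, measured
    in a frame where the clock moves at speed v (with v < 1 = c).
    The diagonal photon path yields the Lorentz factor gamma."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * 2.0 * L

print(tick_time(0.0))  # 2.0 - the proper tick time
print(tick_time(0.8))  # ≈ 3.33 - the moving clock ticks more slowly
```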


Nevertheless it is said that the time appears to be going slower, not just all clocks we can build.


My questions now are:


Is there a reasoning (which I just didn't find in my research) why the time as a whole is supposed to be influenced by relativity, not just all events based on the forces based on c? Maybe there even is a word or a term to google for in order to find more about this?



I understand that physicists managed to unite three of the four basic forces, wrapping up electromagnetism with the strong and the weak force. I guess then that these additional two forces also are based on c. Is there any such connection of c to the remaining force, the gravitation?


I could understand that if all existing forces are hinged on c then there is no real difference between saying "all clocks we can build are going slower" and "the time itself is going slower".




Understanding Stagnation point in pitot fluid

What is a stagnation point in fluid mechanics? At the open end of the pitot tube the velocity of the fluid becomes zero. But that should result...