Saturday, 31 March 2018

quantum field theory - Non-Linear Behavior of Iterated Functional Maps


The universal behavior of certain iterated nonlinear function maps (i.e., the period-doubling bifurcation route to chaos), $$x_{i+1}=f(x_i),$$ has been known since Feigenbaum (see http://theworldismysterious.wordpress.com/2013/10/03/441/).


The usual method of solving the Hartree-Fock equations for interacting fermions is a nonlinear iterated functional mapping: $$f_{i+1}(x)=F[f_{i+1}(x),V_i[f_i(x)]]$$ where $F$ is a nonlinear functional of the function $f(x)$. Here $f(x)$ represents the fermion orbitals, and $F$ is an integro-differential operator that is nonlinear in $f$ because it contains both $f$ and an effective potential $V$ that also depends on $f$. The usual method of solution is to guess an effective potential $V_0$, which is used to generate a set of orbitals $f_0(x)$. These orbitals are then used to generate a new effective potential $V_1$, and together they generate a new set of orbitals via the functional iteration above. A solution is obtained when the iteration yields an equivalent set of orbitals on two successive iterations.


Since this mapping is nonlinear it is conceivable that convergence of the iteration may not occur and that something related to the period doubling bifurcations of nonlinear iterated function maps may result. In fact, I found such a situation during research I conducted in 1975.
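A minimal sketch of this kind of behavior, using the logistic map as a stand-in for the true Hartree-Fock functional (all values here are illustrative; the mixing parameter mimics the damping commonly used to force convergence of self-consistent loops):

```python
# Toy model: iterate x_{n+1} = (1 - alpha) * x_n + alpha * f(x_n),
# where f is a nonlinear map standing in for the Hartree-Fock update.
# alpha = 1 is plain iteration; alpha < 1 is "mixing" (damping).

def f(x, r=3.2):
    # Logistic map: its fixed point is unstable for r > 3,
    # so plain iteration falls into a period-2 cycle.
    return r * x * (1.0 - x)

def iterate(alpha, x0=0.4, n=1000):
    x = x0
    history = []
    for _ in range(n):
        x = (1.0 - alpha) * x + alpha * f(x)
        history.append(x)
    return history

plain = iterate(alpha=1.0)    # oscillates between two values
mixed = iterate(alpha=0.5)    # converges to the fixed point ~0.6875

print(plain[-2], plain[-1])   # two distinct cycle points
print(mixed[-2], mixed[-1])   # two (nearly) identical values
```

The plain iteration never satisfies the "equivalent orbitals on two successive iterations" criterion, while the damped one does, which is the analogue of the non-convergence described above.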




  1. My question is: has anyone else encountered such situations either in Hartree-Fock calculations or any other physics calculations that employ nonlinear iterated functional mappings?





  2. A secondary question is: are there published mathematical investigations of nonlinear behavior in iterated functional maps?






quantum information - Qubit (Qudit) equivalence with bits/bytes/Kbytes



What is the conversion factor for qubits (qudits) to bits/bytes in classical information theory/computation theory?


I mean, how can we know how many classical bits (or dits) a quantum computer of, e.g., 60 qubits can process or is equivalent to? What about memory sizes and speeds in attainable quantum computers versus current classical memories and clock speeds (GHz)?
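There is no single conversion factor, but one common comparison is the classical cost of merely storing an $n$-qubit state: the state vector has $2^n$ complex amplitudes. A quick sketch (assuming double-precision complex numbers at 16 bytes each, which is an assumption of the example):

```python
# Classical memory needed to store the full state vector of n qubits:
# 2**n complex amplitudes, here assumed to be 16-byte complex doubles.
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 60):
    print(f"{n} qubits -> {state_vector_bytes(n):.3e} bytes")
# 60 qubits already needs ~1.8e19 bytes (~18 exabytes) just to store.
```

Note the asymmetry: measuring $n$ qubits still yields only $n$ classical bits, so the exponential count above is about simulation cost, not extractable information.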




general relativity - Time Reversal in a Black Hole


I had a lively discussion with a person about black holes recently, and was making the point about gravitational acceleration in GR being paralleled by speed in SR. One thing that I know people talk about with special relativity is that if you could reach a speed faster than the speed of light then time would reverse (I don't get this, as the Lorentz factor gives an imaginary number, and who knows what that means). This is impractical for things with mass, due to mass increasing asymptotically with velocity. However, since the event horizon of a black hole behaves exactly like an asymptotic barrier to reaching the speed of light (we'll say for a stationary non-charged black hole to keep things simple), shouldn't the effects translate to a black hole? I.e., once you're inside a black hole (i.e., the mass that was there when the black hole formed), wouldn't the mass be essentially traveling backwards in time, faster the further it was from the event horizon?



For clarity's sake: for matter, approaching the speed of light and approaching the event horizon of a black hole are essentially the same with respect to redshift/blueshift and space and time dilation. Therefore, approaching the event horizon from the inside of a black hole would be like decelerating toward the speed of light (from $v>c$). So matter in a black hole should be falling outward toward the event horizon.


All of this I am of course imagining from the reference frame of a distant observer.


Just curious if I'm way off base on this.




thermodynamics - How to define heat and work?


In textbooks, heat is usually defined as the energy transfer due to temperature difference.


However, we don't know what temperature is in the first place. I think it's better to define heat first and then define temperature through $1/T$, the integrating factor such that $$\oint\frac{\delta Q}{T}=0$$ and hence define entropy as the potential $$\Delta S=\int\frac{\delta Q}{T}$$


So my problem is I cannot use temperature or entropy when defining heat.


So what is the definition of heat?



I think my problem can be solved if one can define either heat or work without mentioning temperature and entropy. For instance, if we can somehow define work, then heat can be defined as the energy change not in the form of work. And vice versa.


So in summary, my question is how to define work or heat without mentioning temperature and entropy in thermodynamics (without referring to statistical mechanics)?




thermodynamics - Which is more effective for chilling a metal casting: a thick copper die or a water-cooled thin copper mold?



It is well known that increasing the cooling rate of a metal casting leads to a finer microstructure, which means higher mechanical properties. Suppose I have two molds: one is a thick copper die, and the second is just a thin-walled copper cup or mold cooled by water. Which one is expected to cool the molten metal (e.g., aluminium) faster?


By calculation it seems that the copper die has higher cooling power, as it has higher thermal diffusivity, but in real life it seems water has more cooling power. I am so confused.
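The diffusivity comparison in the question can be made concrete (a sketch; the room-temperature property values below are approximate handbook numbers, used here as assumptions):

```python
# Thermal diffusivity alpha = k / (rho * c_p).
# Approximate room-temperature values (assumed, order-of-magnitude):
materials = {
    #           k (W/m K)  rho (kg/m^3)  c_p (J/kg K)
    "copper": (400.0,      8960.0,       385.0),
    "water":  (0.6,        1000.0,       4186.0),
}

def diffusivity(k, rho, cp):
    return k / (rho * cp)

for name, (k, rho, cp) in materials.items():
    print(f"{name}: alpha = {diffusivity(k, rho, cp):.2e} m^2/s")
```

Copper's diffusivity comes out near $10^{-4}\ \mathrm{m^2/s}$ versus roughly $10^{-7}\ \mathrm{m^2/s}$ for water, which supports the "copper die" side of the calculation; the practical advantage of water lies elsewhere, in convective removal of heat so the mold wall never saturates.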




Friday, 30 March 2018

quantum mechanics - Energy is actually the momentum in the direction of time?



By comparatively examining the operators



a student concludes that `Energy is actually the momentum in the direction of time.' Is this student right? Could he be wrong?



Answer



The student is right in that energy is the analog of momentum in the "time direction" but I wouldn't go so far as to call it "momentum in the direction of time".


It's analogous in two ways that I can think of off the top of my head:



  1. It is the time-component of the 4-dimensional energy-momentum vector in special relativity.

  2. Noether's theorem relates a symmetry in the laws of physics with respect to a coordinate to a conserved quantity. Momentum is conserved because the laws of physics are invariant with respect to translations in space, and energy is conserved because the laws of physics are unchanging in time.
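The first point can be made explicit: in special relativity, energy sits in the time slot of the four-momentum, whose invariant norm is the mass (units with $c=1$):

```latex
p^{\mu} = (E,\; p_x,\; p_y,\; p_z),
\qquad
p^{\mu} p_{\mu} = E^{2} - |\vec{p}\,|^{2} = m^{2}.
```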



I wouldn't call it "momentum in the direction of time" because that phrase, at least to me, implies that energy is more momentum-like than it really is.


quantum mechanics - How do probabilities emerge in the many-worlds interpretation?


My understanding is that at each quantized unit of time at which a split occurs, every possible recombination of particles occurs in the 'objective' universe. If this is the case, what relevance do probabilities hold for the behavior of the objective universe, and why do we observe these probabilities in the subjective universe?


I'm way beyond my educational depth here to understand the technical explanations available, and since I couldn't find a non-technical explanation online to help with developing some intuitive grasp, I was hoping someone here might be able to provide an explanation of this question.


It seems like if, at every moment, every recombination occurs at each quantized branch split in the objective universe, then probability would be meaningless to the objective unfolding of the universe, as every possible combination should occur exactly once, right? Why, then, would distinct probability models hold from one quantized moment to the next?



For example, why is it any more likely that from one moment to the next my computer continues to exist and I can type this post, rather than my computer turning into a purple elephant and my body being transported to Mars, then to the Andromeda galaxy, then to Bangladesh, then split into a billion pieces and reformed as another creature, etc., if all of these possibilities have already unfolded in the objective universe?


If all possible universes occur and are equally likely, how could probability emerge in a single branch? If they are not objectively all equally likely and probability does apply, how many times does the most likely possibility outcome occur relative to the least likely but quantifiable possibility in a single split?


Put another way: how does probability dictate subjective reality so apparently if it is non-existent in a many-worlds interpretation of quantum mechanics that fulfills every possible particle combination of the universe? Alternatively, what am I failing to understand?


'How do probabilities emerge within many-worlds?' @ http://www.hedweb.com/manworld.htm#probabilities is unfortunately beyond me at this time, but perhaps holds the answer.




cosmology - The first $10^{-35}$ seconds



I am a rank amateur, so please forgive me if the answer to this is well-known.


The following quote came in a weekly update for an EdX course I am following in astrophysics:



"And what a week it's been with the recent discovery of primordial gravitational waves from the epoch of inflation! This one observation has pushed back the earliest direct measurements of the Universe from about 1 second (the epoch of nucleosynthesis) to $10^{-35}$ seconds (the epoch of inflation)."




I would appreciate any comments as to what it will take to close the gap.


EDIT: Well, initially I asked this question with a few specific alternatives. Perhaps - respectfully and with the moderators' approval - I should revert to that (as best as I can remember), in that any one of them could be answered with a "yes" or "no" and a few qualifying, suggestive sentences.


-- Would a viable GUT do the job?


-- Are there currently specifically directed experiments whose intended results will answer the question?


-- Can the gap be closed by correct extrapolation of known experimental and theoretical results?


If this is yet deemed unacceptable, I understand, and please feel free to restrict my question as deemed appropriate.



Answer



There exists a huge gap in the strength of the four forces that we have observed in nature between gravity and the other three:


strong



electromagnetic


weak


gravity


In the following image we see that the radiation decouples from the "soup" at energy densities of about 0.25 eV. That is the snapshot of the CMB, the Cosmic Microwave Background radiation.


BB evolution


CBR in this plot is the CMB; it is our snapshot of what happened up to that time.


The gap you are talking about is covered by known physics up to $10^{-11}$ seconds. The unification of the three stronger forces takes us to the $10^{-35}$ seconds of your question. A unified theory of the three forces with gravity would fill the in-between times up to $10^{-43}$ seconds, a huge gap due to the smallness of the gravitational constant. Speculations are listed in the image, and from what I have read, axions are gaining points after the discovery of the imprint of gravitational waves on the CMB.


A lot of observational and theoretical research is still to be done before any standard model for the universe can be proposed with some confidence.


gravity - What is the Current Status of Measurement of the Gravitational Mass of Antimatter?


My current understanding is that it's generally expected (and has been predicted) that antimatter will fall down and not up in earth's gravity. But I haven't been able to locate any definitive experimental results, much less independent verification.


What is the current status of reported experimental results, and of ongoing experiments? Is there one particular aspect of the measurement that is currently the limiting factor? Also, which is most likely to give unambiguous results: antiprotons, atomic antihydrogen, or molecular antihydrogen?



edit: Based on the comment, I've looked at these questions:


Has the gravitational interaction of antimatter ever been examined experimentally?,


Do particles and anti-particles attract each other?, and


Why would Antimatter behave differently via Gravity?,


saw this


Description and first application of a new technique to measure the gravitational mass of antihydrogen, and this


The GBAR antimatter gravity experiment paper,


and in this table found websites for the AEGIS experiment and the GBAR experiment as well as this post and video about ALPHA-2.


But I'm a bit overwhelmed by all of this. I get the feeling that there is great interest, but no conclusive measurement of even the sign of the gravitational mass of (atomic?) antihydrogen, much less any independent verification, but I am not sure I'm interpreting this correctly.


The Nature paper is dated January 2013 and includes this figure - the red circles are data (measured decays), while the green dots and both black lines are simulations. Hence my question in March 2016: "What is the Current Status of Measurement of the Gravitational Mass of Antimatter?"



enter image description here


Figure 2 from: "Description and first application of a new technique to measure the gravitational mass of antihydrogen" The ALPHA Collaboration & A. E. Charman, Nature Communications 4, Article number: 1785 doi:10.1038/ncomms2787



Answer



The current status of measurement can be found in the list of publications at the document server at CERN by requesting "antihydrogen gravitational mass" in the search. There are proposals with different methods but no announcement of a measurement.


The AEGIS experiment is in no position to give any measurement yet, so the status is still undefined. Here is their status report for 2014, published in 2015. One has to wait for the 2015 report.


The ALPHA results you show are the most recent announcement on the matter from them, dated 2013.


From their site:



Today (30 April 2013), the ALPHA Collaboration has published results in Nature Communications placing the first experimental limits on the ratio of the gravitational and inertial masses of antihydrogen (the ratio is very close to one for hydrogen). We observed the times and positions at which 434 trapped antihydrogen atoms escaped our magnetic trap, and searched for the influence of a gravitational force. Based on our data, we can exclude the possibility that the gravitational mass of antihydrogen is more than 110 times its inertial mass, or that it falls upwards with a gravitational mass more than 65 times its inertial mass.


Our results far from settle the question of antimatter gravity. But they open the way towards higher-precision measurements in the future, using the same technique, but more, and colder, trapped antihydrogen atoms, and a better understanding of the systematic effects in our apparatus.




Note the number of antihydrogen used is 434.


From the dates of their last publications (2014), it seems they must be waiting for data, to be given antiproton beams, or working on recent runs.


You have to keep in mind that experiments with accelerators take years and decades (the Higgs experiments were being designed at the end of the '90s). Patience.


conservation laws - Diffracted electron - where does it gain additional momentum?


When an electron is diffracted, the momentum after the diffraction has different direction than before.


enter image description here


Where does the electron gain this momentum?


This is related to this question, but it's different enough to be posted separately: Does the diffracted electron radiate photons?



Answer




It all depends on the size of the slit with respect to the energy of the electron.


If the electron has very high energy and the slit is large, one can talk of momentum conservation in a classical way, as: "if a ball changes direction hitting an edge, where does it get the momentum?" It gets it by hitting the edge and transferring a bit of its energy to the bulk of the edge.


The electron, though, is a quantum mechanical entity, and with a slit size commensurate with the magnitude of $\hbar$, the Heisenberg uncertainty principle allows an indeterminacy in the momentum if the position is known: $\Delta x \times \Delta p \gt \hbar$.
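As a rough numerical illustration of that estimate (the 1 nm slit width is an assumption for the example):

```python
# Order-of-magnitude transverse momentum spread from confining an
# electron's position to a slit of width dx: dp ~ hbar / dx.
HBAR = 1.054571817e-34  # J s

def momentum_spread(dx):
    return HBAR / dx

dx = 1e-9                       # assumed 1 nm slit
dp = momentum_spread(dx)
print(f"dp ~ {dp:.2e} kg m/s")  # ~1e-25 kg m/s transverse kick
```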


Alternatively, one can think of solving the quantum mechanical problem "electron of momentum $p$ + slit" and see that the solution is a probability distribution that gives a probability for the electron to come out at an angle without violating classical conservation laws.


electromagnetism - Is there energy stored when iron is magnetized?


When a piece of iron is magnetized and the domains are aligned, is there energy stored? If so, how much? If there is an attraction between that same iron and the source of the exterior magnetic field, work is done and energy is transferred. Is that energy equal to the energy stored in the alignment of the domains?


And how much energy is stored or needed to align the domains?




Thursday, 29 March 2018

Newton's third law and General relativity


Is Newton's third law valid in general relativity?


In the notation of Newton's second law, the force exerted by body 2 on body 1 is $$F_{12}$$ and the force exerted by body 1 on body 2 is $$F_{21}$$


According to Newton's third law, the force that body 2 exerts on body 1 is equal and opposite to the force that body 1 exerts on body 2: $$F_{12}=-F_{21}$$



Answer



First, let's note that Newton's third law is really equivalent to conservation of momentum. Take the example of object 1 exerting a force on object 2, and vice versa, with these two forces being the only forces in the universe:



$$\begin{align} F_{12} &= -F_{21}\\ m_{2}a_{2} &= -m_{1}a_{1}\\ \int m_{2}a_{2} dt &= -\int m_{1}a_{1} dt\\ m_{2}v_{2f}-m_{2}v_{2i} &= m_{1}v_{1i}-m_{1}v_{1f}\\ m_{1}v_{1f} + m_{2}v_{2f} &= m_{1}v_{1i} + m_{2}v_{2i}\\ \sum p_{f} &= \sum p_{i} \end{align}$$
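This chain of equalities can be checked numerically; the following is a sketch with a made-up spring-like interaction and arbitrary masses:

```python
# Two bodies exerting equal-and-opposite forces on each other
# (Newton's third law); total momentum should stay constant.
def simulate(m1=2.0, m2=5.0, v1=1.0, v2=-0.5, dt=1e-3, steps=10_000):
    x1, x2 = 0.0, 1.0
    for _ in range(steps):
        f12 = -3.0 * (x1 - x2)   # toy spring force on body 1
        f21 = -f12               # third law: reaction on body 2
        v1 += f12 / m1 * dt
        v2 += f21 / m2 * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return m1 * v1 + m2 * v2

p_initial = 2.0 * 1.0 + 5.0 * (-0.5)   # = -0.5
p_final = simulate()
print(p_initial, p_final)               # equal up to round-off
```

Because $f_{21} = -f_{12}$ at every step, the per-step momentum changes cancel exactly, regardless of the (arbitrary) force law chosen.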


Now, we know that we are looking for conservation of momentum, rather than just Newton's third law (and conservation of momentum is a more general concept anyway--Newton's third law comes up false in a variety of electromagnetic applications, but conservation of momentum still holds). How do we get conservation of momentum? Well, the motion of a particle can be found by extremizing the action, the time integral of something known as the Lagrangian:


$$L = KE - PE$$


It turns out that there is a result called Noether's theorem, which says that if the Lagrangian doesn't change when you modify your variables in a certain way, then the dynamics defined by that Lagrangian will necessarily have a conserved quantity associated with that transformation. It turns out that conservation of momentum arises when the invariance is a translation of the coordinates: $x^{a^\prime} = x^{a} + \delta^{a}$. Now, let's go back to general relativity. Here, the motion of a particle is the one that extremizes the length of its worldline:


$$\int ds = \int \sqrt{g_{ab}\,\dot x^{a}\,\dot x^{b}}\,d\lambda$$


If the metric tensor $g_{ab}$ has a translation invariance, this motion will necessarily have a conserved momentum associated with it, and will not otherwise. Note: common solutions, like the Schwarzschild solution of GR are NOT translation invariant--that's because the model assumes that the central black hole does not move. A more general solution that included the back-reaction of the test particle's motion WOULD have a conserved momentum (and would end with a moving black hole after some orbiting was completed).


electromagnetism - Using Ampere's circuital law for an infinitely long wire & wire of given length


According to Ampere's Ciruital Law:


enter image description here


Now consider two straight wires, each carrying current $I$: one of infinite length and another of finite length $l$. Suppose you need to find the magnetic field due to each at a point $X$ whose perpendicular distance from the wire is $d$.



You get the magnetic field as $\frac{\mu I}{2 \pi d}$, the same for both.


But,


Magnetic field due to infinitely long wire is : $\frac{\mu I}{2 \pi d}$


Magnetic field due to a wire of finite length $l$: $\frac{\mu I (\sin(P)+\sin(Q)) }{4 \pi d}$, where $P$ and $Q$ are the angles subtended at the point by the ends of the wire.


Why do we get the wrong value when using Ampere's circuital law?



Answer



There are two things to notice here.



  1. You can only make the assignment $\oint \mathbf{B} \cdot dl = 2 \pi d B(d)$ if the situation is radially symmetric.

  2. In the case of a finite wire you either have charge building up at the ends or you have not specified the whole current distribution yet. Question: can you specify a radially symmetric return path, and if so, do you expect it to make up the difference?
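The mismatch can also be seen numerically: the Biot-Savart result for a finite wire only approaches the Ampere-law value when the wire is much longer than $d$. A sketch (the point is taken on the perpendicular bisector so $P = Q$; the current and distances are arbitrary example values):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T m / A

def b_infinite(i, d):
    # Ampere's law result for an infinite straight wire
    return MU0 * i / (2.0 * math.pi * d)

def b_finite(i, d, length):
    # Biot-Savart result on the perpendicular bisector of a finite wire:
    # B = mu0 I (sin P + sin Q) / (4 pi d), with P = Q here.
    s = math.sin(math.atan((length / 2.0) / d))
    return MU0 * i * 2.0 * s / (4.0 * math.pi * d)

i, d = 1.0, 0.1
print(b_finite(i, d, 0.2))    # short wire: well below the Ampere value
print(b_finite(i, d, 100.0))  # long wire: approaches b_infinite
print(b_infinite(i, d))
```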



How can gravity give energy for an unlimited time?


F = gravitational force between the Earth and the Moon,
G = universal gravitational constant = 6.67 x 10^(-11) N m^2/kg^2,
m = mass of the Moon = 7.36 × 10^(22) kg,

M = mass of the Earth = 5.9742 × 10^(24) kg, and
r = distance between the Earth and the Moon = 384,402 km = 3.84402 × 10^(8) m

F = 6.67 x 10^(-11) × 7.36 × 10^(22) × 5.9742 × 10^(24) / (3.84402 × 10^(8))^2
F ≈ 1.985 x 10^(20) N

All of the above I took from http://in.answers.yahoo.com/question/index?qid=20071109190734AATk6NV (which left r in kilometres and so quoted 1.985 x 10^(26) N; with r in metres the force is about 1.985 x 10^(20) N).
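A quick SI-units check of the quoted figure (note that $r$ must be converted to metres; with kilometres left in, the result comes out a factor $10^{6}$ too large):

```python
# Newton's law of gravitation, F = G M m / r^2, in SI units.
G = 6.674e-11        # N m^2 / kg^2
m_moon = 7.36e22     # kg
m_earth = 5.9742e24  # kg
r = 3.84402e8        # m  (384,402 km converted to metres)

F = G * m_earth * m_moon / r**2
print(f"F = {F:.3e} N")  # about 2e20 N, not 1e26 N
```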


Age of the Moon: 4.53 billion years (from Google).


So the Earth has been exerting a force of about 1.985 x 10^(20) N for 4.53 billion years. I do not know how to calculate the total energy spent so far (please, someone do me the favor) from force and time, but it must be huge. If E = mc^2, then at least some of the mass of the Earth must have disappeared in producing this energy, or where did the energy come from?


Someone has answered somewhere that if we place a ball above the Earth, then during that time we did the work, and when it fell down to the Earth that work changed into kinetic energy. OK, accepted. But the Moon has been pulled toward the Earth for an unlimited time; where does the Earth get the energy to do so?



@Emilio Pisanty Thanks for your reply.


"it needs to move the object in the direction that the force acts in"


Even though the force applied by the Earth is perpendicular to the direction of motion, the Moon's direction is definitely changed because of that force, and the Moon moves a little toward the Earth relative to a straight line; otherwise its motion would be a straight line. If the Earth did no work on the Moon, why would the Moon not go in a straight line, escaping the Earth's gravity? Even if we accept that the Earth does no work and spends no energy in making the Moon orbit around it: by Newton's law, any body that accelerates needs a force, and since the Moon keeps accelerating all the time (changing its direction of motion), where does the Moon get this force continuously for such a long time?




thermodynamics - Fan Speed Formula


Is there any formula for computing fan speed from air mass, air flux, air density, or specific heat? I have computed the air mass and air flux, and found values for the air density and specific heat, but now I am stuck at finding a correlation between these and fan speed.



Answer



From an engineering perspective, there are many different fan designs including axial and centrifugal configurations, along with various blade designs including forward curved, backward curved, and radial. There is no formula to calculate the required fan speed, but if a specific fan configuration is known then fan similarity laws can be used to calculate performance based on known performance of a similar fan. For example, for a given fan design the flow varies linearly with speed, static pressure varies with the square of the speed, and power consumption varies with the cube of the speed. Similarly, at the same speed flow varies with the cube of the impeller diameter, and static pressure varies as the square of the diameter.
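The fan similarity (affinity) laws mentioned above can be sketched as simple scaling relations (the baseline performance figures below are illustrative assumptions, not data for any real fan):

```python
# Fan affinity laws for a fixed fan design:
#   flow            ~ speed ratio
#   static pressure ~ (speed ratio)^2
#   power           ~ (speed ratio)^3
def scale_with_speed(flow, pressure, power, n_old, n_new):
    r = n_new / n_old
    return flow * r, pressure * r**2, power * r**3

# Known performance at 1000 rpm (assumed example values):
q, p, w = scale_with_speed(flow=2.0, pressure=250.0, power=0.8,
                           n_old=1000.0, n_new=1500.0)
print(q, p, w)  # 3.0 m^3/s, 562.5 Pa, 2.7 kW
```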


These formulas are described here for example.



fluid dynamics - Why textbooks use geometric center/centerline of the pipe when calculating/measuring pressure?


Studying Fluid Mechanics, I started to notice that almost every textbook/website uses a specific point to make calculations about the pressure in a liquid at a given depth (hydrostatic pressure): the geometric center (as shown in the images below), when presenting pressure gauges/manometers/piezometers.


Note: This happens regardless of the field to which the book is directed (I looked in textbooks of Fluid Mechanics for Civil, for Electrical, for Mechanical...).


Pressure gauges/Manometers/Piezometers. Sources: Introduction to Fluid Mechanics - Nakayama & Boucher; Mecânica dos Fluidos - Noções e Aplicações - Sylvio R. Bistafa; Chegg





One of the textbooks I looked at even draws attention to this fact, but it doesn't explain the reason for the choice:



Note the origin of the measurement of h, in the center of the tube


                                
Source: Mecânica dos Fluidos - Franco Brunetti



A similar behavior can be identified when textbooks present liquids in motion: they use the centerline of the pipe to make calculations/measurements. Here's an example:


                                    
Source: Fluid Mechanics for Civil Engineers - N.B. Webber





So why is the choice of geometric center/centerline of the pipe so common when measuring/calculating pressure? Some hypotheses:



  • Maybe all the textbooks/websites are unconsciously copying each other?

  • Maybe is this some kind of "convention"?




Note to the off-topic warning: "Questions about the physical reasoning and analysis that lead to design decisions are on topic". That's the core of my question: what is the physical reason for choosing the geometric center in fluid mechanics books. In other words, what is the physical reasoning that leads to the design decision commonly adopted by almost every textbook of choosing the geometric center/centerline of the pipe when doing pressure-related calculations/measurements.



Answer



I'll divide my answer in two cases. First, I'll talk about liquids in motion (assuming incompressible flow). Then, I'll talk about liquids at rest.





Liquid Flow:


Reading the comments of this YouTube video about piezometers made by Donald Elger, I found the answer for this case:



Why is it [the pressure measurement with piezometer] taken from the middle of the pipe?


Elger's answer: The pressure variation across a section of a pipe is hydrostatic; thus, the pressure will vary linearly with radius, and the pressure at the center of the pipe is the average pressure. If you use this value of pressure in your calculations, this will give you the most accurate results. Thus, engineers nearly always apply or measure the pressure at the center of the pipe.



The question that came to me as soon as I read this was: "Why using average pressure in calculations gives the most accurate results?".


(Note: I recommend reading my answer to this question before proceeding)


Briefly, in general, the average pressure gives the most accurate results if used in calculations because there are many applications/cases in which the locations with $P=P_{average}$ are the best places for experimental data collection.



In the case of a pipe, this location is its centerline. So, I believe that this is why textbooks generally choose this location in case of liquids in motion: the centerline is associated with $P_{average}$ that, in its turn, is associated with the best places for experimental data collection for many applications.
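Elger's claim that the centerline pressure equals the section-average pressure can be checked numerically. A sketch (the fluid properties, pipe radius, and centerline pressure are assumed example values; the pressure varies hydrostatically, i.e. linearly with depth, across the section):

```python
import math

# Average the pressure P(y) = P_center - rho * g * y over a circular
# pipe cross-section (y measured upward from the centerline).
RHO, G_ACC = 1000.0, 9.81      # water, SI units (assumed)
P_CENTER = 200_000.0           # Pa, assumed centerline pressure
R = 0.05                       # m, pipe radius

n = 2000
total = 0.0
area = 0.0
for k in range(n):
    y = -R + (k + 0.5) * (2 * R / n)        # strip midpoint height
    width = 2.0 * math.sqrt(R * R - y * y)  # strip width (chord length)
    total += (P_CENTER - RHO * G_ACC * y) * width
    area += width

p_avg = total / area
print(p_avg, P_CENTER)  # equal: the linear term averages to zero
```

Because the section is symmetric about the centerline and the pressure varies linearly, the deviations above and below cancel exactly, leaving the centerline value as the mean.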




Liquids at rest:


For this case, firstly I would like to quote part of the answer written by David White to my question "Where is the right place to put the pressure gauge to measure the pressure of a tank?":



The location depends on why you are measuring the pressure. There will be a process reason for the pressure measurement, and that will determine the location of the pressure measuring device.



When textbooks present pressure gauges/manometers/piezometers for the first time, the presentation is usually "application neutral" (i.e., there's no process reason); the diagrams/sketches/figures serve only to illustrate the concepts/formulas. Therefore there are no best points, as in the liquid-flow case, for two reasons:



  • There is no process reason that determines the location of measurement;


  • Since the liquid is at rest, there are no points that lead to most accurate results, they all provide the same accuracy.


But the authors need to choose a point to do the pressure-related calculations...


After everything I've researched, my hypothesis is that the "point choice" of hydrostatics was imported from hydrodynamics. So, instead of choosing a random point for pressure-related calculations, the authors choose one that at least has importance/meaning for other areas of fluid mechanics.


quantum field theory - Bound State of Only Massless Particles? Follows a Time-Like Trajectory?


Is there any way in which a bound state could consist only of massless particles? If yes, would this "atom" of massless particles travel on a light-like trajectory, or would the interaction energy cause it to travel on a time-like trajectory?




Answer



John Rennie has answered the first part of the question. The second part was this:



If yes, would this "atom" of massless particles travel on a light-like trajectory, or would the interaction energy cause it to travel on a time-like trajectory?



The answer is that it would have a timelike world-line, and this is independent of any (probably uncertain) details of the system's dynamics or binding energy.


Mass is not additive. Mass is defined (in units with $c=1$) by $m^2=E^2-p^2$, where $E$ is the mass-energy and $p$ is the momentum. $(E,p)$ is the momentum four-vector, and the squared mass is its squared norm. For a massless particle, the momentum four-vector is lightlike. If four-vectors $p$ and $p'$ are both lightlike and future-directed, but not parallel, then $p+p'$ is timelike. Therefore a system of interacting, massless particles is guaranteed to have a nonzero mass.
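A minimal numerical illustration of this non-additivity (units with $c=1$; two back-to-back photons, each individually massless):

```python
# Invariant mass squared of a system: m^2 = E^2 - |p|^2  (c = 1).
def mass_squared(four_vectors):
    E = sum(v[0] for v in four_vectors)
    px = sum(v[1] for v in four_vectors)
    py = sum(v[2] for v in four_vectors)
    pz = sum(v[3] for v in four_vectors)
    return E**2 - (px**2 + py**2 + pz**2)

photon1 = (1.0,  1.0, 0.0, 0.0)   # massless: E = |p|
photon2 = (1.0, -1.0, 0.0, 0.0)   # massless, opposite direction

print(mass_squared([photon1]))            # 0.0: single photon is massless
print(mass_squared([photon1, photon2]))   # 4.0: the pair has mass 2
```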


electromagnetic radiation - Does an atom recoil when photon radiate?


Consider an atom in an excited state radiating a photon and going to a lower energy state. Photons have a certain angular momentum, but the momentum itself is not defined. In this case, will the atom recoil due to the emitted photon?



Answer



The momentum of a photon is not only defined, but is defined very well by the famous Einstein equation:


$$ E = \sqrt {(mc^2)^2 + (pc)^2 } $$


which for massless photons leads to


$$p=\frac Ec =\frac h{\lambda}$$


Therefore, atoms recoil when emitting photons.
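For a sense of scale, here is the recoil velocity in a hypothetical example (a sodium atom emitting a 589 nm photon; the wavelength and atomic mass are standard handbook values used as assumptions):

```python
# Recoil velocity of an atom emitting a photon: v = p/m = h/(lambda m).
H = 6.62607015e-34     # Planck constant, J s
wavelength = 589e-9    # m, sodium D line (assumed example)
m_atom = 3.82e-26      # kg, mass of a sodium-23 atom (assumed)

p_photon = H / wavelength
v_recoil = p_photon / m_atom
print(f"recoil velocity ~ {v_recoil:.3f} m/s")  # about 3 cm/s
```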


The opposite phenomenon, an atom recoiling during photon absorption, is used in laser cooling near $0 \mathrm{K}$.



The laser frequency is set just below a chosen atomic absorption line. Due to the Doppler effect, absorption occurs only for those atoms with a particular velocity component toward the laser.


Absorption of a photon and its momentum decreases this velocity component, which means decreasing the atom's kinetic energy. On a large scale, this leads to a decrease of temperature.


Effectively, the thermal energy is spent by being added to the otherwise insufficient energy of the photons. If the gained energy is released by emission of another photon, that photon has on average the nominal absorption-line energy, with a negative net energy outcome for the atom.




As @dmckee has noted, a recoilless scenario can be achieved in solid matrices, if the momentum is distributed over the whole solid matrix.


The Mössbauer effect (mentioned in the laser cooling page):



The Mössbauer effect, or recoilless nuclear resonance fluorescence, is a physical phenomenon discovered by Rudolf Mössbauer in 1958. It involves the resonant and recoil-free emission and absorption of gamma radiation by atomic nuclei bound in a solid. Its main application is in Mössbauer spectroscopy.



Wednesday, 28 March 2018

Higgs mechanism and neutral fields


Consider a Lagrangian $L(\phi,A_{\mu})$ with $\phi$ being some scalar field and $A_{\mu}$ some dynamical U(1) gauge field that minimally couples to $\phi$. Under a global U(1) symmetry the field $\phi$ transforms as $$ \delta\phi=i\epsilon q \phi. $$ The field $\phi$ is said to be charged (with charge q) under the gauge field $A$.


In a Higgs phase we have that $|\phi(x,t)|\neq 0$. In particular we can fix a gauge so that $|\phi(x,t)|=\Phi(x,t)$ is real. Then we consider small fluctuations $\Phi(x,t)=\Phi_{0}+\delta\Phi$ and integrate them out to obtain an effective theory in which the gauge field A is massive.


My question: It seems to me as though the requirement that $\phi$ is charged enters when integrating out the small fluctuations, because if $\phi$ were neutral (i.e. $q=0$) there wouldn't be any fluctuations that one could integrate out, and hence one wouldn't obtain a mass term for the gauge field in the Lagrangian. Is this correct? If not, where does the requirement for $\phi$ to be charged enter the argument? And: does the requirement for the matter field to be charged under the corresponding gauge field carry over without difficulties to the non-abelian case?


I am looking forward to your responses!




Pressure in a fluid


If a fluid is flowing along a vertical line with a constant velocity, will the pressure at every point be the same and irrespective of height?



Answer



The pressure must be different because the fluid is in equilibrium (moving at a constant velocity), but the force of gravity is acting on it downwards. This can only be balanced by a pressure difference:


Cube equilibrium


Resolving forces vertically for a cube of fluid with cross-sectional area $A$ and height $\Delta z$: $$(p + \Delta p)A = pA + \rho g A \, \Delta z $$


The areas cancel, and so do the main pressure terms, leaving:


$$\Delta p = \rho g \Delta z$$



which is the hydrostatic pressure law.
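As a quick sanity check of $\Delta p = \rho g \Delta z$, with illustrative values for water:

```python
# Hydrostatic pressure sketch: dp = rho * g * dz for a 1 m column of water
# (illustrative values, not from the answer above).
rho = 1000.0  # density of water, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2
dz = 1.0      # height difference, m

dp = rho * g * dz
print(f"{dp:.0f} Pa")  # 9810 Pa: about a tenth of an atmosphere per metre
```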


Schrödinger's equation, time reversal, negative energy and antimatter


You know how there are no antiparticles for the Schrödinger equation? I've been pushing the equation around and have found a solution that seems to indicate there are. I've probably missed something obvious, so please read on and tell me the error of my ways...


Schrödinger's equation from the Princeton Guide to Advanced Physics, p. 200; write $\hbar = 1$, then for a free particle


$$i \psi \frac{\partial T}{\partial t} = -\frac{1}{2m}\frac{\partial ^2\psi }{\partial x^2}T$$


rearrange


$$i \frac{1}{T} \frac{\partial T}{\partial t} = \frac{i^2}{2m}\frac{1}{\psi }\frac{\partial ^2\psi }{\partial x^2}$$


this is true iff both sides equal a constant $\alpha$


it can be shown there is a general solution (1)



$$\psi (x,t) \text{:=} \psi (x) e^{-i E t}$$


But if I break time into two sets, past $-t$ and future $+t$, and allow energy to have only negative values for $-t$ and positive values for $+t$, then the above general solution can be written as (2)


$$\psi (x,t) \text{:=} \psi (x) e^{-i (-E) (-t)}$$


and it can be seen that (2) is the same as (1), diagrammatically


energy time diagram


And now if I describe time as monotonically decreasing for $t < 0$, it appears as if matter (read: antimatter) is moving backwards in time. It's as if matter and antimatter are created at time zero (read: the rest frame), which matches an interpretation of the Dirac equation.


This violates Hamilton's principle that energy can never be negative; however, I think I can get around that by suggesting we never see the negative states, only the consequences of antimatter scattering light, which moves forward in time to our frame of reference.


In other words the information from the four-vector of the antiparticle is rotated to our frame of reference.


Now I've never seen this before, so I'm guessing I've missed something obvious - many apologies in advance, I'm not trying to prove something just confused.




fluid dynamics - Could a fish in a sealed ball, move the ball?


If you had a glass ball filled with water, completely sealed and containing a fish, could the fish move the ball?



Answer



Yes, with gravity and a generous definition of "moving". It would work on the same principle as the toys where you control a sphere by radio (or with your iPhone). The fish swims along the edge and gravity pulls it back down, which starts a rotation of the water and, by friction with the sphere, starts the sphere rolling on the ground or another surface. Obviously the water/sphere friction will probably be minuscule, but at least it is possible in theory :)



A follow-up question would of course be whether it's possible to move a hermetically sealed sphere free-floating in vacuum, without gravity or any other appreciable fields intersecting it. If you solve this, I'm pretty sure NASA will want to talk to you (or the fish)!


Tuesday, 27 March 2018

general relativity - Alcubierre Drive - Clarification on relativistic effects


On the Wikipedia article on the Alcubierre drive, it says:



Since the ship is not moving within this bubble, but carried along as the region itself moves, conventional relativistic effects such as time dilation do not apply in the way they would in the case of a ship moving at high velocity through flat spacetime relative to other objects.



And...



Also, this method of travel does not actually involve moving faster than light in a local sense, since a light beam within the bubble would still always move faster than the ship; it is only "faster than light" in the sense that, thanks to the contraction of the space in front of it, the ship could reach its destination faster than a light beam restricted to travelling outside the warp bubble.




I'm confused about the statement "conventional relativistic effects such as time dilation do not apply".


Say Bob lives on Earth, and Jill lives on a planet in Andromeda, and we'll say for the sake of argument that they're stationary. If I were to travel from Bob to Jill using an Alcubierre drive such that the journey would take me, say, 1 week from my reference frame... how long would Jill have to wait from her reference frame? Do the time dilation effects cancel out altogether? Would she only wait 1 week?



Answer



Spacetime can dynamically evolve in a way which apparently violates special relativity. A good example is how galaxies move out with a velocity $v = Hd$, the Hubble rule, where $v = c = Hr_h$ at the de Sitter horizon (approximately) and the redshift is $z = 1$. For $z > 1$ galaxies are frame dragged outwards at a speed greater than light. Similarly, an observer entering a black hole passes through the horizon and proceeds inwards at $v > c$ by the frame dragging of radial Killing vectors.


The Alcubierre warp drive is a little spacetime gadget which compresses distances between points of space in a region ahead of the direction of motion and correspondingly expands the distance between points in a leeward region. If distances between points in a forward region are compressed by a factor of 10, this serves as a "warp factor," which as I remember is $w~=~1~+~\ln(c)$ for a compression $c$, so a compression of 10 is warp factor 3.3. The effect of this compression is to reduce the effective distance traveled in a frame which is comoving with the so-called warp bubble. This compression of space is given by $g_{tt}~=~1~-~vf(r)$.


Of course, as it turns out, this requires exotic matter with $T^{00}~<~0$, which makes it problematic. The universe is also a sort of warp drive, but this is not due to a violation of the weak energy condition $T^{00}~\ge~0$. Inflationary pressure is due to positive energy. The gravity field is due to the quantum vacuum, and this defines an effective stress-energy tensor $T^{ab}$ with components $T^{00}~=~const*\rho$, for $\rho$ the energy density, and $T^{ij}~=~const*pu^iu^j$, with $i$ and $j$ running over spatial coordinates, $u^i$ the velocity and $p$ the pressure density. For de Sitter spacetime the energy density and pressure satisfy an equation of state $p~=~w*\rho$ where $w~=~-1$. So the pressure in effect is what is stretching out space and frame dragging galaxies with it. There is no need for a negative energy density or exotic matter.


Negative energy density or negative mass fields have serious pathologies. Principally, since they are governed by quantum mechanics, the negative eigen-energy states have no lower bound. This means the vacuum for these fields is unstable and would descend to ever lower energy levels, producing a vast amount of quanta or radiation. I don't believe this happens. The Alcubierre warp drive then has a serious discrepancy between local laws of physics and global ones, which is not apparent in the universe or de Sitter spacetime. The Alcubierre warp drive is then important as a gadget, along with wormholes as related things, for understanding how nature prevents closed timelike curves and related processes.


Addendum:


The question was asked about the redshift factor and the cosmological horizon. This requires a bit more than a comment post. On a stationary coordinate region of the de Sitter spacetime $g_{tt}~=~1~-~\Lambda r^2/3$. This metric term is zero for $r~=~\sqrt{3/\Lambda}$, which is the distance to the cosmological horizon.


The red shift factor can be considered as the expansion of a local volume of space, where photons that enter and leave this "box" can be thought of as a standing wave of photons. The expansion factor is then given by the scale factor for the expansion of the box $$ z~=~\frac{a(t)}{a(t_0)}~-~1 $$ The dynamics for the scale factor is given by the FLRW metric $$ \Big(\frac{\dot a}{a}\Big)^2~=~\frac{8\pi G\rho}{3} $$ for $k~=~0$. The left hand side is the Hubble factor, which is constant in space but not time. Writing the $\Lambda g_{ab}~=~8\pi GT_{ab}$ as a vacuum energy and $\rho~=~T_{00}$ we get $$ \Big(\frac{\dot a}{a}\Big)^2~=~H^2~=~\frac{\Lambda}{3} $$ the evolution of the scale factor with time is then $$ a(t)~=~\sqrt{3/\Lambda}e^{\sqrt{\Lambda /3}t}. $$ Hence the ratio is $a(t)/a(t_0)~=~ e^{\sqrt{\Lambda /3}(t-t_0)}$. The expansion is this exponential function, which is Taylor expanded to give to first order the ratio above $$ a(t)/a(t_0)~\simeq~1~+~H(t_0)(t_0-t)~=~1~+~H(t_0)(d-d_0)/c $$ which gives the Hubble rule, $z~=~a(t)/a(t_0)~-~1$. It is clear from the general expression that $a(t)$ can grow to an arbitrarily large value, and so can $z$. On the cosmological horizon for $d~-~d_0~=~r_h~=~\sqrt{3/\Lambda}$ we have $z~=~1$.



Looking beyond the cosmological horizon $r_h~\simeq~10^{10}$ ly is similar to an observer inside a black hole looking out at the exterior world beyond the black hole horizon. People get confused into thinking the cosmological horizon is a black membrane similar to that of a black hole. Anything we observe beyond the horizon we can never send a signal to, just as a person inside a black hole can see the exterior world but can never send a message out.


atomic physics - How can we prove that the shape of atom is spherical?



I am looking for a derivation that can prove that the shape of an atom is spherical. I have proved this statement experimentally, but I need a theoretical way.



Answer



If you want to prove an isolated atom is spherically symmetric, you could proceed by showing that the sum of the probability densities (wave functions squared) of the occupied orbitals is a spherically symmetric distribution.



Certainly s-orbitals are spherically symmetrical.


The sum for an entire subshell of orbitals (such as all three 2p orbitals) is also spherically symmetrical (despite Wikipedia cartoons that may suggest otherwise).
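This can be checked directly for the p subshell: the squared angular parts of the three real p orbitals (written here up to a single common normalization constant, which this sketch omits) sum to the same value in every direction, which is Unsöld's theorem:

```python
import math

# Unsold's theorem sketch for the p subshell: the squared angular parts of
# the three real p orbitals sum to a direction-independent constant
# (normalization constant omitted, so the constant here is 1).
def p_density_sum(theta, phi):
    px = math.sin(theta) * math.cos(phi)   # angular part of p_x
    py = math.sin(theta) * math.sin(phi)   # angular part of p_y
    pz = math.cos(theta)                   # angular part of p_z
    return px**2 + py**2 + pz**2           # = sin^2 + cos^2 = 1 identically

for theta, phi in [(0.1, 0.3), (1.2, 2.7), (2.9, 5.0)]:
    print(p_density_sum(theta, phi))       # ~1.0 for every direction
```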


According to Shapes of Atoms, J. Chem. Educ., 1965, 42 (3), pages 145-146, all isolated atoms with half-filled valence shells or filled valence shells are spherical, while the remainder are not. For example, the article says boron and oxygen are prolate and halogens are oblate.


The article concludes that the non-spherical shapes arise from a particular subset of orbitals being occupied in the valence subshell; however, personally I would think that for an isolated atom all the valence orbitals would be degenerate and the wave function would be a linear combination of all the degenerate orbitals. Hopefully someone will say how my thinking is misguided.




In fact it is the above article that is incorrect, and J. Chem. Educ. published a retraction a few months later (J. Chem. Educ., 1965, 42, pages 397-398) stating "all isolated atoms are spherical" (emphasis in original) and noting that the previous article failed to consider degeneracy of valence orbitals.


general relativity - If a photon has no mass, how can it be attracted by the Sun?


I read that the photon doesn't have mass, but my teacher says that the photon has mass because the Sun can attract it (as in the experiments to confirm the theory of relativity).


I think there is another explanation. How can I explain that the photon doesn't have mass even though the Sun attracts photons?




Monday, 26 March 2018

electrostatics - Does a AAA battery have a dipole moment?


Does a AAA or D battery have an electric dipole moment? Why don't the opposite poles of two batteries attract each other like those of magnets?




astronomy - Where to find the current positions and velocities of the planets?



I've written a program which simulates the motions of planets and other bodies. I'd like to run it on our own solar system, but to do so I need to know the current positions (preferably in heliocentric coordinates) of the planets as well as their current velocities. Is there a website where I can find this?


I've found all the positions of the planets here, and I can find their average orbital speed fairly easily, but for some planets (e.g. Mercury) the orbital speed varies a fair amount.
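For instance, the variation in speed follows from the vis-viva equation $v^2 = GM(2/r - 1/a)$; the orbital elements below are nominal values for Mercury plugged in for illustration:

```python
import math

# Vis-viva sketch: v^2 = GM (2/r - 1/a). Orbital elements for Mercury are
# nominal values (assumptions of this sketch, not authoritative ephemerides).
GM_sun = 1.32712e20   # solar gravitational parameter, m^3/s^2
a = 57.91e9           # semi-major axis, m
e = 0.2056            # eccentricity

def speed(r):
    # orbital speed at heliocentric distance r, in m/s
    return math.sqrt(GM_sun * (2.0 / r - 1.0 / a))

v_peri = speed(a * (1.0 - e))
v_apo = speed(a * (1.0 + e))
print(f"perihelion: {v_peri/1000:.1f} km/s")  # ~59 km/s
print(f"aphelion:   {v_apo/1000:.1f} km/s")   # ~39 km/s
```

So Mercury's speed swings by roughly 20 km/s over one orbit, which is why a single average value is not enough for initializing a simulation.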



Answer



I take it this still works: http://www.physicsforums.com/showthread.php?t=114165.


quantum mechanics - Is the Heisenberg picture of an open-system very different than that of a closed one?


For a closed system the time evolution (in the Heisenberg picture) of an operator $A$ is given by


$$A(t) = U^{\dagger}(t)AU(t)$$


with $U^{\dagger} U = 1\!\!1$, so that for some other operator $C$ we have: $$C(t) = (AB)(t) = A(t)B(t)$$
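This product rule under unitary evolution can be checked numerically; the dimension, seed, and random matrices below are arbitrary choices of this sketch:

```python
import numpy as np

# Sketch: check numerically that (AB)(t) = A(t)B(t) for unitary Heisenberg
# evolution, using a random Hermitian H and random A, B.
rng = np.random.default_rng(0)
d, t = 4, 0.7

M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2                 # Hermitian Hamiltonian

# U = exp(-iHt) built from the eigendecomposition of H
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

A = rng.normal(size=(d, d))
B = rng.normal(size=(d, d))

def heis(X):
    # Heisenberg-picture evolution X(t) = U† X U
    return U.conj().T @ X @ U

# U† (AB) U = U† A (U U†) B U = (U† A U)(U† B U), since U U† = 1
print(np.allclose(heis(A @ B), heis(A) @ heis(B)))  # True
```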


However for an open system, the time evolution of an operator is given by:


$$ A(t) = \sum_{\alpha,\beta} W_{\alpha,\beta}^{\dagger}(t)\,A\,W_{\alpha,\beta}(t) $$


Where the Kraus operators $W$ satisfy $\sum_{\alpha,\beta} W_{\alpha,\beta}^{\dagger}(t) W_{\alpha,\beta}(t) = 1\!\!1$. So that in general it seems that $C(t) = (AB)(t) \neq A(t)B(t)$, since:


$$ C(t) = \sum_{\alpha,\beta} W_{\alpha,\beta}^{\dagger}(t)\,A B\,W_{\alpha,\beta}(t) \neq \sum_{\alpha,\beta,\gamma,\delta} W_{\alpha,\beta}^{\dagger}(t)\,A\,W_{\alpha,\beta}(t) W_{\gamma,\delta}^{\dagger}(t)\,B\,W_{\gamma,\delta}(t) = A(t)B(t) $$



Is there some property of these $W$ operators that I am missing, or is the Heisenberg picture for an open system really so different from that of a closed one? For example, the commutator equality $[x(t),p(t)] = i$ does not seem to hold in general.



Answer



Indeed, for a product operator $\hat{C} = \hat{A}\hat{B}$, it is not true that $\hat{C}(t) = \hat{A}(t) \hat{B}(t)$ for a general (i.e. non-unitary) evolution in the Heisenberg picture. It is instructive to consider the simple example of a harmonic oscillator equilibrating with a thermal bath. This is described by a Lindblad equation $$\dot{\rho} = -i[\omega \hat{a}^\dagger\hat{a},\rho] + \gamma \mathcal{D}[\hat{a}] \rho + \gamma {\rm e}^{-\beta \omega} \mathcal{D}[\hat{a}^\dagger]\rho,$$ where $[\hat{a},\hat{a}^\dagger]=1$, $\omega$ is the oscillator frequency, $\gamma$ is the damping rate, $\beta$ is the inverse temperature and $\mathcal{D}[\hat{L}]\rho = \hat{L}\rho\hat{L}^\dagger - \tfrac{1}{2}\{\hat{L}^\dagger\hat{L},\rho\}$. In the Heisenberg picture, the solution for the ladder operators is $$ \hat{a}(t) = {\rm e}^{-i \omega t - \bar{\gamma}t/2}\hat{a}(0),$$ where $\bar{\gamma} = \gamma(1-{\rm e}^{-\beta \omega})$. This expresses the fact that initial oscillations (which are caused by initial coherences in the energy eigenbasis) should decay to zero in the thermal steady state. Now consider the evolution of $\hat{n} = \hat{a}^\dagger\hat{a}$. If it were true that $\hat{n}(t) = \hat{a}^\dagger(t)\hat{a}(t)$, then we would have $ \hat{n}(t) = {\rm e}^{-\bar{\gamma}t}\hat{n}(0),$ which is obviously wrong since it would mean no excitations at any temperature in the thermal steady state. In fact, we have $$ \hat{n}(t) = {\rm e}^{-\bar{\gamma}t}(\hat{n}(0)-n_\beta) + n_\beta,$$ where $n_\beta = ({\rm e}^{\beta \omega}-1)^{-1}$ is the equilibrium excitation number.
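The relaxation described above can be verified by integrating the Lindblad equation directly in a truncated Fock space; this is a rough forward-Euler sketch in which the parameters ($\omega$, $\gamma$, $\beta$) and the truncation dimension are illustrative choices, not values from the answer:

```python
import numpy as np

# Sketch: integrate the Lindblad equation above by forward Euler in a
# truncated Fock space and check that <n> relaxes to n_beta rather than to
# zero (as n(t) = a†(t)a(t) would wrongly predict).
N, omega, gamma, beta = 20, 1.0, 0.5, 1.0     # illustrative parameters
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)  # annihilation operator
ad = a.conj().T
n_op = ad @ a
H = omega * n_op

def D(L, rho):
    """Dissipator D[L]rho = L rho L† - {L†L, rho}/2."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def rhs(rho):
    return (-1j * (H @ rho - rho @ H)
            + gamma * D(a, rho)
            + gamma * np.exp(-beta * omega) * D(ad, rho))

rho = np.zeros((N, N), dtype=complex)
rho[2, 2] = 1.0                 # start in the Fock state |2>
dt = 1e-3
for _ in range(40000):          # evolve to t = 40, many relaxation times
    rho = rho + dt * rhs(rho)

n_beta = 1.0 / (np.exp(beta * omega) - 1.0)
n_final = np.trace(n_op @ rho).real
print(n_final, n_beta)          # both ~0.58
```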


The point of the example is that the relation $\hat{C}(t) = \hat{A}(t) \hat{B}(t)$ places rigid constraints on the relationship between coherences and populations. This constraint holds true for unitary dynamics, which preserves the purity of states. However, many physically relevant situations involve initial coherence decaying to zero while the associated populations do not, in such a way that the purity decreases. Such non-unitary dynamics therefore cannot obey $\hat{C}(t) = \hat{A}(t) \hat{B}(t)$ in general.


Sunday, 25 March 2018

black holes - What happens to orbits at small radii in general relativity?


I know that (most) elliptic orbits precess due to the math of general relativity, like this:


precession


source: http://en.wikipedia.org/wiki/Two-body_problem_in_general_relativity


I also know that something is different for orbits at radii below a certain value. Wikipedia explains this as follows, I'm confused by this and just want to ask for clarification:




If the particle slips slightly inwards from $r_{inner}$ (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to $r=0$.



What does this mean? If you drew a graph of the path a particle takes in this unstable regime, what would it look like? Why is the stability transition point further than the Schwarzschild radius? Why does this graph show an unstable point beyond the Schwarzschild radius? For elliptical orbits that come very close to the event horizon, is there some kind of orbital decay? How is energy conserved?


In short, do all orbits (with GR effects) look like the precession shown above, or is there another shape that we see if it gets closer to the Schwarzschild radius?



Answer




What does this mean?



It means that there won't be any (periodic) orbit anymore; the answer to your title question is therefore that it will cease to exist. The value of $r$ will just monotonically decrease. Obviously, when it falls below the event horizon, there's no way for the particle to return outside the black hole i.e. to values of $r$ greater than the event horizon's. The particle will end up in the singularity.




Why is the stability transition point further than the Schwarzschild radius?



These two points have different values of $r$ because they are defined by different conditions. The event horizon is the boundary beneath which one cannot escape outside, whatever he does; he may try to use his jets to escape as quickly as he can but it won't be enough to escape if he's beneath the event horizon.


The minimal orbit radius is the minimal value of $r$ beneath which one cannot escape if he is only allowed to freely fall. Clearly, if one doesn't resist, it's easier for the gravitational field to swallow him, so the region from which the singularity is an inevitable fate in this case is larger.
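This threshold can be made quantitative with the standard Schwarzschild effective potential for timelike geodesics, in $G = c = 1$ units; the specific angular momenta tried below are illustrative values:

```python
import math

# Schwarzschild effective potential for timelike geodesics (G = c = 1):
#   V_eff(r) = (1 - 2M/r)(1 + L^2/r^2)
# Its extrema solve M r^2 - L^2 r + 3 M L^2 = 0, so circular orbits exist
# only for L >= 2*sqrt(3)*M, with both roots merging at the ISCO, r = 6M.
M = 1.0

def circular_radii(L):
    disc = L**4 - 12.0 * (M * L)**2
    if disc < 0:
        return None                     # no extrema: the particle plunges
    s = math.sqrt(disc)
    r_unstable = (L**2 - s) / (2.0 * M) # maximum of V_eff: unstable orbit
    r_stable = (L**2 + s) / (2.0 * M)   # minimum of V_eff: stable orbit
    return r_unstable, r_stable

print(circular_radii(4.0))    # two circular orbits, r = 4M and r = 12M
print(circular_radii(3.465))  # just above threshold: both radii near r = 6M
print(circular_radii(3.0))    # None: every such trajectory spirals inwards
```

For angular momentum below the threshold there is no potential barrier left, which is exactly the "drawn inexorably inwards" behaviour quoted in the question.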



Why does this graph show an unstable point beyond the Schwarzschild radius?



I just explained why the critical values of $r$ beneath which one can no longer oscillate are inevitably outside the event horizon, so it's the same question as the second question answered here. There can't ever be periodic orbits inside the black hole (less than event horizon) because this would contradict the fact that an observer inside is inevitably dragged towards the singularity.




For elliptical orbits that come very close to the event horizon, is there some kind of orbital decay?



There are no elliptical orbits near the event horizon anymore. This is the main point of this whole material; because you didn't know the answer when writing the question, it is understandable that you added several confusing questions based on assuming wrong answers to the previous ones.



How is energy conserved?



Energy is perfectly conserved in all these considerations. As always in similar mechanical exercises, even in non-relativistic mechanics, the decrease of kinetic energy is compensated by the increase of potential energy and vice versa. However, the formulae for the potential and kinetic energy have a new, nonlinear dependence on $r$, which is why it's no longer true that all the trajectories are simple conics. One must say that even in Newton's gravity, the conic character of all trajectories was a sort of coincidence, one that doesn't appear for any other potential than $K/r$.


Note that even in Newton's mechanics, it's untrue that all trajectories are periodic. With too high a velocity, the trajectories are parabolic or hyperbolic.



In short, do all orbits (with GR effects) look like the precession shown above, or is there another shape that we see if it gets closer to the Schwarzschild radius?




All orbits qualitatively look like precession but, as discussed in every single question above, there are no back-and-forth orbits for certain initial conditions. So for these initial conditions, going too close to the event horizon, the trajectories will qualitatively look like "spirals".


newtonian mechanics - Why does work depend on distance?


So the formula for work is$$ \left[\text{work}\right] ~=~ \left[\text{force}\right] \, \times \, \left[\text{distance}\right] \,. $$


I'm trying to get an understanding of how this represents energy.


If I'm in a vacuum, and I push a block with a force of $1 \, \mathrm{N},$ it will move forwards infinitely. So as long as I wait long enough, the distance will keep increasing. This seems to imply that the longer I wait, the more work (energy) has been applied to the block.


I must be missing something, but I can't really pinpoint what it is.


It only really seems to make sense when I think of the opposite scenario: when slowing down a block that is (initially) going at a constant speed.



Answer



You have to put in the distance over which the force acts. If you release the force, no further work is done, since there is no force acting on the body.
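A small simulation makes this concrete. In this sketch (with illustrative numbers) a 1 N force acts only over the first 2 m and then switches off; the work done equals the kinetic energy gained, and nothing more accumulates while the block coasts:

```python
# Work-energy sketch (illustrative numbers): a 1 N force pushes a 2 kg block
# over the first 2 m only, then switches off; the block coasts afterwards.
m, F, d_push = 2.0, 1.0, 2.0   # kg, N, m
dt = 1e-4                      # time step, s
x = v = work = 0.0
while x < 10.0:                # simulate well past the push phase
    f = F if x < d_push else 0.0
    work += f * v * dt         # dW = F dx = F v dt; zero once f = 0
    v += (f / m) * dt
    x += v * dt

ke = 0.5 * m * v**2
print(work, ke)                # both ~2.0 J = F * d_push
```

However long the block coasts afterwards, both numbers stay at $F \times d$: distance traveled with no force does no work.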


Saturday, 24 March 2018

entropy - Second law of thermodynamics and the arrow of time: why isn't time considered fundamental?


I've come across this explanation that the "arrow of time" is a consequence of the second law of thermodynamics, which says that the entropy of an isolated system is always increasing. The argument is that the past looks different from the future because of this increase in entropy. However, this still doesn't make time vanish, since a hypothetical clock could still be ticking in a completely uniform universe, but only this time there is no change in entropy because it is already at a maximum. In this maximum entropy universe, the past would look just like the future, but that need not mean that time isn't still flowing. It is just that we $\it{associate}$ the flow of time with a change that is recognizable.


But after having watched a few episodes of Professor Brian Cox's Wonders of the Universe, I want to know the deeper reason behind why people make the entropy argument. I've heard Sean Carroll make a similar argument to a group of physicists, so I get the idea it is not just a sloppy popularization.




electrostatics - Force when distance between charge is zero


According to Coulomb's law


$$ F = \frac{q_1q_2}{r^2} $$


I want to know what happens to the force when $r=0$. If $F \to \infty$ then the charges can't be separated! But if an unlike charge of higher magnitude is placed beside either $q_1$ or $q_2$, it still gets attracted. Can anyone clear this up for me?





energy - Do virtual photons have a frequency?


Real photons have frequencies, directly related to their energy. So, can the virtual photons that take part in EM interactions have frequencies too?


When my hand is pressed up against a glass window, do the virtual photons taking part in the EM interaction keeping my hand from falling through the window have a frequency, compared to the photons that pass through the window (visible spectrum) or those reflecting off it (frequencies to which the glass is opaque)?



Also, since virtual photons may be massive and definitely have four-momentum, they definitely do have some energy - so is there any notion of frequency?




General relativity: gauge fixing


In his lectures Professor Hamber said that the metric tensor is not unique, just like the 4-vector potential is not unique for a given field in electrodynamics. Since the metric tensor is symmetric, only ten of its components are independent.


However, as the covariant divergence of the Einstein tensor is zero, 4 more constraints are imposed, and hence the number of independent components of the metric tensor comes down to 6. Finally he says that only two are independent.


How did he arrive at the final result of 2 independent components of the metric tensor? Can you please explain this to me? Also, what is the physical difference between the Ricci tensor and the Riemann tensor?




phase diagram - Why is supercritical fluid not considered a separate state of matter?


As given on this link, supercritical fluids are viewed more as a continuum which has both liquid and gas properties. This continuum is obtained when a gas is brought to a pressure and a temperature higher than its critical values. The intermolecular distances in a pure supercritical fluid are between those of a liquid and a gas. With so many qualitative differences between a supercritical fluid and both a gas and a liquid, why is it not considered a separate state of matter?



Answer



We normally consider the various states of matter to be separated by a phase transition, and generally this is a first order phase transition (an exception is the second order glass-liquid transition). So for example the solid to liquid transition is (usually) a first order phase transition, and likewise the liquid to gas transition.


However if we move from the liquid to the supercritical fluid by increasing the temperature, as shown by the arrow in this diagram:


Supercritical fluid



then we measure neither a first- nor a second-order phase transition. The system changes continuously. You'd get a similar result by starting with the gas and increasing the pressure to move into the supercritical region.


You'll hear arguments about what constitutes a separate phase of matter, e.g. about plasmas or superfluid states, and I'm sure someone somewhere will have referred to the supercritical fluid as a separate state. However there is no thermodynamic reason to do so.


general relativity - Ricci scalar for a diagonal metric tensor


I was wondering if there is a general formula for calculating Ricci scalar for any diagonal $n\times n$ metric tensor?



Answer



The previous answer is correct, but does not give a practical algorithm for humans, because it is a nightmare to calculate the curvature tensor. You need a good hand algorithm, or else you need a symbolic manipulation package. I prefer hand calculations for the symmetric Ansatzes, because they are always revealing.


The traditional simplified method is to use curvature forms, and this method is described in Misner, Thorne and Wheeler. It is indispensable for understanding Kerr solutions. It also pays to study the Newman-Penrose formalism, because it gives physical insight. I prefer to use my own mathematically inelegant home-made method, because much of the simplification in the advanced methods is really only due to the use of what is called "sparse-matrix computation" in the computer science literature.
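For comparison with such symbolic packages, here is a minimal brute-force sketch in sympy (not the basis-tensor method described below), checked on the 2-sphere of radius $a$, for which the Ricci scalar is $R = 2/a^2$:

```python
import sympy as sp

# Brute-force sketch: Ricci scalar of a diagonal metric with sympy, checked
# on the 2-sphere, metric diag(a^2, a^2 sin^2 theta), where R = 2/a^2.
theta, phi, a = sp.symbols('theta phi a', positive=True)
x = [theta, phi]
n = len(x)
g = sp.diag(a**2, a**2 * sp.sin(theta)**2)
ginv = g.inv()

# Christoffel symbols Gamma^i_{jk} = (1/2) g^{il} (g_{lj,k} + g_{lk,j} - g_{jk,l})
Gamma = [[[sum(ginv[i, l] * (sp.diff(g[l, j], x[k]) + sp.diff(g[l, k], x[j])
                             - sp.diff(g[j, k], x[l])) for l in range(n)) / 2
           for k in range(n)] for j in range(n)] for i in range(n)]

# Ricci tensor R_{jk} = d_i Gamma^i_{jk} - d_k Gamma^i_{ji}
#                       + Gamma^i_{il} Gamma^l_{jk} - Gamma^i_{kl} Gamma^l_{ji}
def ricci(j, k):
    expr = 0
    for i in range(n):
        expr += sp.diff(Gamma[i][j][k], x[i]) - sp.diff(Gamma[i][j][i], x[k])
        for l in range(n):
            expr += Gamma[i][i][l] * Gamma[l][j][k] - Gamma[i][k][l] * Gamma[l][j][i]
    return sp.simplify(expr)

R = sp.simplify(sum(ginv[j, k] * ricci(j, k) for j in range(n) for k in range(n)))
print(R)  # 2/a**2
```

The same loops work for any diagonal (indeed any invertible) metric in any dimension, which is exactly why a symbolic package handles the general case while the hand method below trades generality for speed on symmetric Ansatzes.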


If you have a matrix which is mostly zeros, like the on-diagonal curvature, you shouldn't write it in matrix form, unless you want to build good strong writing-hand muscles. Introduce noncovariant basis tensors $l_{ij}$ which are nonzero only in the $i,j$ position. Then write the metric in a mostly-plus convention as:


$$g_{\mu\nu} = -A l_{00} + B l_{11} + C l_{22} $$ $$g^{\mu\nu} = -{1\over A}l^{00} + {1\over B} l^{11} + {1\over C}l^{22}$$


Where I have gone to three dimensions so as to prevent a hopelessly long answer, and where I hope the notation is clear. For theoretical elegance, the basis tensor $l_{00}$ should really be written as $l^{00}_{\mu\nu}$ if you want it to be consistent with the usual index conventions, but since the goal is to get the writing muscles as flabby as possible, don't do that. Since the l's are ridiculously coordinate dependent, you can express ridiculously non-tensorial objects like the connection coefficients and the pseudo-stress-energy tensor.


Calculating the connection


There are tricks to calculating the connection, like deriving the geodesic equation, but I won't use them. If you use the basis-tensors, it is no work at all to get the connection coefficients, and with practice, you can do most of the work in your head for the simpler Ansatzes.



First, differentiate the metric. Since "diagonal" is not much of a simplification, I will assume "diagonal and dependent only on $x_0$ and $x_1$". I will use a prime for differentiating with respect to $x_1$, and a dot for differentiating with respect to $x_0$:


$$g_{\mu\nu,\alpha} = - \dot{A} l_{000} - A' l_{001} + \dot{B}l_{110} + B'l_{111} + \dot{C}l_{220} + C' l_{221}$$


The lesson is--- differentiation is trivial. Notice that this is symmetric on the first two indices, and nothing special on the third index. The Christoffel symbols are symmetric on the last two indices, and nothing special on the first. You transfer symmetry between index positions like this:


$$P_{i|jk} = Q_{ij|k} + Q_{ik|j} - Q_{jk|i}$$


Where P is symmetric in the last two position, and Q is symmetric in the first two. Get used to this, because it comes up a lot. The first term has the same index order just by using a good index order convention, the second term forcibly symmetrizes the second and third positions, and the last term is required so that P keeps all the information in Q. You can do this procedure on the l's automatically, just by replacing $l_{001}$ with $l_{001} + l_{010} - l_{100}$, and so on. This is what you do to get $\Gamma$ from the derivative of g.


Here is the formula for the all-lower-index $\Gamma$ (it's never written this way, because $\Gamma$ is not a tensor, but I do it here just to spare the writing hand).


$$\Gamma_{\mu|\nu\sigma} = -{1\over 2} ( -\dot{A} l_{000} - A'(l_{001} + l_{010} -l_{100}) $$ $$+ \dot{B} (l_{110} + l_{101} - l_{011}) + B' l_{111}$$ $$+ \dot{C}(l_{220} + l_{202} - l_{022}) + C'(l_{221} + l_{212} - l_{122}) )$$


To raise the indices when the metric is diagonal is trivial, you just raise the index on the l and divide by the appropriate diagonal entry:


$$\Gamma^\mu_{\nu\sigma} = {\dot{A}\over 2A} l^0_{00} - {A'\over 2A} (l^0_{01} + l^0_{10}) - {A'\over 2B} l^1_{00} + {\dot{B}\over 2B}(l^1_{10} + l^1_{01}) - {\dot{B}\over 2A}l^0_{11} + {B'\over 2B} l^1_{11} +... $$


Where the rest should be obvious. With practice, this takes a minute to do by hand.



Calculating the Ricci curvature.


To calculate the curvatures, it is important to trace as you go, because this halves the work. The Riemann curvature always has a bunch of Weyl junk that you mostly don't care about.


I always write the formula for the Riemann tensor this way:


$$ R^\mu_{\nu\lambda\sigma} = \Gamma^\mu_{\nu\lambda,\sigma} \mathrm{(AS)} + \Gamma^\mu_{\nu s}\Gamma^s_{\lambda\sigma} \mathrm{(AS)}$$


The "AS" means subtract the same expression with $\nu$ and $\sigma$ interchanged. This form has the property that it is antisymmetric on the lower first and third index, so this is not the usual convention for the Riemann tensor, which is antisymmetric on the last two indices. But this is easy to fix at the end. Trust me, this is the best convention, you fix it up at the end.


The Ricci trace in this convention is on the first two indices:


$$ R_{\mu\nu} = R^\alpha_{\alpha\mu\nu}$$


This is important, because each term you get in the Riemann tensor comes with an "l", and if the upper number of the l is not the same as the leftmost lower number, then that term doesn't contribute to the Ricci tensor. To get the Ricci tensor, you just ignore all $l^a_{bcd}$ with $a\ne b$ and write down $l_{cd}$ in place of those $l$'s which have $a=b$.


Now, differentiate the expression for $\Gamma$, tacking an index on the end. I will demonstrate with one of the contributions, from taking the $x_1$ derivative of the first term:


$$\Gamma^\mu_{\nu\lambda,\sigma} \mathrm{(AS)} = ..+ ({\dot{A}\over 2A})' (l^0_{001} - l^0_{100})$$



Where the second term is the antisymmetrizer. But now, look at the two l's --- does one of them have a matching index on the top left and bottom left? Yes! So erase the top and bottom left numbers from the l, and you are left with a contribution to the Ricci tensor:


$$({\dot{A}\over 2A})'l_{01}$$


This can all be done in your head, term by term. If you get an $l$ which is $l^0_{221}$ it can't contribute to Ricci, because the bottom first and third index don't match the top, if you get $l^0_{121}$ then it is killed by antisymmetrization, etc, etc, it's all obvious.


Next you need to multiply $\Gamma$ with itself, and trace. Here you work out all the terms. But it is a finite calculation, not a hopeless one. You don't get a contribution unless the leftmost lower index is the same as the upper index, so that leaves only a few terms, and further, you get zero on some l-terms after antisymmetrizing and tracing (in your head). Then there are only a handful of remaining terms, and these are real contributions to the curvature, so there is no way to avoid calculating them.


The first time takes a while, but with practice it only takes some minutes for simpler Ansatzes and some hours for the worst ones.


cosmology - Smolin on Cosmological selection and neutron stars


Regarding the cosmological selection hypothesis and testable predictions, Lee Smolin asserted the following:



"Smolin: I did make two predictions which were eminently checkable by astrophysical and cosmological observations, and both of them could easily have been falsified by observations over the last 20 years, and both have been confirmed by observations so far.


One of them concerns the masses of neutron stars and the prediction is there can't be a neutron star heavier than about twice the mass of the sun. This continues to be confirmed by the best measurements of the masses of neutron stars."



What is he referring to? As far as I know, the neutron star mass limit is a prediction of GR, and doesn't suggest that any fine-tuning is involved in it. Is this correct?



http://www.space.com/21335-black-holes-time-universe-creation.html




Friday, 23 March 2018

quantum mechanics - Are orbitals observable physical quantities in a many-electron setting?


Orbitals, both in their atomic and molecular incarnations, are immensely useful tools for analysing and understanding the electronic structure of atoms and molecules, and they provide the basis for a large part of chemistry and in particular of chemical bonds.


Every so often, however, one hears about a controversy here or there about whether they are actually physical or not, or about which type of orbital should be used, or about whether claimed measurements of orbitals are true or not. For some examples, see this, this or this page. In particular, there are technical arguments that in a many-body setting the individual orbitals become inaccessible to experiment, but these arguments are not always specified in full, and many atomic physics and quantum chemistry textbooks make only casual mention of that fact.


Is there some specific reason to distrust orbitals as 'real' physical quantities in a many-electron setting? If so, what specific arguments apply, and what do and don't they say about the observability of orbitals?



Answer



Generally speaking, atomic and molecular orbitals are not physical quantities, and generally they cannot be connected directly to any physical observable. (Indirect connections, however, do exist, and they do permit a window that helps validate much of the geometry we use.)


There are several reasons for this. Some of them are relatively fuzzy: they present strong impediments to experimental observation of the orbitals, but there are some ways around them. For example, in general it is only the square of the wavefunction, $|\psi|^2$, that is directly accessible to experiments (but one can think of electron interference experiments that are sensitive to the phase difference of $\psi$ between different locations). Another example is the fact that in many-electron atoms the total wavefunction tends to be a strongly correlated object that's a superposition of many different configurations (but there do exist atoms whose ground state can be modelled pretty well by a single configuration).


The strongest reason, however, is that even within a single configuration $-$ that is to say, an electronic configuration that's described by a single Slater determinant, the simplest possible many-electron wavefunction that's compatible with electron indistinguishability $-$ the orbitals are not recoverable from the many-body wavefunction, and there are many different sets of orbitals that lead to the same many-body wavefunction. This means that the orbitals, while remaining crucial tools for our understanding of electronic structure, are generally on the side of mathematical tools and not on the side of physical objects.





OK, so let's turn away from fuzzy handwaving and into the hard math that's the actual precise statement that matters. Suppose that I'm given $n$ single-electron orbitals $\psi_j(\mathbf r)$, and their corresponding $n$-electron wavefunction built via a Slater determinant, \begin{align} \Psi(\mathbf r_1,\ldots,\mathbf r_n) & = \det \begin{pmatrix} \psi_1(\mathbf r_1) & \ldots & \psi_1(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n(\mathbf r_1) & \ldots & \psi_n(\mathbf r_n) \end{pmatrix}. \end{align}



Claim


If I change the $\psi_j$ for linear combinations of them, $$\psi_i'(\mathbf r)=\sum_{j=1}^{n} a_{ij}\psi_j(\mathbf r),$$ then the $n$-electron Slater determinant $$ \Psi'(\mathbf r_1,\ldots,\mathbf r_n) = \det \begin{pmatrix} \psi_1'(\mathbf r_1) & \ldots & \psi_1'(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n'(\mathbf r_1) & \ldots & \psi_n'(\mathbf r_n) \end{pmatrix}, $$ is proportional to the initial determinant, $$\Psi'(\mathbf r_1,\ldots,\mathbf r_n)=\det(a)\Psi(\mathbf r_1,\ldots,\mathbf r_n).$$ This implies that both many-body wavefunctions are equal under the (very lax!) requirement that $\det(a)=1$.



The proof of this claim is a straightforward calculation. Putting in the rotated orbitals yields \begin{align} \Psi'(\mathbf r_1,\ldots,\mathbf r_n) &= \det \begin{pmatrix} \psi_1'(\mathbf r_1) & \cdots & \psi_1'(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n'(\mathbf r_1) & \cdots & \psi_n'(\mathbf r_n) \end{pmatrix} \\&= \det \begin{pmatrix} \sum_{i}a_{1i}\psi_{i}(\mathbf r_1) & \cdots & \sum_{i}a_{1i}\psi_{i}(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \sum_{i}a_{ni}\psi_{i}(\mathbf r_1) & \cdots & \sum_{i}a_{ni}\psi_{i}(\mathbf r_n) \end{pmatrix}, \end{align} which can be recognized as the following matrix product: \begin{align} \Psi'(\mathbf r_1,\ldots,\mathbf r_n) &= \det\left( \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \\ \end{pmatrix} \begin{pmatrix} \psi_1(\mathbf r_1) & \cdots & \psi_1(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n(\mathbf r_1) & \cdots & \psi_n(\mathbf r_n) \end{pmatrix} \right). \end{align} The determinant then factorizes as usual, giving \begin{align} \Psi'(\mathbf r_1,\ldots,\mathbf r_n) &= \det \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \\ \end{pmatrix} \det \begin{pmatrix} \psi_1(\mathbf r_1) & \cdots & \psi_1(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n(\mathbf r_1) & \cdots & \psi_n(\mathbf r_n) \end{pmatrix} \\ \\&=\det(a)\Psi(\mathbf r_1,\ldots,\mathbf r_n), \end{align} thereby proving the claim.
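The claim can also be checked numerically: sampling the $n$ orbitals at $n$ positions turns each Slater determinant into an ordinary matrix determinant, and mixing the orbitals with a unit-determinant matrix leaves it unchanged. The sizes and random values below are just an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Sample n orbitals at n positions: Phi[i, j] stands for psi_i(r_j)
Phi = rng.standard_normal((n, n))

# Any mixing matrix, rescaled so that det(a) = 1
a = rng.standard_normal((n, n))
a[0] /= np.linalg.det(a)

Phi_prime = a @ Phi   # psi'_i = sum_j a_ij psi_j, at the same sample points

# The two Slater determinants agree at these sample points
print(np.linalg.det(Phi), np.linalg.det(Phi_prime))
```

Dividing the first row of $a$ by its determinant is just a cheap way of enforcing $\det(a)=1$ without changing the span of the mixed orbitals.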




Disclaimers


The calculation above makes a very precise point about the measurability of orbitals in a multi-electron context. Specifically, saying things like




the lithium atom has two electrons in $\psi_{1s}$ orbitals and one electron in a $\psi_{2s}$ orbital



is exactly as meaningful as saying



the lithium atom has one electron in a $\psi_{1s}$ orbital, one in the $\psi_{1s}+\psi_{2s}$ orbital, and one in the $\psi_{1s}-\psi_{2s}$ orbital,



since both will produce the same global many-electron wavefunction. This does not detract in any way from the usefulness of the usual $\psi_{n\ell}$ orbitals as a way of understanding the electronic structure of atoms, and they are indeed the best tools for the job, but it does mean that they are at heart tools and that there are always alternatives which are equally valid from an ontology and measurability standpoint.


However, there are indeed situations where quantities that are very close to orbitals become accessible to experiments and indeed get measured and reported, so it's worth going over some of those to see what they mean.


The most obvious is the work of Stodolna et al. [Phys. Rev. Lett. 110, 213001 (2013)], which measures the nodal structure of hydrogenic orbitals (good APS Physics summary here; discussed previously in this question and this one). These are measurements in hydrogen, which has a single electron, so the multi-electron effect discussed here does not apply. These experiments show that, once you have a valid, accessible one-electron wavefunction in your system, it is indeed susceptible to measurement.


Somewhat more surprisingly, recent work has claimed to measure molecular orbitals in a many-electron setting, such as Nature 432, 867 (2004) or Nature Phys. 7, 822 (2011). These experiments are surprising at first glance, but if you look carefully it turns out that they measure the Dyson orbitals of the relevant molecules: this is essentially the overlap $$ \psi^\mathrm{D}=\langle\Phi^{(n-1)}|\Psi^{(n)}\rangle $$ between the $n$-electron ground state $\Psi^{(n)}$ of the neutral molecule and the relevant $(n-1)$-electron eigenstate $\Phi^{(n-1)}$ of the cation that gets populated. (For more details see J. Chem. Phys. 126, 114306 (2007) or Phys. Rev. Lett. 97, 123003 (2006).) This is a legitimate, experimentally accessible one-electron wavefunction, and it is perfectly measurable.



newtonian mechanics - Dependence of Friction on Area


Is friction really independent of area? The friction force is $f_s = \mu_s N$. The equation says that friction depends only on the normal force, $N = W = mg$, and on the nature of the sliding surfaces, through $\mu_s$.


Now, a less inflated tyre experiences more friction than a well inflated one. Can someone give a clear explanation of why friction does not depend on area, as the textbooks say?



Answer



The increased 'resistance' of an underinflated tyre is due to mechanical deformation; friction is independent of area, as suggested. The simplest explanation for me is that as the area increases, the applied force per unit area decreases, but there is more contact surface to resist motion.


Added as per Zass' suggestion below:


$$\rm{Friction}= \rm{Material\ Coefficient} \times \rm{Pressure} \times \rm{Contact\ Area}$$


Where the material coefficient is a measure of the 'grippiness' of the material, and the other two factors are the pressure applied to the surface and the area of the surfaces in contact. Since pressure is force per unit area, the area in the pressure term cancels with the third term.
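With made-up numbers, the cancellation is explicit: enlarging the contact area reduces the pressure by the same factor, and the product is unchanged:

```python
mu = 0.7          # material coefficient (made-up value)
weight = 12000.0  # normal load in newtons (made-up value)

for area in (0.05, 0.10, 0.20):      # contact areas in square metres
    pressure = weight / area          # pressure falls as area grows
    friction = mu * pressure * area   # the areas cancel
    print(area, friction)             # friction stays at mu * weight throughout
```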


This is not to be confused with traction, where spreading the motive force over a larger area can help.


optics - Can telescope have dual purpose?



An astronomical telescope has a small eyepiece and a large objective lens, whereas a compound microscope has a large eyepiece and a small objective lens. Can we therefore use an astronomical telescope as a compound microscope by viewing through the objective lens?




How does the Higgs Boson gain mass itself?



If the Higgs field gives mass to particles, and the Higgs boson itself has mass, does this mean there is some kind of self-interaction?


Also, does the Higgs Boson have zero rest mass and so move at light-speed?



Answer



Most of the popular science TV programmes and magazine articles give entirely the wrong idea about how the Higgs mechanism works. They tend to give the impression that there is a single Higgs boson that (a) gives particles their masses and (b) will be found around 125 GeV by the LHC.


The mass is generated by the Higgs field. See the Wikipedia article on the Higgs mechanism for details. To (over)simplify, the Higgs field has four degrees of freedom, three of which interact with the W and Z bosons and generate their masses. The remaining degree of freedom is what we see as the 125 GeV Higgs boson.


In a sense, the Higgs boson that the LHC is about to discover is just what's left over after the Higgs field has done its work. The Higgs boson gets its mass from the Higgs mechanism just like the W and Z bosons: it's not the origin of the particle masses.


The Higgs boson doesn't have zero rest mass.


A quick footnote:


Matt Strassler's blog has an excellent article about this. The Higgs mass can be written as an interaction with the Higgs field just like e.g. the W boson mass. However, Matt Strassler makes the point that this is a coincidence rather than anything fundamental, and that unlike the W and Z the Higgs boson could have a non-zero mass even if the Higgs field were zero everywhere.


newtonian mechanics - The effect of windspeed on a car


I've worked problems in the past in trig class concerning the effect of wind on the speed of a plane and its flight path, and was wondering if a similar thing occurs with a car.


First off, I'm pretty sure that if the speedometer reads 60 mph, even if the wind is blowing 15 mph in the same direction, you will still have travelled only 60 miles at the end of an hour. My question is whether, with the aid of the wind, the car reaches that speed more cheaply in terms of the work the engine has to perform and the gas consumed.


Is it correct to believe that the car is traveling 60 mph off of 45 mph effort, or, does it not work that way?



Answer



No. While the work the engine does would be reduced by a tailwind, it would not be reduced to what is needed for travel at the relative speed.


Work $\ne$ Force



The work that the engine must exert to maintain speed is equal to the drag times the velocity. So let's look at the ideal situation where the only drag on the car was due to the wind. The drag on a car going 60 mph with a 15 mph tail wind would indeed be equivalent to the drag on the same car going 45 mph in still air. However, the work that the faster car would have to do to overcome this drag would still be $\frac43$ as much as the slower moving car.
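The $\frac43$ ratio quoted above is just $P = F_\mathrm{drag}\,v$ with equal drag forces on the two cars; the quadratic drag constant below is arbitrary:

```python
k = 0.5   # drag constant, arbitrary made-up units

def drag_power(ground_speed, wind_speed):
    """Power needed to overcome quadratic air drag while moving over the ground."""
    airspeed = ground_speed - wind_speed
    return k * airspeed**2 * ground_speed   # P = F_drag * v_ground

p_tail = drag_power(60.0, 15.0)    # 60 mph with a 15 mph tailwind
p_still = drag_power(45.0, 0.0)    # 45 mph in still air

print(p_tail / p_still)            # 60/45 = 4/3: same drag force, more distance per second
```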


Interestingly, this is not the case with the airplane. This is because while the car pushes on the ground, the airplane pushes on the air. When there is a tail wind, the airplane doesn't have to use as much energy to impart the same amount of momentum on the wind.


Other drag


The car engine must overcome other resistance forces besides air resistance. The rolling friction of the tires, the engine and drive train friction, and others will contribute to the amount of work the car must do to maintain speed. Many of these sources of friction increase with wheel speed, and thus velocity of the car, so it would not only take more work and power to maintain the 60 mph, but also more force.


electric circuits - I don't understand what we really mean by voltage drop


This post is my best effort to seek assistance on a topic which is quite vague to me, so that I am struggling to formulate my questions. I hope that someone will be able to figure out what it is I'm trying to articulate.


If we have a circuit with a resistor, we speak of the voltage drop across the resistor.


I understand all of the calculations involved in voltage drop (ohm's law, parallel and series, etc.). But what I seek is to understand on a conceptual level what voltage drop is. Specifically: what is the nature of the change that has taken place between a point just before the resistor and a point just after the resistor, as the electrons travel from a negatively to a positively charged terminal.


Now as I understand it, "voltage" is the force caused by the imbalance of charge which causes pressure for electrons to travel from a negatively charged terminal to a positively charged terminal, and "resistance" is a force caused by a material which, due to its atomic makeup, causes electrons to collide with its atoms, thus opposing that flow of electrons, or "current". So I think I somewhat understand voltage and resistance on a conceptual level.


But what is "voltage drop"? Here's what I have so far:





  • Voltage drop has nothing to do with the number of electrons, meaning that the number of electrons in the atoms just before entering the resistor equals the number just after




  • Voltage drop also has nothing to do with the speed of the electrons: that speed is constant throughout the circuit




  • Voltage drop has to do with the release of energy caused by the resistor.





Maybe someone can help me understand what voltage drop is by explaining what measurable difference there is between points before the resistor and points after the resistor.


Here's something that may be contributing to my confusion regarding voltage drop: if voltage is the difference in electrons between the positive terminal and the negative terminal, then shouldn't the voltage be constant at every single point between the positive terminal and the negative terminal? Obviously this is not true, but I'd like to get clarification as to why.


Perhaps I can clarify what I'm trying to get at with the famous waterwheel analogy: we have a pond below, a reservoir above, a pump pumping water up from the pond to the reservoir, and on the way down from the reservoir, the water passes through a waterwheel, the waterwheel being analogous to the resistor. So if I were to stick my hand in the water on its way down from the reservoir, would I feel anything different, depending on whether I stuck my hand above or below the waterwheel? I hope that this question clarifies what it is I'm trying to understand about voltage drop.
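For concreteness, the bookkeeping the analogy is reaching for can be written out for a simple series loop; all component values below are made up:

```python
V_battery = 9.0                     # volts (made-up value)
resistors = [100.0, 200.0, 300.0]   # ohms (made-up values)

# The same current flows through every element of a series loop
I = V_battery / sum(resistors)

potential = V_battery
for R in resistors:
    drop = I * R                    # voltage drop across this resistor
    potential -= drop
    print(R, drop, potential)

# Kirchhoff's voltage law: the potential is back to ~0 at the negative terminal
```

The current (the flow rate, like the water's speed) is the same everywhere, while the potential steps down across each resistor, just as the water's height drops across the waterwheel.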


EDIT: I have read and thought about the issue more, so I'm adding what I've since learned:


It seems that the energy which is caused by the voltage difference between the positive and negative terminals is used up as the electrons travel through the resistor, so apparently, it is this expenditure of energy which is referred to as the voltage drop.


So it would help if someone could clarify in what tangible, empirical way could we see or measure that there has been an expenditure of energy by comparing a point on the circuit before the resistor and a point on the circuit after the resistor.


EDIT # 2: I think at this point what's throwing me the most is the very term "voltage drop".


I'm going to repeat the part of my question which seems to be still bothering me the most:


"Here's something that may be contributing to my confusion regarding voltage drop: if voltage is the difference in electrons between the positive terminal and the negative terminal, then shouldn't the voltage be constant at every single point between the positive terminal and the negative terminal? Obviously this is not true, but I'd like to get clarification as to why."



In other words, whatever takes place across the resistor, how can we call this a "voltage drop" when the voltage is a function of the difference in number of electrons between the positive terminal and negative terminal?


Now I've been understanding the word drop all along as "reduction", and so I've been interpreting "voltage drop" as "reduction in voltage". Is this what the phrase means?


Since I've read that voltage in all cases is a measurement between two points, then a reduction in voltage would necessarily require four different points: two points to delineate the voltage prior to the drop and two points to delineate the voltage after the drop, so which 4 points are we referring to?


Perhaps a more accurate term would have been "drop in the potential energy caused by the voltage" as opposed to a drop in the voltage?


EDIT # 3: I think that I've identified another point which has been a major (perhaps the major) contribution to the confusion I've been having all along, and that is what I regard as a bit of a contradiction between two essential definitions of voltage.


When we speak of a 1.5V battery, even before it is hooked up to any wiring / switches / load / resistors / whatever, we are speaking of voltage as a function of nothing other than the difference in electric charge between the positive and negative terminals, i.e the difference in excess electrons between the two terminals.


Since there is a difference in number of electrons only in reference to the terminals, I therefore have been finding it confusing to discuss voltage between any other two points along the circuit -- how could this be a meaningful issue, since the only points on the circuit where there is a difference in the number of electrons is at the terminals -- so how can we discuss voltage at any other points?


But there is another definition of voltage, which does make perfect sense in the context of any two points along a circuit. Here we are speaking of voltage in the context of Ohm's law: current times resistance. Of course, in this sense, voltage makes sense at any two points, and since resistance can vary at various points along the circuit, so clearly voltage can vary at different points along the circuit.


But, unlike the first sense of voltage, where the voltage is a result of the difference in electrons between the terminals, when we speak of voltage between two points along the circuit, say, between a point just before a resistor and a point just after the resistor, we are not saying that there any difference in number of electrons between these two points.


I believe that it is this precise point which has been the main source of my confusion all along, and that's what I've been trying to get at all along. And this is what I've been struggling to ask all along: okay, in a battery, you can tell me that there is a voltage difference between the two terminals, meaning that you can show me, tangibly and empirically, that the atoms at the positive terminal have a deficit of electrons, and the atoms at the negative terminal have a surplus of electrons, and this is what we mean by the voltage between the two, then I can understand that.



But in contrast, I accept that there is voltage ($IR$) between a point just before a resistor and just after a resistor -- but can you take those two points, the one before the resistor and the one after the resistor, and show me any measurable qualitative difference between the two? Certainly there is no difference between the number of electrons in the atoms of those two points. In point of fact, I believe that there is no measurable difference between the two points.


Ah, now you'll tell me that you can show me the difference between the two points: you'll hook up a voltmeter to the two points, and that shows the voltage between them!


Sure, the voltmeter is telling us that something has happened between the two points. But the voltmeter does not tell us anything inherent in the points themselves -- unlike the two terminals of a battery, where there is an inherent difference between the two points: one has more excess electrons than the other -- that is a very inherent, concrete difference.


I guess what we can say is that the electrons travelling at a point just before the resistor are travelling with more energy than the electrons travelling at a point just after the resistor. But is there any way of observing the difference in energy other than a device that simply tells us that the amount of energy has dropped between the two points?


Let me try another way: we could also hook up a voltmeter to the two battery terminals, and the reading would indicate that there is voltage between the two terminals. And if I would ask you yes, but what is it about those two points that is causing that voltage, you could then say, sure: look at the difference in electrons between the two points -- that is the cause for the reading of the voltmeter.


In contrast, when we hook up the voltmeter to the points just before and after the resistor, the reading indicates a voltage between the two points. But in this case, if I would now ask you the same question: yes, but what is it about those two points that is causing the voltage, I'm not sure if you'd have an answer.


I think this crucially fundamental difference between the two senses of voltage is generally lost in such discussions.



