Sunday 31 December 2017

visible light - All objects radiate energy, but we cannot see all objects in the dark. Why?


We claim that all objects radiate energy by virtue of their temperature and yet we cannot see all objects in the dark. Why not?



Answer



The human eye is only capable of perceiving a very limited range of electromagnetic radiation, with wavelengths of roughly 400-800 nanometers. Objects at low temperatures (around room temperature) do not emit an appreciable amount of radiation in this range. The fact that we CAN see objects when it is light is due to reflection. For more information, take a look at this Wikipedia page.
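As a rough illustration of why room-temperature objects are invisible to the eye (a sketch, not part of the original answer), Wien's displacement law $\lambda_{\text{peak}} = b/T$ puts their peak emission deep in the infrared:

```python
# Wien's displacement law: peak emission wavelength of a blackbody.
# Illustrative sketch; B_WIEN is the Wien displacement constant.
B_WIEN = 2.898e-3  # m·K

def peak_wavelength_nm(temperature_k):
    """Return the blackbody peak emission wavelength in nanometers."""
    return B_WIEN / temperature_k * 1e9

room = peak_wavelength_nm(300)   # ~9660 nm: far infrared, invisible to us
sun = peak_wavelength_nm(5800)   # ~500 nm: right in the visible band
print(room, sun)
```

A 300 K object peaks near 10 micrometers, more than an order of magnitude outside the ~400-800 nm visible window, which is why it emits essentially nothing we can see.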



special relativity - Why does nonlinearity in quantum mechanics lead to superluminal signaling?



I recently came across two nice papers on the foundations of quantum mechanics, Aaronson 2004 and Hardy 2001. Aaronson makes the statement, which was new to me, that nonlinearity in QM leads to superluminal signaling (as well as the solvability of hard problems in computer science by a nonlinear quantum computer). Can anyone offer an argument with crayons for why this should be so?


It seems strange to me that a principle so fundamental and important can be violated simply by having some nonlinearity. When it comes to mechanical waves, we're used to thinking of a linear wave equation as an approximation that is always violated at some level. Does even the teensiest bit of nonlinearity in QM bring causality to its knees, or can the damage be limited in some sense?


Does all of this have any implications for quantum gravity -- e.g., does it help to explain why it's hard to make a theory of quantum gravity, since it's not obvious that quantum gravity can be unitary and linear?



S. Aaronson, "Is Quantum Mechanics An Island In Theoryspace?," 2004, arXiv:quant-ph/0401062.


L. Hardy, "Quantum theory from five reasonable axioms," 2001, arXiv:quant-ph/0101012.





homework and exercises - Reaction forces in pyramid stacking of steel coils



I was tasked with solving this problem at work: we are out of coil holders, we bought 140 coils, and we need to stack them ASAP. The companies that make the storage holders are back-ordered for several weeks, and no one is willing to share responsibility for this task. Safety first!


Given: 15 steel coils stacked 3 rows high (6 bottom, 5 middle, 4 top). Each weighs 10,000 lbs (4.5 metric tons) at most (8,500 lbs for most), has an ID of 20" and a length of 60". Some quick calculations show that the OD is 32-33 in.


Find: All reaction forces at the bottom for the following 2 scenarios: 1) There are only two lateral supports at the base, one on either side of the entire stack. 2) Each coil has its own lateral support.


This question becomes quite complex the more you look into it. I started with a single top coil: the geometry makes the normal force act 30 degrees from vertical (60 degrees from horizontal), so $2N\sin(60^\circ)=10000$, giving $N = 5773.5$ lbf going into the second row. This is where it gets confusing, and I would like input on the best way to go about it. I was trying to imagine the possibility of a method-of-joints type of approach. I was going to move on and just look at a free-body diagram of each coil independently.
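The single-top-coil step described above can be checked numerically (a sketch assuming equal coil diameters, so each contact normal lies 60 degrees from horizontal):

```python
import math

# Top coil of weight W rests in the valley between two equal coils below.
# For equal diameters the line of centers (the contact normal) makes a
# 60-degree angle with the horizontal, so: 2 * N * sin(60 deg) = W.
W = 10_000.0  # lbf, max coil weight from the problem statement

N = W / (2 * math.sin(math.radians(60)))
print(f"Normal force into each lower coil: {N:.1f} lbf")  # ~5773.5 lbf

# Horizontal component that the lateral supports must ultimately react:
H = N * math.cos(math.radians(60))
print(f"Horizontal component per contact: {H:.1f} lbf")   # ~2886.8 lbf
```

The horizontal component is the quantity that grows as the load cascades down the pyramid, which is why scenario 1 (only two end supports) concentrates large lateral reactions at the outermost bottom coils.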




Saturday 30 December 2017

open quantum systems - Is Hermiticity of the reduced density matrix preserved here?


I am following along Breuer and Petruccione's book. I would like to know if the property $\rho^{\dagger} = \rho$ is preserved for evolution that is described by the Born approximation.


For a Hilbert space $\mathscr{H}_s \otimes \mathscr{H}_{b}$ describing a system of interest and some reservoir/bath, we consider a time-independent Hamiltonian of the form $$ H = H_s \otimes \mathbb{I}_b + \mathbb{I}_s \otimes H_b + g H_{int} $$ where $g$ is some small coupling and $H_{int}$ describes some interaction between the system and the bath.


The full density matrix of the combined system and bath $\sigma(t)$ evolves via the von Neumann equation $$ \frac{d\sigma_I(t)}{dt} = - i [ V(t) , \sigma_{I}(t) ] $$ which is written in the interaction-picture, where $$ \sigma_{I}(t) := e^{+ i H_s t} \otimes e^{+ i H_b t} \sigma(t) e^{- i H_s t} \otimes e^{- i H_b t} \ \ \ \ \text{and} \ \ \ \ V(t) := e^{+ i H_s t} \otimes e^{+ i H_b t} H_{int} e^{- i H_s t} \otimes e^{- i H_b t} $$


If we define the reduced density matrix describing the system as the following partial trace $$ \rho(t) := \mathrm{Tr}_{b}[ \sigma(t) ] $$ and then $\rho_{I}(t) := e^{+ i H_s t} \rho(t) e^{- i H_s t}$, the Born approximation says that $$ \frac{d\rho_I(t)}{dt} \simeq - g^2 \int_0^t ds\ \mathrm{Tr}_b\bigg( \big[ V(t), [V(s), \rho_I(s) \otimes \varrho_{b}] \big] \bigg) $$ where $\varrho_b$ is the initial state of the bath at $t=0$, with $\sigma(0) = \rho(0) \otimes \varrho_b$.


My Question: Suppose that $\rho(0)^{\dagger}= \rho(0)$ so that the initial density matrix is Hermitian. How can you use the above equation to show that $\rho(t)^{\dagger}= \rho(t)$ for $t>0$?



Is this possible? In the literature, I sometimes come across statements that Lindblad equations preserve Hermiticity (Lindblad equations being the above equation, after taking the Markov approximation and then the secular/rotating-wave approximation).


Is it possible to prove this from this equation of motion? Or do we need additional assumptions? Or can this be more generally proven for $\rho$ without specifying to a particular evolution equation?




Friday 29 December 2017

buoyancy - Why is Buoyant Force $Vrho g$?


For a submerged object, buoyant force ($F_b$) is defined as:


$$F_b = V_{\text{submerged}} \times \rho \text{ (fluid density)} \times g \text{ (gravitational acceleration)}$$


Conceptually, the buoyant force equation says that the buoyant force exerted equals the weight of the volume of water a given object displaces. Why? I went online to http://faculty.wwu.edu/vawter/PhysicsNet/Topics/Pressure/BouyantForce.html


and found the following explanation, but it seems like a non sequitur to me:



Explanation: When an object is removed, the volume that the object occupied will fill with fluid. This volume of fluid must be supported by the pressure of the surrounding liquid since a fluid can not support itself. When no object is present, the net upward force on this volume of fluid must equal to its weight, i.e. the weight of the fluid displaced. When the object is present, this same upward force will act on the object.





Answer



The argument sounds perfectly reasonable.


Consider an arbitrary parcel of fluid in equilibrium. It exerts a downward force equal to its weight on the surrounding fluid, and it does not move. Therefore, according to the second law of motion, the downward force must be balanced by an upward force of equal magnitude, the buoyant force (otherwise the parcel would start to move, contradicting the equilibrium assumption).


The buoyant force is exerted by the fluid surrounding the parcel. Therefore if we replace the parcel with something else, there is no reason for that force to change.
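As a quick numeric illustration of the argument (the volume and fluid are made up for the example):

```python
# Buoyant force on a submerged object equals the weight of displaced fluid.
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def buoyant_force(submerged_volume_m3, fluid_density=RHO_WATER):
    """F_b = V * rho * g: the weight of the displaced fluid."""
    return submerged_volume_m3 * fluid_density * G

# A 0.002 m^3 (2-liter) bottle fully submerged in water:
f_b = buoyant_force(0.002)
displaced_weight = 0.002 * RHO_WATER * G  # weight of 2 liters of water
print(f_b)  # 19.62 N, identical to the displaced water's weight
```

Whatever occupies that 2-liter volume, the surrounding water pushes up on it with the same 19.62 N it would use to support 2 liters of water.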


Gauge fixing choice for the gauge field $A_0$


In many situations, I have seen that the author makes the gauge choice $A_0=0$, e.g. Manton in his paper on the force between 't Hooft-Polyakov monopoles.


Can you please provide a mathematical justification for this? How can I always make a gauge transformation such that $A_0=0$?


Under a gauge transformation $A_i$ transforms as



$$A_i \to g A_i g^{-1} - \partial_i g g^{-1},$$


where $g$ is in the gauge group.



Answer



The "gauge fixing" condition $A_0=0$ is called the temporal gauge or the Weyl gauge (please see the corresponding Wikipedia page). This condition is only a partial gauge fixing because the Yang-Mills Lagrangian remains gauge invariant under time-independent gauge transformations:


$A_i \to g A_i g^{-1} - \partial_i g g^{-1}, i=1,2,3$


with $g$ time independent.


However, this is not the whole story: the time derivative of $A_0$ does not appear in the Yang-Mills Lagrangian, so $A_0$ is not a dynamical variable; it is just a Lagrange multiplier. Its equation of motion is the Gauss law:


$\nabla \cdot \mathbf{E} = 0$.


One cannot obtain this equation after setting $A_0 = 0$, so it must be added as a constraint and required to vanish on the physical states in canonical quantization. (This is why it is called the Gauss law constraint.)


astronomy - How fast is Earth moving through the universe?


The galaxy is moving, the solar system is orbiting the galaxy, and the Earth is orbiting the Sun. So how fast is each object moving, and what is the fastest speed we move at?


Do we even know how fast the galaxy is moving when not measured relative to another galaxy (although I guess velocity has to be measured relative to something)?




Thursday 28 December 2017

distance - In relativity, what is the difference between a rod that is perpendicular to direction of motion and a rod parallel to the direction of motion?


In Feynman's Lectures on Physics, chapter 15, page 6, he writes about two identical, synchronized light-signal clocks. These are clocks that consist of a rod (a meter stick) with a mirror at each end; the light goes up and down between the mirrors, making a click each time it goes down. He describes giving one of these clocks to a man flying out in space, while the other remains stationary. The man in the spaceship mounts the clock perpendicular to the motion of the spaceship. Feynman then writes:




"the length of the rod will not change. How do we know that perpendicular lengths do not change? The men can agree to make marks on each other's y-meter stick as they pass each other. By symmetry, the two marks must come at the same y- and y'-coordinates, since otherwise, when they get together to compare results, one mark will be above or below the other, and so we could tell who was really moving."



What exactly is the "test" with the marking of meter sticks that Feynman is describing? Why would it violate relativity, since it seems the person in the spaceship would be looking outside? Why would a change in a perpendicular length violate relativity, but not a change in a parallel length? Couldn't the men also make marks on each other's sticks in the case of parallel lengths? Thank you!




optics - How can I measure the amplitude of a light wave?


Suppose I have a light wave and I want to measure its amplitude, or check to see if it has an amplitude of a certain value: how would one go about doing this?




electrostatics - Why is the radial direction the preferred one in spherical symmetry?



I am learning about electricity and magnetism by watching MIT video lectures.


In the lecture about Gauss's law, while calculating the flux through a sphere with a charge in it, the lecturer states that the direction of the electric field is radial, since it is the only preferred direction there is (because the problem has spherical symmetry).


But why is this the only preferred direction?


I think that there is also the direction perpendicular to the sphere, but since this is actually a two dimensional subspace of $\mathbb{R}^{3}$ I think that I can rule this out.


Is my conclusion correct, that since I cannot single out any direction (corresponding to a one-dimensional subspace of $\mathbb{R}^{3}$) in any other way, the radial direction is indeed the only preferred one?



Answer



You are basically right - I'll just fill in some pedagogical details.


One way to see that there can't be any other direction without worrying about missing possibilities is to suppose for the purpose of contradiction that flux is pointed non-radially. Then use the following definition of spherical symmetry (remember, "spherical symmetry" isn't just a colloquialism - it has a precise meaning):



Any rotation of the system that keeps the center fixed will leave all physically observable quantities unchanged.




Pick a rotation about the axis passing through the center and through the point where you are interested in the flux. Any nonradial component of flux will rotate around this axis, and so you know such a component must be 0.


An analogy would be standing on the Earth's surface and firing a laser into the air. Unless the laser points radially away from (or directly into) the center of the Earth, you could rotate the Earth about the axis passing through your body and the laser would point to a new location.


forces - Why doesn't an object at terminal velocity stop?



At terminal velocity, the force due to weight is equal to the force due to air resistance, so why doesn't the object stop? Of course I know it won't stop, but I want an explanation in terms of physics. As I understand it, when two forces in opposite directions are equal, there is no movement. Thanks



Answer



When two forces in opposite directions are equal, there is no acceleration. That doesn't mean that there is no movement. It simply means that there is no change in velocity.
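A toy integration of $m\dot v = mg - kv^2$ (the constants are illustrative, not from the question) makes the point concrete: the velocity settles at the terminal value and stays there, rather than dropping to zero:

```python
import math

# m dv/dt = m*g - k*v^2 : a falling body with quadratic air resistance.
m, g, k = 1.0, 9.81, 0.1       # arbitrary illustrative values
v_term = math.sqrt(m * g / k)  # speed at which the two forces balance

v, dt = 0.0, 0.001
for _ in range(20_000):        # integrate 20 s with forward Euler
    v += (g - k * v * v / m) * dt

print(v, v_term)  # v has converged to v_term; acceleration, not velocity, is zero
```

Once $v = v_{\text{term}}$, the net force term $(g - kv^2/m)$ is zero, so each step adds nothing: constant velocity, zero acceleration.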


Working with indices of tensors in special relativity


I'm trying to understand tensor notation and working with indices in special relativity. I use a book for this purpose in which $\eta_{\mu\nu}=\eta^{\mu\nu}$ is used for the metric tensor and a vector is transformed according to the rule $$x'^\mu= \Lambda^\mu{}_{\alpha}x^\alpha$$ (Lorentz-transformation).


I think I understand what is going on up to this point but now, I'm struggling to understand how the following formula works:


$$\eta_{\nu\mu}\Lambda^{\mu}{}_{\alpha}\eta^{\alpha\kappa} ~=~ \Lambda_{\nu}{}^{\kappa}$$


Why is this not equal to (for instance) $\Lambda^{\kappa}{}_{\nu}$? In addition, I have trouble understanding what the difference is between $\Lambda_\alpha^{\ \ \beta}$, $\Lambda_{\ \ \alpha}^\beta$, $\Lambda^\alpha_{\ \ \beta}$ and $\Lambda^{\ \ \alpha}_\beta$ (order and position of indices). And if we write tensors as matrices, which indices stand for the rows and which ones stand for the columns?


I hope someone can clarify this to me.



Answer



With the tensor indices notation, each "slot" is distinct and can be raised and lowered separately. So $\eta^{\kappa\alpha}\Lambda^\mu_{\ \ \alpha} = \Lambda^{\mu\kappa}$. Then $\Lambda^{\mu\kappa}\eta_{\mu\nu} = \Lambda_\nu^{\ \ \kappa}$.



When representing these objects as matrices, the usual convention is the first index is the row and the second is the column.


Be careful to stick with the convention when converting a tensor notation equation into linear algebra with its matrix representation. If we had an equation like $A_{ij} = C^{k}{}_{j}B_{ik}$, we could represent it with matrices $\bf{A}= \bf{B}\bf{C}$. Notice the swap in order to make the matrix multiplication correctly represent the equation (if you look at the four indices of the multiplied tensor components, it should look like the inner two are repeated ... in this case $B_{ik}C^{k}{}_{j}$).
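The index gymnastics above can be checked numerically for a boost along $x$, using the $(+,-,-,-)$ metric (the conventions and the numbers are assumptions for the example, not from the book):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric; eta_{mu nu} = eta^{mu nu}
g, b = 1.25, 0.6                        # gamma and beta for a boost at v = 0.6c
L_ud = np.array([[g, -g * b, 0, 0],     # Lambda^mu_alpha: row = upper, col = lower
                 [-g * b, g, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1]])

# Lower the first slot, raise the second:
# Lambda_nu^kappa = eta_{nu mu} Lambda^mu_alpha eta^{alpha kappa}
L_du = eta @ L_ud @ eta

# Lambda_nu^kappa is the inverse-transpose of Lambda^mu_alpha,
# which is why it differs from Lambda^kappa_nu in general:
print(np.allclose(L_du.T @ L_ud, np.eye(4)))  # True
```

Here each matrix product implements one metric contraction, and the final check is the defining Lorentz-group property $\Lambda_\nu{}^\kappa \Lambda^\mu{}_\kappa = \delta^\mu_\nu$.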


fluid dynamics - Why can't helicopters reach Mount Everest?


Is there a reason why people can't just take a helicopter to Mount Everest? Why can't helicopters reach that high?





visible light - Why is air invisible?



I think that something is invisible if its isolated particles are smaller than the wavelength of visible light. Is this correct?


Why is air invisible? What about other gases and fumes which are visible?



Answer



I think the pithy answer is that our eyes adapted to see the subset of the electromagnetic spectrum where air has no absorption peaks. If we saw in different frequency ranges, then air would scatter the light we saw, and our eyes would be less useful.


Wednesday 27 December 2017

electromagnetism - Showing that Coulomb and Lorenz Gauges are indeed valid Gauge Transformations?


I'm working my way through Griffith's Introduction to Electrodynamics. In Ch. 10, gauge transformations are introduced. The author shows that, given any magnetic potential $\textbf{A}_0$ and electric potentials $V_0$, we can create a new set of equivalent magnetic and electric potentials given by:


$$ \textbf{A} = \textbf{A}_0 + \nabla\lambda \\ V = V_0 - \frac{\partial \lambda}{\partial t}. $$


These transformations are defined as a "gauge transformation". The author then introduces two of these transformations, the Coulomb and Lorenz gauge, defined respectively as:


$$ \nabla \cdot \textbf{A} = 0 \\ \nabla \cdot \textbf{A}= -\mu_0\epsilon_0\frac{\partial V}{\partial t}. $$



This is where I am confused. I do not understand how picking the divergence of $\textbf{A}$ to be either of these two values actually constitutes a gauge transformation, i.e., how it meets the conditions of the top two equations. How do we know that such a $\lambda$ even exists that sets the divergence of $\textbf{A}$ to either of these values? Can someone convince me that such a function exists for either transformation, or somehow show me that these transformations are indeed "gauge transformations" as defined above?



Answer



Comment to the question (v1): It seems OP is conflating, on one hand, a gauge transformation


$$ \tilde{A}_{\mu} ~=~ A_{\mu} + \partial_{\mu}\Lambda $$


with, on the other hand, a gauge-fixing condition, i.e. choosing a gauge, such as, e.g., the Lorenz gauge, Coulomb gauge, axial gauge, or temporal gauge.


A gauge transformation can e.g. go between two gauge-fixing conditions. More generally, gauge transformations run along gauge orbits. Ideally a gauge-fixing condition intersects all gauge orbits exactly once.


Mathematically, depending on the topology of spacetime, it is often a non-trivial issue whether such a gauge-fixing condition is globally well-defined and uniquely specifies the gauge-field, cf. e.g. the Gribov problem. Existence and uniqueness of solutions to gauge-fixing conditions is the topic of several Phys.SE posts, see e.g. this and this Phys.SE posts.


quantum field theory - Why fermions have a first order (Dirac) equation and bosons a second order one?


Is there a deep reason for a fermion to have a first-order equation in the derivative while bosons have a second-order one? Does this imply deep theoretical differences (like phase-space dimension, etc.)?


I understand that for a fermion, with half-integer spin, you can form another Lorentz invariant using the gamma matrices: $\gamma^\nu\partial_\nu$, which is a kind of square root of the d'Alembertian $\partial^\nu\partial_\nu$. Why can't we do the same for a boson?
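The "square root" property rests on the Clifford algebra relation $\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}$, which makes $(\gamma^\mu\partial_\mu)^2=\partial^\mu\partial_\mu$. This relation can be verified numerically in the Dirac representation (a sketch; the representation and metric signature are conventions I am assuming):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    return np.block([[a, b], [c, d]])

# Dirac-representation gamma matrices for the (+,-,-,-) metric
gamma = [block(I2, 0 * I2, 0 * I2, -I2)]
gamma += [block(0 * I2, s, -s, 0 * I2) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Check the Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} * Identity
ok = all(
    np.allclose(gamma[m] @ gamma[n] + gamma[n] @ gamma[m],
                2 * eta[m, n] * np.eye(4))
    for m in range(4) for n in range(4)
)
print(ok)  # True
```

Because the anticommutator collapses to the metric, squaring $\gamma^\mu\partial_\mu$ reproduces $\partial^\mu\partial_\mu$ term by term, with all the cross terms cancelling.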


Finally, how is this treated in a supersymmetric theory? Do a particle and its superpartner obey equations of the same order or not?




Loop-Quantum Gravity versus String Theory



Basically, I am asking what the motives behind each theory were. What led physicists toward these ideas?




newtonian mechanics - Is it correct to use Bernoulli's theorem to explain that a disk floats in the air? Should the correct explanation be based on Boyle's law?


In this video, a device is introduced at 1 minute 18 seconds. When air is blown in, the disc at the bottom of the device is suspended in the air. The video explains that this is because of the Bernoulli effect, but I think this explanation is incorrect, because in the gap above the disc the volume available to the radially outflowing air increases with radius. Therefore, the reason the disc is suspended is that the air volume becomes larger, so the pressure decreases (according to Boyle's law). If the air moves radially outward at a constant speed, the volume it occupies must keep increasing, right? So the pressure is decreasing, right?



The disc is suspended in the air




quantum mechanics - Difference for boundary condition, particle in a box


When solving the simple problem of a free particle in a box of volume $V = L^3$, we can impose either periodic boundary conditions, $\psi(0) = \psi(L)$ and $\psi '(0)= \psi'(L)$, or strict boundary conditions, $\psi(0)=0$ and $\psi(L)=0$.


Now, when looking for the eigenstates of the Hamiltonian, one can easily solve the problem with the separation of variables method, and impose the boundary conditions.



For the case of periodic boundaries, one finds that the allowed values in each direction $(x,y,z)$ of space are $\lambda_k = 2\pi k/L$, while for strict boundaries one finds $\lambda_k = \pi k/L$.


Now, in every textbook I could find, they say that in the first case we have that $k\in \mathbb{Z}$ and in the second $k \in \mathbb{N}$.


For the case $k = 0$ I can understand the difference, but I do not understand why negative $k$ must be discarded in the case of strict boundary conditions.


Edit: (Sorry for bumping.) I understand why the solutions with periodic boundary conditions differ "physically" from those with strict boundary conditions, and your answer is perfectly sensible in that respect.


However, surely there must be a way of showing (mathematically, I mean) that in the case of strict conditions, choosing $k \in \mathbb{Z}$ yields "duplicate" solutions? Could someone offer some insight into this proof?
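The claimed duplication can be checked directly: for hard-wall modes $\psi_k(x) = \sin(k\pi x/L)$, the $-k$ solution is the $+k$ solution times $-1$ (the same physical state), whereas for periodic modes $e^{2\pi i k x/L}$ the $-k$ function is not a constant multiple of the $+k$ one. A small sketch:

```python
import cmath
import math

L = 1.0
xs = [0.1 * i for i in range(11)]  # sample points in [0, L]

def box_mode(k, x):
    """Hard-wall eigenfunction sin(k pi x / L)."""
    return math.sin(k * math.pi * x / L)

def ring_mode(k, x):
    """Periodic eigenfunction exp(2 pi i k x / L)."""
    return cmath.exp(2j * math.pi * k * x / L)

# sin(-k pi x/L) = -sin(k pi x/L): same ray in Hilbert space, a duplicate state.
dup = all(abs(box_mode(-2, x) + box_mode(2, x)) < 1e-12 for x in xs)

# exp(-2 pi i x/L) / exp(+2 pi i x/L) is x-dependent, so the two functions
# are NOT proportional by a single constant: genuinely distinct states.
ratios = {round((ring_mode(-1, x) / ring_mode(1, x)).real, 6) for x in xs if x > 0}
print(dup, len(ratios) > 1)  # True True
```

An overall factor of $-1$ is a global phase, so $k$ and $-k$ label the same hard-wall state; for the ring, $k$ and $-k$ are orthogonal momentum eigenstates.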




special relativity - Symmetric Time Dilation in Uniform Relative Motion


I feel (and hope) this is an easily answerable question among physicists versed in GR. I promise that I searched for other answers on the forum. Here goes:


Observer 1 starts at a distance X from observer 2, moves at 99% of the speed of light toward observer 2, and then stops to interact with observer 2.


My questions are:




  1. Since each observer sees the other as moving slower/faster due to the interchangeability of reference frames, do the slower/faster speeds cancel one another out?





  2. If so, does time dilation really matter for observers?






cosmology - Is observable universe an explanation against Olbers' paradox?


First of all, let me tell you that I'm not a physicist but rather a computer scientist with a mere interest in physics at nowhere near a professional level so feel free to close this question if it doesn't make any sense.



I remember a physicist friend mentioning to me an argument about the finiteness of the universe. I looked it up, and it turned out to be Olbers' paradox.


We computer scientists like to use astronomical numbers to help us imagine the complexity of an algorithm. One of the most common ones is the number of atoms in the observable universe (which we take as $10^{80}$), so I have a crude understanding of the observable-universe concept.


I had known these two things for some time, hence I woke up with this dilemma today. So my question is: how can it be argued that the universe is finite just because the sky is dark, if we know that we can only observe a finite portion of it? Can't it be the case that the universe is infinite even though the sky is dark, because not all the light from all the stars reaches the Earth?


I have searched this a little bit but I think I need an explanation in simpler terms (like popular physics). A historical perspective would also be welcomed.



Answer




So my question is, how come it can be argued that universe is finite just because it is dark if we know that we can only observe a finite portion of it?



You are mixing two theories here. Olbers' paradox takes as its baseline a universe that is static and infinite in space and time. The dark night sky means that either the universe is not static, or it has a beginning, or it has a finite extent in space. Or all three.




Can't it be the case that the universe is infinite even if the sky is dark because not all the light from all the stars reach the earth?



A different model than a universe static and infinite in space and time is needed in this case: an infinite universe that appeared at a time $t=0$, for example, so that the light of distant stars would not have reached us yet. But there are more data than the dark night sky to be fit by a cosmological model, and the available data fit the Big Bang model quite well:



the Big Bang occurred approximately 13.75 billion years ago, which is thus considered the age of the Universe. After its initial expansion from a singularity, the Universe cooled sufficiently to allow energy to be converted into various subatomic particles, including protons, neutrons, and electrons.



soft question - How to calculate the highest theoretical artificial hill?


The biggest peak in the world is Mount Everest.


Imagine someone starting to make an artificial hill (like a pyramid) from soil (earth).


So, starting with a 200x200 km base area and a 45-degree slope, its mathematical height is 100 km (low-orbit altitude). Is it possible to make such an artificial peak? (Without taking financial issues and so on into account.)


If not, why not? What would happen? Is there a height limit?



Answer




Your question is a classic college exercise. The limit is set by the melting of the base of the artificial mountain under the pressure, which is linked to the energy of chemical bonds. You have an example of such a calculation here. You can also work it out for an arbitrary planet size: the smaller the planet, the smaller the gravity, and the bigger the biggest mountain can be. And when the mountain can be as big as the radius of the planet, you have roughly the dwarf-planet/asteroid boundary.
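A back-of-the-envelope version of that limit: the base fails when the overburden pressure $\rho g h$ reaches the rock's strength, so $h_{\max} \sim S/(\rho g)$. The strength and density below are rough textbook-scale assumptions, not values from the answer:

```python
# Crude height limit: the base rock yields when the overburden pressure
# rho * g * h reaches the rock's strength S, so h_max ~ S / (rho * g).
S = 2.5e8     # Pa, rough compressive strength of granite (assumed)
rho = 2700.0  # kg/m^3, typical crustal rock density (assumed)
g = 9.81      # m/s^2

h_max = S / (rho * g)
print(f"h_max ~ {h_max / 1000:.0f} km")  # ~9 km: Everest-scale, far below 100 km
```

Even this crude estimate lands at roughly the height of Everest, an order of magnitude short of the proposed 100 km pyramid.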


Tuesday 26 December 2017

newtonian mechanics - Rigid body collision, 3 circles in contact


I'm working on a 2D physics simulation. It's a continuous-time simulation; that is, it uses swept shapes over the time frame and geometric/vector analysis to determine the most immediate time of contact. The world is then stepped forward to that point, and collisions are resolved before stepping the world forward to the next time of collision.


I do not use relaxation techniques or any kind of intersection resolution. The objects are moved to contact states and contact events are resolved as collisions.


I have tried a CONTACT_RANGE of 0.001f down to 0.00001f. This value determines whether objects are in contact. Lowering it leads to fewer overlap states, but presumably at some point lowering it further will make the simulation miss contact events, because inaccuracies will step systems of bodies into the discrete or overlap state.


Now, to the problem: the simulation seems to compute the outcome of collisions between two bodies (circles, in the case I'm looking at) correctly. But I have not correctly dealt with three or more bodies being in contact.


In the past, I did not deal with momentum / KE correctly, but I did constrain velocities such that multi-body systems would not intersect in the next sub-frame. But now that I have a correct solution to 2 body collision, I want to do it properly for more bodies.


I'm not sure how to proceed. I have read an interesting suggestion that, in reality, no collisions are simultaneous, and they can be dealt with one after the other. Is this true?



Could I simply compute the resultant velocity of a body from one contact, then do the same with the next contact etc?


And would that lead to the same solution as though I computed velocity in one step, from all contacts simultaneously?


Or do I need a single function for solving the velocity from multiple contacts?


Any guidance is appreciated.


Gavin



Answer



As the suggestion you read proposed, you can approximate physical reality with "no collisions are simultaneous". The reason is that the physical world is full of indeterminacies (a.k.a. errors) due to thermal fluctuations and many other sources. This means that, even though the strict mathematical solutions found by assuming simultaneous collisions versus assuming random ordering (treating all collisions your algorithm thinks are simultaneous as happening in random order) could differ, the "physical" behavior of the system will not change in any meaningful way. No human could tell the difference between the two. In conclusion, you are safe in treating the collisions as non-simultaneous, and the consequences will be undetectable to the human eye (or to any real physicist's real-world measurements).
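Resolving contacts one pair at a time can be sketched as repeated application of the standard impulse formula along the contact normal. The function below is an illustrative sketch (names and structure are mine, not from the questioner's simulation), restricted to frictionless circles:

```python
import math

def resolve_pair(p1, v1, m1, p2, v2, m2, restitution=1.0):
    """Impulse-based response for two circles along their contact normal."""
    nx, ny = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(nx, ny)
    nx, ny = nx / d, ny / d                            # unit contact normal
    rel = (v2[0] - v1[0]) * nx + (v2[1] - v1[1]) * ny  # normal closing speed
    if rel >= 0:
        return v1, v2                                  # separating: no impulse
    j = -(1 + restitution) * rel / (1 / m1 + 1 / m2)   # scalar impulse magnitude
    return ((v1[0] - j * nx / m1, v1[1] - j * ny / m1),
            (v2[0] + j * nx / m2, v2[1] + j * ny / m2))

# Head-on equal-mass elastic collision: the velocities swap.
v1, v2 = resolve_pair((0, 0), (1, 0), 1.0, (1, 0), (-1, 0), 1.0)
print(v1, v2)
```

For three-or-more-body contacts, one would call this for each touching pair in some order and iterate until no pair has a negative closing speed; the `rel >= 0` early-out is what keeps already-separating pairs from being re-hit on later passes.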


nuclear physics - Island of Stability


When I was much younger, I remember being fascinated by the idea of an Island of Stability at very high atomic numbers. However, I have not heard much about this since, and I was wondering:





  • Did this idea ever have any meaningful support in the scientific community, or was it a fringe theory?




  • Is there any validity to this considering how much more we actually know about atomic construction?





Answer



I don't have the rep to post this as a comment, so I'll give it as an answer. I should say that I am not necessarily in this field; I am an astrophysicist.


I suspect that the idea of the Island of Stability is just a continuation of the early shell model for nuclei. This is basically a statement that nuclei with closed or filled energy shells are the most stable; this is analogous to the idea for atoms where the noble gases have filled electron orbitals, and don't interact as strongly as those atoms with unfilled orbitals.



The case of nuclei is more complicated because you have to consider the "orbitals" of two species: protons and neutrons. Furthermore, protons are charged while neutrons are neutral, which gives the two species slightly different behavior with regard to their interpretation in terms of shell structure. Nevertheless, there are so-called "magic numbers" of protons and neutrons that give rise to the stable nuclei in the shell model. If you look at the valley of stability, you will see that # protons = # neutrons up to about atomic number 20, and then the valley turns over, with stable nuclei having more neutrons than protons. There are several reasons for this turnover, some of which are related to the Coulomb repulsion between protons. All of this is likely a deviation from the perfect shell structure.


If one continues the shell idea past the known valley of stability, perhaps taking into account the fact that protons are charged, then there is a possibility of an island of stability, whereupon the next largest "stable shell" is filled. I think the fact that the simple shell model breaks down even at lower atomic number is an indication that it shouldn't be used at the high atomic numbers implied in the island of stability.


I don't know of any group currently working on this region of the chart of nuclides. The closest thing I can think of is the upcoming Facility for Rare Isotope Beams (FRIB) project that will likely investigate very neutron rich material. The "Rare" in FRIB effectively means unstable, but they may be able to probe around in the region of the expected island of stability.


newtonian gravity - How does Earth carry Moon with it, if it can not force Moon to touch it by gravitational force?



Earth's gravitational force acts on the Moon in such a way that it keeps the Moon on its orbit (supplying the centripetal force) and carries the Moon along as the Earth orbits the Sun. I don't understand why, in this situation, the Moon doesn't fall onto the Earth while the Earth is carrying it around the Sun by gravitational force.



Answer



I think I might understand another facet of your question besides what is addressed in the comments. Let me demonstrate a result in classical mechanics which I think might alleviate your concern.


The result is that


Given a system of particles, the center of mass of the system moves as though it were a point mass acted on by the net external force on the system.


So if you think of the Earth-Moon system as being acted on by a net external force which is simply the gravitational attraction to the Sun (to good approximation), then what's happening is that this entire system is orbiting (essentially freely falling) around the sun. The details of what's happening in the Earth-Moon system itself are described by the first link in the original comments, but for purposes of what's happening to the entire system consisting of the Earth+Moon when it orbits the Sun, the details of the internal interactions don't really matter.


Here is a proof of the statement above:


Consider a system of particles with masses $m_i$ and positions $\mathbf x_i$ as viewed in an inertial frame. Newton's second law tells us that the net force $\mathbf F_i$ on each particle is equal to its mass times its acceleration; $$ \mathbf F_i = m_i \mathbf a_i, \qquad \mathbf a_i = \ddot{\mathbf x}_i $$ Let $\mathbf f_{ij}$ denote the force of particle $j$ on particle $i$, and let us break up the force $\mathbf F_i$ on each particle into the sum of the force $\mathbf F^e_i$ due to interactions external to the system and the net force $\sum_j \mathbf f_{ij}$ due to interactions with all other particles in the system; $$ \mathbf F_i = \mathbf F_i^e + \sum_j \mathbf f_{ij} $$ Combining these two facts, we find that $$ \sum_i m_i\mathbf a_i = \sum_i \mathbf F_i^e + \sum_{ij} \mathbf f_{ij} $$ The last term vanishes by Newton's third law $\mathbf f_{ij} = -\mathbf f_{ji}$. The term on the left of the equality is just $M\ddot {\mathbf R}$ where $M$ is the total mass and $\mathbf R$ is the position of the center of mass of the system. Combining these facts gives $$ M\ddot{\mathbf R} = \sum_i \mathbf F_i^e $$
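This result is easy to check numerically. Below is a minimal sketch (masses, spring constant, and initial conditions are arbitrary illustrative choices): two particles interact through an internal spring force obeying Newton's third law, plus a uniform external force, and their center of mass follows the point-mass trajectory.

```python
import numpy as np

# Two particles coupled by an internal spring force (a Newton's-third-law pair),
# plus a uniform external force g acting on both. All values are illustrative.
m1, m2 = 1.0, 3.0
g = np.array([0.0, -9.8])            # external force per unit mass
k = 50.0                             # internal spring constant
x1, x2 = np.array([0.0, 0.0]), np.array([1.5, 0.0])
v1, v2 = np.array([0.0, 2.0]), np.array([0.0, -1.0])

M = m1 + m2
R0 = (m1 * x1 + m2 * x2) / M         # initial center of mass
V0 = (m1 * v1 + m2 * v2) / M

dt, steps = 1e-4, 20000
for _ in range(steps):
    f_int = -k * (x1 - x2)           # force of particle 2 on particle 1
    a1 = g + f_int / m1
    a2 = g - f_int / m2              # reaction: equal and opposite
    v1 += a1 * dt; v2 += a2 * dt
    x1 += v1 * dt; x2 += v2 * dt

t = dt * steps
R_numeric = (m1 * x1 + m2 * x2) / M
R_theory = R0 + V0 * t + 0.5 * g * t * t   # point mass under the net external force
print(np.max(np.abs(R_numeric - R_theory)))  # ~0, up to integration error
```

The internal forces cancel in the center-of-mass equation at every step, so the agreement holds regardless of how violently the two particles oscillate about each other.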


newtonian mechanics - Rotation of a vector


Is a vector necessarily changed when it is rotated through an angle?


I think a vector always gets changed, because its projections will change, and its inclination with the axes will also change. However, the direction may remain the same. Kindly make things clear to me.



Answer



Rotation of a 3-vector


[Figure: rotation of a vector $\mathbf{r}$ about an axis $\mathbf{n}$ through an angle $\theta$]


We'll find an expression for the rotation of a vector $\mathbf{r}=(x_1,x_2,x_3)$ around an axis with unit vector $\mathbf{n}=(n_1,n_2,n_3)$ through an angle $\theta$, as shown in the figure above.


The vector $\mathbf{r}$ is analysed in two components \begin{equation} \mathbf{r}=\mathbf{r}_\|+\mathbf{r}_\bot \tag{01} \end{equation} one parallel and the other normal to axis $\mathbf{n}$ respectively \begin{eqnarray} &\mathbf{r}_\| &=(\mathbf{n}\boldsymbol{\cdot}\mathbf{r})\mathbf{n} \tag{02a}\\ &\mathbf{r}_\bot &=(\mathbf{n}\times\mathbf{r})\times \mathbf{n}= \mathbf{r}-(\mathbf{n}\boldsymbol{\cdot}\mathbf{r})\mathbf{n} \tag{02b} \end{eqnarray} If $\mathbf{r}$ is rotated to $\mathbf{r}^{\prime}$ \begin{equation} \mathbf{r}^{\prime}=\mathbf{r}^{\prime}_\|+\mathbf{r}^{\prime}_\bot \tag{03} \end{equation} then the parallel component remains unchanged \begin{equation} \mathbf{r}^{\prime}_\|=\mathbf{r}_\| =(\mathbf{n}\boldsymbol{\cdot}\mathbf{r})\mathbf{n} \tag{04} \end{equation} while the normal component $\mathbf{r}_\bot =(\mathbf{n}\times\mathbf{r})\times \mathbf{n}$ is rotated by the angle $\theta$, so having in mind that this vector is perpendicular to $\mathbf{n}\times\mathbf{r}$ and of equal norm \begin{equation} \left\|(\mathbf{n}\times\mathbf{r})\times \mathbf{n}\right\|=\left\|\mathbf{n}\times\mathbf{r}\right\| \tag{05} \end{equation} we find the expression, see Figure below \begin{eqnarray} \mathbf{r}^{\prime}_\bot &=& \cos\theta\left[(\mathbf{n}\times\mathbf{r})\times \mathbf{n}\right]+\sin\theta\left[\mathbf{n}\times\mathbf{r}\right]\nonumber\\ &=& \cos\theta\left[\mathbf{r}-(\mathbf{n}\boldsymbol{\cdot}\mathbf{r})\mathbf{n}\right]+\sin\theta\left[\mathbf{n}\times\mathbf{r}\right]\nonumber\\ &=& \cos\theta\;\mathbf{r}-\cos\theta(\mathbf{n}\boldsymbol{\cdot}\mathbf{r})\mathbf{n}+\sin\theta\left[\mathbf{n}\times\mathbf{r}\right] \tag{06} \end{eqnarray}



and so finally the vector expression


\begin{equation} \bbox[#FFFF88,12px]{\mathbf{r}^{\prime}= \cos\theta \cdot\mathbf{r}+(1-\cos\theta)\cdot(\mathbf{n}\boldsymbol{\cdot}\mathbf{r})\cdot\mathbf{n}+\sin\theta\cdot(\mathbf{n}\times\mathbf{r})} \tag{07} \end{equation}


From this the $3\times3$ rotation matrix reads \begin{equation} \mathbb{A}\left(\mathbf{n}, \theta\right) = \text { 3D-rotation around axis} \:\:\mathbf{n}=\left(n_{1}, n_{2},n_{3}\right)\:\: \text{through angle} \:\:\theta \end{equation} \begin{equation} = \bbox[#FFFF88,12px]{ \begin{bmatrix} \cos\theta+(1-\cos\theta)n_1^2&(1-\cos\theta)n_1n_2-\sin\theta n_3&(1-\cos\theta)n_1n_3+\sin\theta n_2\\ (1-\cos\theta)n_2n_1+\sin\theta n_3&\cos\theta+(1-\cos\theta)n_2^2&(1-\cos\theta)n_2n_3-\sin\theta n_1\\ (1-\cos\theta)n_3n_1-\sin\theta n_2&(1-\cos\theta)n_3n_2+\sin\theta n_1&\cos\theta+(1-\cos\theta)n_3^2 \end{bmatrix}} \tag{08} \end{equation}


[Figure: construction of the rotated normal component $\mathbf{r}^{\prime}_\bot$ in the plane normal to $\mathbf{n}$]
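As a quick numerical sanity check of the vector formula (07) against the matrix (08) (the axis, angle, and vector values below are arbitrary):

```python
import numpy as np

# Check Eq. (07) against the explicit rotation matrix of Eq. (08).
n = np.array([1.0, 2.0, 2.0]); n /= np.linalg.norm(n)   # unit axis
theta = 0.7
r = np.array([0.3, -1.2, 2.5])

c, s = np.cos(theta), np.sin(theta)

# Vector form (07): r' = cosθ r + (1 − cosθ)(n·r) n + sinθ (n × r)
r_vec = c * r + (1 - c) * np.dot(n, r) * n + s * np.cross(n, r)

# Matrix form (08)
n1, n2, n3 = n
A = np.array([
    [c + (1-c)*n1*n1,    (1-c)*n1*n2 - s*n3, (1-c)*n1*n3 + s*n2],
    [(1-c)*n2*n1 + s*n3, c + (1-c)*n2*n2,    (1-c)*n2*n3 - s*n1],
    [(1-c)*n3*n1 - s*n2, (1-c)*n3*n2 + s*n1, c + (1-c)*n3*n3],
])
r_mat = A @ r

print(np.allclose(r_vec, r_mat))                              # True
print(np.isclose(np.linalg.norm(r_vec), np.linalg.norm(r)))   # True: rotations preserve length
```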


quantum mechanics - Is $|1\rangle$ an abuse of notation?


In introductory quantum mechanics it is always said that $|\,\cdot\,\rangle$ is nothing but a notation. For example, we can denote the state $\vec \psi$ as $|\psi \rangle$. In other words, the little arrow has transformed into a ket.


But when you look up material online, it seems that the usage of the bra-ket is much more free. Example of this usage: http://physics.gu.se/~klavs/FYP310/braket.pdf pg 17




A harmonic oscillator with precisely three quanta of vibrations is described as $|3\rangle$, where it is understood that in this case we are looking at a harmonic oscillator with some given frequency $\omega$, say.

Because the state is specified with respect to the energy, we can easily find the energy by application of the Hamiltonian operator on this state: $H|3\rangle = (3 + 1/2)\,\hbar\omega\,|3\rangle$ (with $\hbar = h/2\pi$).



What is the meaning of 3 in this case? Is 3 a vector? A scalar? If we treat the ket symbol as a vector, then $\vec 3$ is something that does not make sense.


Can someone clarify what it means for a scalar to be in a ket?



Answer




What is the meaning of 3 in this case?




In this case, the character "3" is a convenient, descriptive label for the state with three quanta present.


It is often the case that an eigenstate is labelled with its associated eigenvalue.


In the harmonic oscillator case, the number operator commutes with the energy operator (Hamiltonian) so a number eigenstate is also an energy eigenstate.


Thus, the state with three quanta present satisfies


$$\hat N |3\rangle = 3\,|3\rangle$$


But, it also satisfies


$$\hat H |3\rangle = (3 + \frac{1}{2})\hbar \omega\, |3\rangle = \frac{7}{2} \hbar \omega\,|3\rangle$$


So we would be justified in labelling this state as


$$|\frac{7}{2} \hbar \omega\rangle $$


though that's not typical.
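One can make the labelling concrete with a finite-dimensional truncation of the oscillator algebra (the truncation dimension and the choice $\hbar = \omega = 1$ are illustrative assumptions, not part of the answer above):

```python
import numpy as np

# Finite-dimensional truncation of the oscillator ladder algebra.
# Basis vector e_n represents the number state |n>; hbar and omega are set to 1.
d = 10
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>
N = a.conj().T @ a                           # number operator N = a†a
H = N + 0.5 * np.eye(d)                      # H = (N + 1/2) ħω with ħω = 1

ket3 = np.zeros(d); ket3[3] = 1.0            # the state labelled |3>
print(N @ ket3)   # 3 * ket3: eigenvalue 3 of the number operator
print(H @ ket3)   # 3.5 * ket3: eigenvalue 7/2 of the Hamiltonian
```

The same column vector is an eigenvector of both operators, which is exactly why either eigenvalue could serve as the label.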



general relativity - Is there a simply classification of maximally symmetric spaces?


By maximally symmetric space I mean a (pseudo-) Riemannian manifold of dimension $n$ that has $n(n+1)/2$ linearly independent Killing vector fields. I seem to remember that there are only three kinds, one of them being Minkowski space, and another being de Sitter space. And the third probably being the sphere. But I'm not quite sure this is true in any dimension. Can someone shed some light on this issue? I would also very much appreciate references.


EDIT: Although the question above might have suggested it (because I wasn't thinking straight), I do not mean to focus on Lorentzian manifolds only. Indeed, as mentioned in the comments, the sphere I mention above is Riemannian, while the other two manifolds mentioned are Lorentzian, so that did not make very much sense on my part, because clearly there are more geometries than three (in at least 2 dimensions), thinking of Euclidean space.




electromagnetism - How do electrons know which path to take in a circuit?


The current is maximum through those segments of a circuit that offer the least resistance. But how do electrons know beforehand which path will resist their drift the least?



Answer



This is really the same as Adam's answer but phrased differently.


Suppose you have a single wire and you connect it to a battery. Electrons start to flow, but as they do so the resistance to their flow (i.e. the resistance of the wire) generates a potential difference. The electron flow rate, i.e. the current, builds up until the potential difference is equal to the battery voltage, and at that point the current becomes constant. All this happens at about the speed of light.


Now take your example of having, let's say, two wires (A and B) with different resistances connected in parallel between the battery terminals - let's say $R_A \gt R_B$. The first few electrons to flow will be randomly distributed between the two wires, A and B, but because wire A has a greater resistance the potential difference along it will build up faster. The electrons feel this potential difference, so fewer electrons will flow through A and more electrons will flow through wire B. In turn the potential along wire B will build up, and eventually the potential difference along both wires will be equal to the battery voltage. As above, this happens extremely rapidly.


So the electrons don't know in advance what path has the least resistance, and indeed the first few electrons to flow will choose random paths. However once the current has stabilised electron flow is restricted by the electron flowing ahead, and these are restricted by the resistance of the paths.


To make an analogy, imagine there are two doors leading out of a theatre, one small door and one big door. The first person to leave after the show will pick a door at random, but as the queues build up more people will pick the larger door because the queue moves faster.
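The build-up to the steady-state split can be illustrated with a toy transient model: two parallel branches, each given the same small inductance, across an ideal battery. All component values below are invented for illustration.

```python
import numpy as np

# Toy transient model of two parallel branches across an ideal battery.
# Each branch has the same small inductance ell and a resistance R_A > R_B.
V, ell = 10.0, 1e-3
R_A, R_B = 4.0, 1.0
iA = iB = 0.0
dt = 1e-6
for _ in range(20000):
    iA += dt * (V - R_A * iA) / ell    # di/dt = (V - R i) / ell
    iB += dt * (V - R_B * iB) / ell

# Both branches start at zero current with the same initial di/dt = V/ell,
# but in steady state the currents are set by the resistances: i = V / R.
print(iA, iB)   # ≈ 2.5 and ≈ 10.0
```

The initial growth rate is the same for both branches (the "random first electrons"), and the inverse-resistance current ratio only emerges as the voltages along the branches build up.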



Monday 25 December 2017

quantum mechanics - What periodic functions of the angle operator are Hermitian?



Let $\hat{\theta}$ be one of the position operators in cylindrical coordinates $(r,\theta,z)$. Then my question is, for what periodic functions $f$ (with period $2\pi$) is $f(\hat{\theta})$ a Hermitian operator? In case it's not clear, $f(\hat{\theta})$ is defined by a Taylor series.


The reason I ask is because of my answer here dealing with an Ehrenfest theorem relating angular displacement and angular momentum. This journal paper says that the two simplest functions which satisfy this condition are sine and cosine. First of all I'm not sure how to prove that they satisfy the condition, and second of all I want to see what other functions satisfy it.


What makes this tricky is that the adjoint of an infinite sum is not equal to the infinite sum of the adjoints.




Sunday 24 December 2017

newtonian mechanics - Why does Newton's third law exist even in non-inertial reference frames?


While reviewing Newton's laws of motion I came across the statement that Newton's laws hold only in inertial reference frames - except for the third one. Why is that?




spacetime dimensions - String theory: why not use $n$-dimensional blocks/objects/branes?


I have a basic question: if we use a 1d string to replace the 0d particle to gain insight into nature in string theory, and have advanced to 2d membranes, can we imagine using $3$- or $n$-dimensional blocks/objects/branes as the basic units of a physical theory? Where is the end of this expansion?




semiconductor physics - Does the Fermi level change under change of temperature, voltage or other conditions?


From a previous post with similar title, (What's the difference between Fermi Energy and Fermi Level?) I think it is safe to assume that




  1. In a block of material, Fermi energy is the level, up to which, electrons will fill all the available states, @ T = 0.





  2. In a block of material at room temperature, electrons will be excited and will recombine all the time. At the "Fermi level" there is a 50% chance that an electron can be found there.




My questions are now




  1. For the same block of material, does the Fermi level change when the block is subject to temperature changes, external voltage, etc.? I think it should.





  2. For the same block of material, should the Fermi energy remain the same, i.e. is it an intrinsic property of that material, just like mass?




  3. When you write down the Fermi function, in the exponential term $\frac{E-E_f}{k_BT}$, what is $E_f$? The Fermi level or the Fermi energy?




I am getting misleading information from the internet: http://hyperphysics.phy-astr.gsu.edu/hbase/solids/fermi.html
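For what it's worth, the Fermi function itself is easy to evaluate, and the 50% occupation at $E = E_f$ holds at any temperature (the Fermi-level value used below is an arbitrary example):

```python
import numpy as np

# Fermi-Dirac occupation f(E) = 1 / (exp((E - E_f)/(k_B T)) + 1).
# At E = E_f the occupation is exactly 1/2, at any temperature.
k_B = 8.617e-5           # Boltzmann constant in eV/K

def fermi(E, E_f, T):
    return 1.0 / (np.exp((E - E_f) / (k_B * T)) + 1.0)

E_f = 5.0                # illustrative Fermi level in eV
for T in (100.0, 300.0, 1000.0):
    print(T, fermi(E_f, E_f, T))          # always 0.5
print(fermi(E_f + 0.2, E_f, 300.0))       # 0.2 eV above E_f at 300 K: nearly empty
```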




Saturday 23 December 2017

fourier transform - Using Plancherel's theorem on delta function


Plancherel's Theorem states that for $f \in L^{2}(\mathbb{R})$ we have


$$f(x) = \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^{\infty}F(k)e^{ikx}dk \Longleftrightarrow F(k) = \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^{\infty}f(x)e^{-ikx}dx.$$



If we consider $f(x) := \delta(x)$ (the delta function) then using this theorem it follows simply that $$\delta(x) = \frac{1}{2 \pi}\int_{-\infty}^{\infty}e^{ikx}dk.$$


Clearly, then, the integral diverges for $x = 0$ and fails to converge for $x \neq 0$. Also, apparently $\delta(x)$ is not square integrable, hence I'm not sure that we can even use Plancherel's Theorem. But having said that, I understand that this result does hold. Is it incorrect to show that this result is true using Plancherel's Theorem?
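One numerical way to see in what sense the result "holds": the truncated integral is $\frac{1}{2\pi}\int_{-K}^{K}e^{ikx}dk = \frac{\sin Kx}{\pi x}$, which has no pointwise limit but acts like a delta under an integral against a smooth test function. The test function and grid below are arbitrary choices.

```python
import numpy as np

# (1/2π)∫_{-K}^{K} e^{ikx} dk = sin(Kx)/(πx): a "nascent" delta. Paired with a
# smooth test function it picks out f(0) as K → ∞.
def nascent_delta(x, K):
    # np.sinc(z) = sin(πz)/(πz), so this equals sin(Kx)/(πx), finite at x = 0
    return np.sinc(K * x / np.pi) * K / np.pi

f = lambda x: np.exp(-x**2)                     # smooth test function with f(0) = 1
x, dx = np.linspace(-60, 60, 2_000_001, retstep=True)
for K in (5.0, 50.0, 500.0):
    val = np.sum(nascent_delta(x, K) * f(x)) * dx   # Riemann sum on a uniform grid
    print(K, val)                                   # approaches f(0) = 1 as K grows
```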




quantum mechanics - Are there any true discontinuities in physics?


When we first learn physics, it's often presented very 'discontinuously'. For example, pop quantum likes to talk about objects being "either" particles or waves, leading to a lot of confused questions about how things switch between the two. Once you learn about wavefunctions, the problem goes away; 'particle' and 'wave' are just descriptions of two extreme kinds of wavefunctions.


In general, further learning 'fills in' the knowledge holes that discontinuities cover up:




  • Phase transitions in thermodynamics. These are only truly discontinuous in the $N \to \infty$ limit, which doesn't physically exist. For large but finite $N$, we can use statistical mechanics to get a perfectly continuous answer.

  • Measurement in quantum mechanics. 'Copenhagen collapse' is not instantaneous, it's the result of interaction with an external system, which occurs in continuous time.

  • Optical decays. Without QED, the best model is to just have atoms suddenly and randomly emit photons with some lifetime. With QED, we have a perfectly continuous time evolution (allowing for, e.g. Rabi oscillations).


At this point I'm having trouble thinking of any 'real' discontinuities. Are there any theories (that we believe to be fundamental) that predict a discontinuity in a physically observable quantity?




To address several comments: I am not looking for a discontinuity in time, as this is associated with infinite energy. I am not looking for experimental confirmation of a discontinuity in time, since that's impossible.


I am asking if there is any measurable parameter in any of our currently most fundamental theories which changes discontinuously as a function of another measurable parameter, according to the theory itself. For example, if phase transitions actually existed, then phase as a function of temperature or pressure would work.



Answer



Any process that causes a physical quantity to become truly discontinuous in space and/or time by definition takes place over an extremely (in fact, infinitely) short time or length scale. From the usual uncertainty principles of quantum mechanics, these processes would have huge energy or momentum, and would presumably result in both very strong quantum and gravitational effects. Since we don't have a good theory of quantum gravity, there's really very little we can say with confidence about such extreme regimes.



But even if we do one day come up with a perfectly well-defined and self-consistent theory that reconciles quantum field theory with general relativity and is completely continuous in every way, that still won't settle your question. Such a theory can never be proven to be "the final theory," because there will always be the possibility that new experimental data will require it to be generalized. The most likely place for this "new physics" would probably be at whatever energy scales are beyond our current experimental reach at the time. So we'll probably always be the least confident of the physics at the very smallest of length or time scales.


A similar line of thought holds for the possibility of absolute discontinuities in energy or momentum: ruling out, say, really tiny discontinuities in energy would require knowing the energy to extremely high precision. But by the energy-time uncertainty relation, establishing the energy to such high precision would require an extremely long time - and eventually the required time scale would become too long to be experimentally feasible.


So extremely long and extremely short time/length scales both present fundamental difficulties in different ways, and your question will probably never be answerable.


quantum field theory - Conformal symmetry, Weyl symmetry, and a traceless energy-momentum tensor


I'm trying to drill down the exact relation between conformal symmetry, Weyl symmetry, and tracelessness of the energy-momentum tensor. However, I'm getting quite confused because every book I can find seems to be treating this subject extremely sloppily.


First, following the exposition here, a conformal transformation is defined to be a diffeomorphism which satisfies $$g'_{\mu\nu}(x') = \Omega^{-2} g_{\mu\nu}(x)$$ followed by a Weyl transformation (i.e. a local rescaling of the metric and fields), so that the composite of the two maps transforms the metric as $$g_{\mu\nu}(x) \to g'_{\mu\nu}(x') = g_{\mu\nu}(x).$$ In fact, this is already a big source of confusion, because many sources call these diffeomorphisms conformal transformations in themselves, while other sources call Weyl transformations conformal transformations. But as far as I can tell, the "true" conformal transformations people actually use require both of these transformations. In other words, using the common nomenclature, a conformal transformation is a conformal transformation plus a conformal transformation. As far as I can tell, no source defines a conformal transformation explicitly, and most describe it as "a diffeomorphism that preserves angles", which is a completely vacuous statement.


In any case, both steps also affect the matter fields $\Phi$, so the variation of the action under a conformal transformation should have four terms, $$\delta S = \int_M d^4x \left(\frac{\delta S}{\delta \Phi} (\delta^d \Phi + \delta^w \Phi) + \frac{\delta S}{\delta g_{ab}} (\delta^d g_{ab} + \delta^w g_{ab}) \right)$$ where $\delta^d$ is the variation due to the diffeomorphism and $\delta^w$ is the variation due to the Weyl transformation. Let us number these contributions $\delta S_1$ through $\delta S_4$.



Polchinski performs the derivation in one line, blithely ignoring all terms except for $\delta S_4$. Meanwhile, di Francesco ignores all terms except for $\delta S_1$ (e.g. see Eq. 4.34). This is supposed to be analogous to an argument in chapter 2, which their own errata indicate is completely wrong, because they forgot to include $\delta S_3$. Unfortunately, they didn't correct chapter 4.


In any case, di Francesco claims that tracelessness of the energy-momentum tensor implies conformal invariance, which is the statement $\delta S = 0$. I've been unable to prove this. We know that $\delta S_1 + \delta S_3 = 0$ by diffeomorphism invariance, and $\delta S_4 = 0$ by tracelessness. But that doesn't take care of $\delta S_2$, which is the subject of this question. We cannot say it vanishes on-shell, because symmetries must hold off-shell.


I run into a similar problem trying to prove a converse. Suppose we have conformal invariance. Then $\delta S = 0$, and we know $\delta S_1 + \delta S_3 = 0$. At this point I can't make any further progress without assuming the matter is on-shell, $\delta S_2 = 0$. Then we know $\delta S_4 = 0$, but this does not prove the tracelessness of the energy-momentum tensor, because the Weyl transformation in a conformal transformation is not a general Weyl transformation, but rather is quite restricted.


In other words, I can't prove either direction, and I think all the proofs I've seen in books are faulty, forgetting about the majority of the terms in the variation. What is going on here?




classical mechanics - Does the second law of thermodynamics take into consideration of attractive interactions between particles?


If one searches Google or textbooks on 2nd Law of Thermodnamics, one usually finds a statement that is either equivalent or implies the following.


The entropy of the universe always increases.


But does that include intermolecular forces, or interactions among particles in general?


For example, suppose we have a planet with an atmosphere. The planet does not rotate around itself. For some reason, at this moment, the atmosphere is uniform in density up to 10km away from surface. Clearly, soon, we will find that the density of air molecules near the surface increases and the density far from the surface decreases, and the density probably ends up following an exponential decay in relation to altitude.


In the above scenario, this natural process decreases the entropy of the universe due to the gravitational field of the planet.


So what about the 2nd law of thermodynamics?





EDIT: For clarity, the gas molecules on this planet are assumed to be chargeless spheres that only collide elastically.


To clarify again: the above example assumes that the entropy of statistical thermodynamics is indeed the entropy referenced in the 2nd law.




special relativity - Rotate a long bar in space and get close to (or even beyond) the speed of light $c$


Imagine a bar


spinning like a helicopter propeller,


at $\omega$ rad/s. Because the extremes of the bar move at speed


$$V = \omega r$$


then we can reach near $c$ (speed of light) applying some finite amount of energy just doing



$$\omega = V / r$$


The bar should be long, of low density, and strong, to minimize the amount of energy needed.


For example a $2000\,\mathrm{m}$ bar


$$\omega = 300 000 \frac{\mathrm{rad}}{\mathrm{s}} = 2864789\,\mathrm{rpm}$$


(a dental drill can commonly rotate at $400000\,\mathrm{rpm}$)


$V$ (with dental drill) = 14% of speed of light.


Then I say this experiment can be really made and bar extremes could approach $c$.


What do you say?


EDIT:


Our planet is orbiting the Sun, which is orbiting the Milky Way, and who knows what else; so any point on Earth has a speed of 500 km/s or more against the CMB.



I wonder: if we are orbiting something at that speed, then there would be a detectable relativistic effect in different directions of measurement. Simply by extending a long bar, or any directional mass, in different galactic directions, we should measure a mass change due to relativity, simply because $V = \omega r$.


What do you think?



Answer



Imagine a rock on a rope. As you rotate the rope faster and faster, you need to pull stronger and stronger to provide the centripetal force that keeps the stone in its orbit. The increasing tension in the rope would eventually break it. The very same thing would happen with the bar (just replace the rock with the bar's center of mass). And naturally, all of this would happen at speeds far below the speed of light.


Even if you imagined that there exists a material that could sustain the tension at relativistic speeds you'd need to take into account that signal can't travel faster than at the speed of light. This means that the bar can't be rigid. It would bend and the far end would trail around. So it's hard to even talk about rotation at these speeds. One thing that is certain is that strange things would happen. But to describe this fully you'd need a relativistic model of solid matter.


People often propose arguments similar to yours to show Special Relativity fails. In reality what fails is our intuition about materials, which is completely classical.
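A rough classical estimate (deliberately ignoring relativity) already shows the material problem. For a uniform bar spinning about its midpoint, integrating the centripetal load along the bar gives a tensile stress at the center of $\sigma = \rho\,v_{\rm tip}^2/2$. The density and strength figures below are order-of-magnitude assumptions, not data from the answer above.

```python
# Classical estimate of the tensile stress at the centre of a uniform bar
# of half-length r spinning about its midpoint: sigma = rho * v_tip**2 / 2.
# Material numbers are rough, illustrative values.
rho = 1600.0          # kg/m^3, roughly a carbon-fibre composite
c = 3.0e8             # m/s
v_tip = 0.14 * c      # the "dental drill" tip speed from the question

sigma = 0.5 * rho * v_tip**2
print(f"required stress ~ {sigma:.2e} Pa")

strongest = 6.0e10    # Pa, order of magnitude of the strongest measured tensile strengths
print(f"exceeds the strongest materials by a factor ~ {sigma / strongest:.0e}")
```

Even at "only" 14% of $c$, the required stress exceeds any known material's strength by many orders of magnitude, so the bar fails long before relativistic effects matter.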


Thursday 21 December 2017

What is Diffraction?


Diffraction is just light interacting with small objects and bending, but this seems like a very imprecise definition to me. What is diffraction, actually? I was confused because there are at least two kinds of diffraction gratings that I know of. One is actual slits - the standard diffraction I learned about in class - and then there are spaced grooves, which can also diffract. This latter one, I thought, would just be reflection and interference, but I was told it's the same phenomenon. So again, what is diffraction?



Answer



Don't get too worried about fine meanings of the word: it is ultimately a little imprecise, and when you're thinking about real, physical problems, you're going to be working with equations.


The more fundamental concept is interference, which is simply a manifestation of the linear superposition principle. Amplitudes add, so magnitude and phase is important when summing up contributions to a field from different sources.


Diffraction works like this. Suppose you know a monochromatic field's values on one transverse plane. Now Fourier transform the values, to express the field on a transverse plane as a sum of plane waves. Plane waves running nearly orthogonal to the transverse plane have almost the same phase over wide transverse regions. So they show themselves as low spatial frequencies in the transverse plane field pattern. Plane waves running at steep angles to the transverse plane beget high spatial frequency components in that plane.



So we've resolved our field into a linear superposition of plane waves. Because these waves are propagating in different directions, they undergo different delays in reaching another transverse plane. The Fourier co-efficients take on different phases, so the same constituent plane waves interfere together to make a different field configuration on other transverse planes.


Diffraction is thus the interference of a field's (e.g. an electromagnetic field following the linear Maxwell equations) plane wave constituents. These constituent plane waves beat differently on different transverse planes because they undergo different phase delays by dint of their different directions. See my answer here and also here for more info.


Another equivalent (in the larger propagation distance limit) is Huygens's principle. Think of a single slit field. Diffraction is the interference on a farfield plane between the different fields arising from the different Huygens point sources at different positions in the slit.
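The Fourier description above can be sketched numerically: the far-field (Fraunhofer) pattern of a single slit is the Fourier transform of the aperture, with the familiar $\mathrm{sinc}^2$ intensity. Grid size, sampling, and slit width below are arbitrary illustrative choices.

```python
import numpy as np

# Far field of a single slit as the Fourier transform of the aperture.
N = 4096
dx = 1e-6                                  # 1 µm sampling
x = (np.arange(N) - N // 2) * dx
a = 100e-6                                 # slit width
aperture = (np.abs(x) < a / 2).astype(float)

# Spatial-frequency axis and far-field amplitude via FFT
fx = np.fft.fftshift(np.fft.fftfreq(N, dx))
amp = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(aperture)))
I = np.abs(amp)**2
I /= I.max()                               # normalized intensity

# The intensity is close to sinc^2, with its first zero near fx = 1/a
i0 = np.argmin(np.abs(fx - 1.0 / a))
print(I[N // 2], I[i0])                    # central maximum 1, near-zero at the first minimum
```

Each spatial frequency $f_x$ corresponds to a plane wave heading off at angle $\sin\theta = \lambda f_x$, which is how this transform pattern becomes the intensity seen on a distant screen.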


group theory - What is the weight system for these ${rm SU}(5)$ representations?


I need to work out the weight systems for the fundamental representation $\mathbf{5}$ and the conjugate representation $\overline{\mathbf{5}}$. I'm not clear what this means. The $\mathbf{5}$ representation is of course just the representation of $SU\left(5\right)$ by itself. After picking a Cartan subalgebra as the diagonal matrices with zero trace, we can of course see that the roots are $L_i-L_j$ where $L_i$ picks out the $i^{th}$ element on the diagonal, and the weights are simply $L_i$ in this case.


It is supposed to be the case that I can use the weight systems of representations to show for instance that $\mathbf{5}\otimes \mathbf{5}=\mathbf{10}\oplus \mathbf{15}$.
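The counting behind that decomposition can be sketched directly from the weights: the weights of $\mathbf{5}\otimes\mathbf{5}$ are the sums $L_i+L_j$, and splitting them into symmetric and antisymmetric combinations gives the $\mathbf{15}$ and the $\mathbf{10}$. (The indices $0\ldots4$ below simply stand in for $L_1,\ldots,L_5$.)

```python
from itertools import combinations, combinations_with_replacement

# Weights of the 5 are L_1, ..., L_5. Tensor-product weights add, so the
# weights of 5 ⊗ 5 are L_i + L_j; symmetric pairs span the 15, antisymmetric
# pairs the 10.
n = 5
sym = list(combinations_with_replacement(range(n), 2))    # L_i + L_j with i <= j
antisym = list(combinations(range(n), 2))                 # L_i + L_j with i <  j
print(len(sym), len(antisym), len(sym) + len(antisym))    # 15 10 25
```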




Wednesday 20 December 2017

conservation laws - Why do the Lagrangian and Hamiltonian formulations give the same conserved quantities for the same symmetries?


The connection between symmetries and conservation laws can be viewed through the lens of both Lagrangian and Hamiltonian mechanics. In the Lagrangian picture we have Noether's theorem. In the Hamiltonian picture we have the so-called "moment map." When we consider the same "symmetry" in both viewpoints, we get the exact same conserved quantities. Why is that?


I'll give an example. For a 2D particle moving in a central potential, the action is



$$S = \int dt \Bigl(\frac{m}{2} ( \dot q_1^2 + \dot q_2^2) - V(q_1^2 + q_2^2)\Bigr).$$


We can then consider the $SO(2)$ rotational symmetry that leaves this action invariant. When we vary the path by a infinitesimal time dependent rotation,


$$\delta q_1(t) = - \varepsilon(t) q_2(t)$$ $$\delta q_2(t) = \varepsilon(t) q_1(t)$$ we find that the change in the action is


$$\delta S = \int dt \Bigl( m ( \dot q_1 \delta \dot q_1 + \dot q_2 \delta \dot q_2) - \delta V \Bigr)$$ $$= \int dt m (q_1 \dot q_2 - q_2 \dot q_1)\dot \varepsilon(t)$$


As $\delta S = 0$ for tiny perturbations from the actual path of the particle, an integration by parts yields


$$\frac{d}{dt} (m q_1 \dot q_2 - m q_2 \dot q_1) = \frac{d}{dt}L = 0 $$ and angular momentum is conserved.


In the Hamiltonian picture, when we rotate points in phase space by $SO(2)$, we find that $L(q,p) = q_1 p_2 - q_2p_1$ remains constant under rotation. As the Hamiltonian $H$ is invariant under this rotation, we have


$$\{ H, L\} = 0$$ implying that angular momentum is conserved under time evolution.


In the Lagrangian picture, our $SO(2)$ symmetry acted on paths in configuration space, while in the Hamiltonian picture our symmetry acted on points in phase space. Nevertheless, the conserved quantity from both is the same angular momentum. In other words, our small perturbation to the extremal path turned out to be the one found by taking the Poisson bracket with the derived conserved quantity:


$$\delta q_i = \varepsilon(t) \{ q_i, L \}$$



Is there a way to show this to be true in general, that the conserved quantity derived via Noether's theorem, when put into the Poisson bracket, re-generates the original symmetry? Is it even true in general? Is it only true for conserved quantities that are at most degree 2 polynomials?


Edit (Jan 23, 2019): A while ago I accepted QMechanic's answer, but since then I figured out a rather short proof that shows that, in the "Hamiltonian Lagrangian" framework, the conserved quantity does generate the original symmetry from Noether's theorem.


Say that $Q$ is a conserved quantity:


$$ \{ Q, H \} = 0. $$ Consider the following transformation parameterized by the tiny function $\varepsilon(t)$: $$ \delta q_i = \varepsilon(t)\frac{\partial Q}{\partial p_i} \\ \delta p_i = -\varepsilon(t)\frac{\partial Q}{\partial q_i} $$ Note that $\delta H = \varepsilon(t) \{ H, Q\} = 0$. We then have \begin{align*} \delta L &= \delta(p_i \dot q_i - H )\\ &= -\varepsilon\frac{\partial Q}{\partial q_i} \dot q_i - p_i \frac{d}{dt} \Big( \varepsilon\frac{\partial Q}{\partial p_i} \Big) \\ &= -\varepsilon\frac{\partial Q}{\partial q_i} \dot q_i - \dot p_i \varepsilon\frac{\partial Q}{\partial p_i} + \frac{d}{dt} \Big( \varepsilon p_i \frac{\partial Q}{\partial p_i}\Big) \\ &= - \varepsilon \dot Q + \frac{d}{dt} \Big( \varepsilon p_i \frac{\partial Q}{\partial p_i}\Big) \\ \end{align*}


(Note that we did not use the equations of motion yet.) Now, on stationary paths, $\delta S = 0$ for any tiny variation. For the above variation in particular, assuming $\varepsilon(t_1) = \varepsilon(t_2) = 0$,


$$ \delta S = -\int_{t_1}^{t_2} \varepsilon \dot Q dt $$


implying that $Q$ is conserved.


Therefore, $Q$ "generates" the very symmetry which you can use to derive its conservation law via Noether's theorem (as hoped).
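The central-potential example at the top can be checked symbolically. This is a sketch using SymPy (the potential is left as an unspecified function of $q_1^2+q_2^2$): the angular momentum Poisson-commutes with $H$, and its Hamiltonian flow reproduces exactly the infinitesimal rotation used in the Lagrangian derivation.

```python
import sympy as sp

# Symbolic check for the 2D central-potential example.
q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2', real=True)
m = sp.symbols('m', positive=True)
V = sp.Function('V')

def pb(f, g):
    # Poisson bracket {f, g} in canonical coordinates (q1, q2, p1, p2)
    return (sp.diff(f, q1)*sp.diff(g, p1) - sp.diff(f, p1)*sp.diff(g, q1)
          + sp.diff(f, q2)*sp.diff(g, p2) - sp.diff(f, p2)*sp.diff(g, q2))

H = (p1**2 + p2**2) / (2*m) + V(q1**2 + q2**2)
L = q1*p2 - q2*p1

print(sp.simplify(pb(H, L)))        # 0: L is conserved
print(pb(q1, L), pb(q2, L))         # -q2, q1: the rotation δq1 = -ε q2, δq2 = ε q1
```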



Answer



In this answer let us for simplicity restrict to the case of a regular Legendre transformation in a point mechanical setting, cf. this related Phys.SE post. (Generalizations to field theory and gauge theory are in principle possible, with appropriate modifications of conclusions.)





  1. On one hand, the action principle for a Hamiltonian system is given by the Hamiltonian action $$ S_H[q,p] ~:= \int \! dt ~ L_H(q,\dot{q},p,t).\tag{1} $$ Here $L_H$ is the so-called Hamiltonian Lagrangian $$ L_H(q,\dot{q},p,t) ~:=~\sum_{i=1}^n p_i \dot{q}^i - H(q,p,t). \tag{2} $$ In the Hamiltonian formulation there is a bijective correspondence between conserved quantities $Q_H$ and infinitesimal (vertical) quasi-symmetry transformations $\delta$, as shown in my Phys.SE answers here & here. It turns out that a quasi-symmetry transformation $\delta$ is a Hamiltonian vector field generated by a conserved quantity $Q_H$: $$ \delta z^I~=~ \{z^I,Q_H\}\varepsilon,\qquad I~\in~\{1, \ldots, 2n\}, \qquad \delta t~=~0,$$ $$ \delta q^i~=~\frac{\partial Q_H}{\partial p_i}\varepsilon, \qquad \delta p_i~=~ -\frac{\partial Q_H}{\partial q^i}\varepsilon, \qquad i~\in~\{1, \ldots, n\},\tag{3}$$




  2. On the other hand, if we integrate out the momenta $p_i$, we get the corresponding Lagrangian action $$ S[q] ~= \int \! dt ~ L(q,\dot{q},t),\tag{4} $$ cf. this related Phys.SE post. The Hamiltonian eqs. $$0~\approx~\frac{\delta S_H}{\delta p_i} ~=~\dot{q}^i-\frac{\partial H}{\partial p_i} \tag{5}$$ for the momenta $p_i$ yield via the Legendre transformation the defining relation $$p_i~\approx~ \frac{\partial L}{\partial \dot{q}^i}\tag{6}$$ of Lagrangian momenta. Eqs. (5) & (6) establish a bijective correspondence between velocities and momenta.




  3. If we take this bijective correspondence $\dot{q} \leftrightarrow p$ into account it is clear that Hamiltonian and Lagrangian conserved charges $$Q_H(q,p,t)~\approx~Q_L(q,\dot{q},t) \tag{7}$$ are in bijective correspondence. Below we will argue that the same is true for (vertical) infinitesimal quasi-symmetries on both sides.





  4. On one hand, if we start with a (vertical) infinitesimal quasi-symmetry in (Hamiltonian) phase space $$ \varepsilon \frac{df^0_H}{dt}~=~\delta L_H ~=~\sum_{i=1}^n\frac{\delta S_H}{\delta p_i}\delta p_i + \sum_{i=1}^n\frac{\delta S_H}{\delta q^i}\delta q^i + \frac{d}{dt}\sum_{i=1}^n p_i~\delta q^i ,\tag{8}$$ it can with the help of eq. (5) be restricted to a (vertical) infinitesimal quasi-symmetry within the (Lagrangian) configuration space: $$ \varepsilon \frac{df^0_L}{dt}~=~\delta L ~=~ \sum_{i=1}^n\frac{\delta S}{\delta q^i}\delta q^i + \frac{d}{dt}\sum_{i=1}^n p_i~\delta q^i .\tag{9}$$ In fact we may take $$f^0_L(q,\dot{q},t)~\approx~f^0_H(q,p,t) \tag{10}$$ to be the same. The restriction procedure also means that the bare Noether charges $$Q^0_H(q,p,t)~\approx~Q^0_L(q,\dot{q},t) \tag{11}$$ are the same, since no $\dot{p}_i$ appears.




  5. Conversely, if we start with an infinitesimal quasi-symmetry in (Lagrangian) configuration space, we can use Noether's theorem to generate a conserved quantity $Q_L$, and in this way close the circle.




  6. Example: Consider $n$ harmonic oscillators with Lagrangian $$ L~=~\frac{1}{2}\sum_{k,\ell=1}^n \left(\dot{q}^k g_{k\ell}\dot{q}^{\ell} - q^k g_{k\ell} q^{\ell}\right),\tag{12}$$ where $g_{k\ell}$ is a metric, i.e. a non-degenerate real symmetric matrix. The Hamiltonian reads $$H~=~\frac{1}{2}\sum_{k,\ell=1}^n \left( p_k g^{k\ell} p_{\ell} + q^k g_{k\ell} q^{\ell}\right) ~=~\sum_{k,\ell=1}^n z^{k \ast} g_{k\ell} z^{\ell},\tag{13}$$ with complex coordinates $$ z^k~:=~\frac{1}{\sqrt{2}}(q^k+ip^k), \qquad p^k~:=~\sum_{\ell=1}^ng^{k\ell}p_{\ell}, \qquad \{z^{k \ast},z^{\ell}\}~=~ig^{k\ell}. \tag{14}$$ The Hamiltonian Lagrangian (2) reads $$ L_H~=~\sum_{k=1}^n p_k \dot{q}^k - H ~=~\frac{i}{2}\sum_{k,\ell=1}^n \left( z^{k \ast} g_{k\ell} \dot{z}^{\ell} - z^{k} g_{k\ell} \dot{z}^{\ell\ast} \right) - H, \tag{15}$$ Hamilton's eqs. are $$ \dot{z}^k~\approx~-iz^k, \qquad \dot{q}^k~\approx~p^k, \qquad \dot{p}^k~\approx~-q^k. \tag{16}$$ Some conserved charges are $$ Q_H ~=~ \sum_{k,\ell=1}^n z^{k \ast} H_{k\ell} z^{\ell} ~=~\sum_{k,\ell=1}^n \left( \frac{1}{2}q^k S_{k\ell} q^{\ell} +\frac{1}{2}p^k S_{k\ell} p^{\ell}+ p^k A_{k\ell} q^{\ell}\right), \tag{17}$$ where $$ H_{k\ell}~:=~S_{k\ell}+i A_{k\ell}~=~H_{\ell k}^{\ast} \tag{18}$$ is an Hermitian $n\times n$ matrix, which consists of a symmetric and an antisymmetric real matrix, $S_{k\ell}$ and $A_{k\ell}$, respectively. The conserved charges (17) generate an infinitesimal $u(n)$ quasi-symmetry of the Hamiltonian action $$\delta z_k~=~ \varepsilon\{z_k , Q_H\} ~=~-i \varepsilon\sum_{\ell=1}^n H_{k\ell} z^{\ell},$$ $$\delta q_k ~=~ \varepsilon\sum_{\ell=1}^n \left( A_{k\ell} q^{\ell} +S_{k\ell} p^{\ell} \right), \qquad \delta p_k ~=~ \varepsilon\sum_{\ell=1}^n \left( -S_{k\ell} q^{\ell} +A_{k\ell} p^{\ell} \right). \tag{19}$$ The bare Noether charges are $$ Q^0_H ~=~\sum_{k,\ell=1}^n p^k \left( A_{k\ell} q^{\ell} +S_{k\ell} p^{\ell} \right). 
\tag{20}$$ Also $$ f^0_H~=~\frac{1}{2}\sum_{k,\ell=1}^n \left( p^k S_{k\ell} p^{\ell}- q^k S_{k\ell} q^{\ell}\right). \tag{21}$$ The corresponding infinitesimal $u(n)$ quasi-symmetry of the Lagrangian action (4) is $$\delta q_k ~=~ \varepsilon\sum_{\ell=1}^n \left( A_{k\ell} q^{\ell} +S_{k\ell} \dot{q}^{\ell} \right), \tag{22}$$ as one may easily verify.
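A hedged numerical sketch of the example in point 6 (assuming $g_{k\ell}=\delta_{k\ell}$ and $n=3$): under Hamilton's equations $\dot z^k = -iz^k$ of eq. (16), the evolution is a global phase, so the charge $Q_H = \sum_{k,\ell} z^{k\ast} H_{k\ell} z^{\ell}$ of eq. (17) is conserved for any Hermitian matrix.

```python
import numpy as np

# Sketch: n = 3 oscillators with g = identity. Hamilton's equations give
# z(t) = exp(-i t) z(0), and Q_H = z^dagger Hmat z is conserved for any
# Hermitian matrix Hmat = S + iA (random here, for illustration).

rng = np.random.default_rng(0)
n = 3
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Hmat = (X + X.conj().T) / 2            # Hermitian n x n matrix

z0 = rng.normal(size=n) + 1j * rng.normal(size=n)
Q = lambda z: (z.conj() @ Hmat @ z).real

# Evaluate Q along the exact flow z(t) = exp(-i t) z0:
max_drift = max(abs(Q(np.exp(-1j * t) * z0) - Q(z0))
                for t in np.linspace(0.0, 10.0, 50))
```

Here the conservation is exact up to floating-point rounding, since the phase $e^{-it}$ cancels between $z$ and $z^{\ast}$.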





fermions - Supersymmetry transformations as coordinate transformations


Usually, a supersymmetry transformation is carried out on bosonic and fermionic fields which are functions of the coordinates (or on a superfield which is a function of real and fermionic coordinates). But, is it possible to interpret supersymmetry transformations as coordinate transformations on the set of coordinates $(x^0,\ldots,x^N,\theta_1,\ldots,\theta_M)$?


The problem I see is that the coordinates would transform something like $x^\mu\rightarrow x^\mu+\theta\sigma\bar{\theta}$, which is no longer a real (or complex) number but a commuting Grassmann number. Can one make sense of a coordinate position no longer being a real number?


Edit: To clarify, this is NOT about confusion over what happens when adding real numbers to commuting Grassmann numbers in general. That the Lagrangian in QFT, for example, is not a real number but a commuting Grassmann number is fine. What I am confused about is really how to make sense of coordinates that are Grassmannian. Coordinates are supposed to describe a position in spacetime/on a manifold, and it seems to me that it is essential that a position is a standard real number.



Answer



Comments to the question (v3):





  1. Recall that a supernumber $z=z_B+z_S$ consists of a body $z_B$ (which always belongs to $\mathbb{C}$) and a soul $z_S$ (which only belongs to $\mathbb{C}$ if it is zero), cf. e.g. this Phys.SE post.




  2. An observable/measurable quantity can only consist of ordinary numbers (belonging to $\mathbb{C}$). It does not make sense to measure a soul-valued output in an actual experiment.




  3. Souls are indeterminates that appear in intermediate formulas, but are integrated (or differentiated) out in the final result.




  4. In a superspace formulation of a field theory, a Grassmann-even spacetime coordinate $x^{\mu}$ in superspace is promoted to a supernumber, and is not necessarily an ordinary number.





  5. A supersymmetry-translation of a Grassmann-even spacetime coordinate $x^{\mu}$ only changes the soul (but not the body) of $x^{\mu}$.




  6. Note that in the mathematical definition of a supermanifold, the focus of the theory is not on spacetime coordinates per se, but (very loosely speaking) rather on certain algebras of functions of spacetime. See also e.g. Refs. 1-3 for details.




References:





  1. Pierre Deligne and John W. Morgan, Notes on Supersymmetry (following Joseph Bernstein). In Quantum Fields and Strings: A Course for Mathematicians, Vol. 1, American Mathematical Society (1999) 41–97.




  2. V.S. Varadarajan, Supersymmetry for Mathematicians: An Introduction, Courant Lecture Notes 11, 2004.




  3. C. Sachse, A Categorical Formulation of Superalgebra and Supergeometry, arXiv:0802.4067.





Retrodiction in Quantum Mechanics


To focus this question let us consider first classical mechanics (which is time-symmetric). Given a final condition (and sufficient information) one can calculate the system conditions of an earlier time (retrodiction).


Given Quantum Mechanics (which is time-symmetric) and a final condition what is the status of retrodiction in that theory? Without choosing between them, here are three options:


(1) An earlier condition can be determined probabilistically exactly as a prediction can be.


(2) An earlier condition can be determined exactly (with enough accessible information) from a final condition.


(3) It is inappropriate to use QM for retrodictions: it is a prediction-only theory.


I have seen some thought experiments on all this in Penrose's books, but it remains inconclusive there, and standard QM texts are not interested in retrodiction.


EDIT AFTER 6 ANSWERS


Thanks for the effort on this question, which was challenging. I will continue to investigate it, but some initial thoughts on the Answers.


(a) I was expecting to receive comments on decoherence, chaos, and something on interpretations, and such comments are indeed amongst the answers. Perhaps the single most important sentence is Peter Shor's:



The time-asymmetry in quantum mechanics between retrodiction and prediction comes from the asymmetry between state preparation and measurement.


Lubos's introduction of Bayesian probability and response (c) is the most useful, although the discussion of entropy does not seem immediately relevant. This response, though, suggests a different framework for retrodiction, with an a priori set of assumptions introduced for the calculation of the initial state.


A complication that was not addressed clearly enough is the link with classical-mechanics retrodiction. Statements that quantum retrodiction is "impossible" do not square easily with the fact that some quantum systems will have corresponding classical systems that are easily retrodicted. Of course, the fact that quantum prediction is probabilistic allows quantum retrodiction to be probabilistic too (and thus different from classical retrodiction); this was not followed up in some answers. As far as references to chaos are concerned, does not "retrodicting chaos" result in an increased ability to retrodict classically, given that the trajectories will be converging?


On Peter Morgan's points I should say that the question is open to any interpretation of how the experimental apparatus is used; if that is relevant to giving an appropriate answer, then do discuss its significance.


On Deepak's links I should note that these include references to applications of this idea in quantum communication, i.e. what was the sent state given a received state? I think Lubos's probability discussion is relevant here too.


Feel free to EDIT your answers if you think of anything else!



Answer



Dear Roy, (3) is correct. More precisely, retrodictions have to follow completely different rules than predictions. This elementary asymmetry - representing nothing else than the ordinary "logical arrow of time" (the past is not equivalent to the future as far as the logical reasoning goes) - is confusing for a surprisingly high number of people including physicists.


However, this asymmetry between predictions and retrodictions has nothing to do with quantum mechanics per se. In classical statistical physics, one faces the very same basic problem. The asymmetry is relevant whenever there is any incomplete information in the system. The asymmetry occurs because "forgetting is an irreversible process". Equivalently, the assumptions (=past) and their logical consequences (=future) don't play a symmetric role in mathematical logic. This source of logical asymmetry is completely independent from the CPT-theorem that may guarantee a time-reversal symmetry of the fundamental laws of physics. But whenever there is anything uncertain about the initial or the final state, logic has to be used and logic has an extra asymmetry between the past and the future.


Predictions: objective numbers



In quantum mechanics, the probability of a future outcome is calculated from $|c|^2$ where $c$ is a complex probability amplitude calculated by evolving the initial wave function via Schrödinger's equation, or by an equivalent method. The probabilities for the future are completely "objective". One may repeat the same experiment with the same initial conditions many times and literally measure the right probability. And this measurable probability is calculable from the theory - quantum mechanics, in this case - too.


Retrodictions: subjective choices


However, the retrodictions are always exercises in logical inference and logical inference - and I mean Bayesian inference in particular - always depends on priors and subjective choices. There is no theoretical way to calculate "unique" probabilities of initial states from the knowledge of the final state. Also, there is no experimental procedure that would allow us to measure such retrodictions because we are not able to prepare systems in the same "final states": final states, by definition, are always prepared by the natural evolution rather than by "us". So one can't measure such retrodictions.


To estimate the retrodicted probabilities theoretically, one must choose competing hypotheses $H_i$ - in the case of retrodictions, they are hypotheses about the initial states. We must decide about their prior probabilities $P(H_i)$ and then we may apply the logical inference. The posterior probability of $H_i$ is this conditional probability: $$ P (H_i|F) = P(F|H_i) P(H_i) / P(F) $$ This is Bayes' formula.


Here, we have observed some fact $F$ about the final state (which may be, hypothetically, a full knowledge of the final microstate although it's unlikely). To know how this fact influences the probabilities of various initial states, we must calculate the conditional probability $P(F|H_i)$ that the property of the final state $F$ is satisfied for the initial state (assumption or condition) $H_i$. However, this conditional probability is not the same thing as $P(H_i|F)$: they are related by the Bayes formula above where $P(H_i)$ is our prior probability of the initial state $P(H_i)$ - our conclusions about the retrodictions will always depend on such priors - and $P(F)$ is a normalization factor ("marginal probability of $F$") that guarantees that $\sum_i P(H_i|F) = 1$.
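A toy numerical illustration of the Bayes formula above (all numbers are invented for the sketch): the same observed fact $F$ about the final state yields different retrodictions $P(H_i|F)$ for different priors $P(H_i)$, which is exactly the subjectivity being described.

```python
# Toy Bayesian retrodiction. Two hypotheses H_1, H_2 about the initial
# state; F is an observed fact about the final state.

def posterior(priors, likelihoods):
    """Bayes: P(H_i|F) = P(F|H_i) P(H_i) / P(F), P(F) = sum_i P(F|H_i) P(H_i)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    pF = sum(joint)
    return [j / pF for j in joint]

likelihoods = [0.9, 0.1]                      # P(F|H_1), P(F|H_2): fixed by dynamics

flat = posterior([0.5, 0.5], likelihoods)     # "uninformative" prior
skewed = posterior([0.01, 0.99], likelihoods) # prior strongly favoring H_2
# Same fact F, same dynamics, different priors -> different retrodictions.
```

With a flat prior the posterior simply tracks the likelihoods, while the skewed prior overturns them; no unique "probability of the initial state" exists without choosing the prior.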


Second law of thermodynamics


The logical asymmetry between predictions and retrodictions becomes arbitrarily huge quantitatively when we discuss the increase of entropy. Imagine that we organize microstates in ensembles - both for initial and final states; and this discussion works for classical as well as quantum physics. What do we mean by the probability that the initial state $I$ evolves to the final state $F$ if both symbols represent ensembles of microstates? Well, we must sum over all microstates in the final state $F$, but average over all microstates in the initial state $I$. Note that there is a big asymmetry in the treatment of the initial and final states - and it's completely logical that this asymmetry has to be present: $$ P ( F|I) = \sum_{i,j} P(F_j|I_i) P(I_i) $$ We sum over the final microstates because $P(F_1 {\rm or } F_2) = P(F_1)+P(F_2)$; "or" means to add probabilities. However, we must average over the initial states because we must keep the total probability of all mutually excluding initial states equal to one.


Note that $P(I_i)$ is the prior probability of the $i$th microstate. In normal circumstances, when all the initial states are considered equally likely - which doesn't have to be so - $P(I_i) = 1/N_{I}$ for each $i$ where $N_{I}$ is the number of the initial states in the ensemble $I$ (this number is independent of the index $i$).


So the formula for $P(F|I)$ is effectively $$ P ( F|I) = \frac{1}{N_{I}} \sum_{i,j} P(F_j|I_i) $$ Note that we only divide by the number of initial microstates but not the final microstates. And the number of the initial states may be written as $\exp(S_I)$, the exponentiated entropy of the initial state. Its appearance in the formula above - and the absence of $\exp(S_F)$ in the denominator - is the very reason why the lower-entropy states are favored as initial states but higher-entropy states are favored as final states.


On the contrary, if we studied the opposite evolution - and just to be precise, we will CPT-conjugate both initial and final state, to map them to $I', F'$ - the probability of the opposite evolution will be $$ P(I'|F') = \frac{1}{N_{F}} \sum_{i,j} P(I'_i|F'_j). $$ Now, the probability $P(I'_i|F'_j)$ may be equal to $P(F_j|I_i)$ by the CPT-theorem: they're calculated from complex amplitudes that are equal (up to the complex conjugation). But this identity only works for the individual microstates. If you have ensembles of many microstates, they're treated totally differently. In particular, the following ratio is not one: $$ \frac {P(I'|F')}{P(F|I)} = \exp(S_I-S_F) $$ I wrote the numbers of microstates as the exponentiated entropy. So the evolution from $F'$ to $I'$ isn't equally likely as the evolution from $I$ to $F$: instead, they differ by the multiplicative factor of the exponential of the entropy difference - which may be really, really huge because $S$ is of order $10^{26}$ for macroscopic objects. This entropy gets exponentiated once again to get the probability ratio!
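The sum-over-final versus average-over-initial asymmetry can be made concrete in a toy model (a sketch, not from the original answer): take the reversible dynamics to be a permutation of a finite set of microstates, and compare the forward and time-reversed ensemble probabilities. Their ratio is exactly $N_I/N_F = \exp(S_I - S_F)$ with $S = \log N$.

```python
import math
import random

# Toy model: deterministic reversible dynamics = a permutation of microstates.
# Average over initial microstates, sum over final microstates.

random.seed(1)
micro = list(range(12))
perm = micro[:]
random.shuffle(perm)                # a random reversible "time evolution"

I = [0, 1]                          # low-entropy initial ensemble, N_I = 2
F = sorted(set(perm[i] for i in I) | {5, 6, 7, 8})  # larger final ensemble

def prob(initial, final):
    """P(final | initial): average over initial states, sum over final states."""
    return sum(perm[i] in final for i in initial) / len(initial)

def prob_rev(initial, final):
    """The reversed evolution, via the inverse permutation."""
    inv = {v: k for k, v in enumerate(perm)}
    return sum(inv[j] in initial for j in final) / len(final)

forward = prob(I, F)                # = 1 here, since perm maps I into F
backward = prob_rev(I, F)           # = N_I / N_F < 1
ratio = backward / forward          # = exp(S_I - S_F), S = log N
```

Even though each microstate-to-microstate transition is perfectly reversible, the ensemble probabilities differ by the exponentiated entropy difference, as in the text.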



This point is just to emphasize that the people who claim that the evolution from a high-entropy initial state to a low-entropy final state is "equally likely" as the standard evolution from a low-entropy initial state to a high-entropy final state are making a mistake of a missing or incorrectly added factor of $\exp(10^{26})$ in their formulae, and it is a huge mistake, indeed. Also, there is absolutely no doubt that the inverse processes have these vastly different probabilities - and I would like to claim that I have offered the dear reader a full proof in the text above.


Their mistake may also be phrased as the incorrect assumption that conditional probabilities $P(A|B)$ and $P(B|A)$ are the same thing: their mistake is this elementary, indeed. These two conditional probabilities are not the same thing and the validity of the CPT-theorem in a physical theory can't change the fact that these two conditional probabilities are still very different numbers, regardless of the propositions hiding behind the symbols $A,B$.


Just to emphasize how shocking it is for me to see that those elementary issues about the distinction of past and future are so impenetrable for so many people in 2011, watch Richard Feynman's The Messenger Lecture number 5, "The Distinction of Past and Future" (Internet Explorer needed):



http://research.microsoft.com/apps/tools/tuva/index.html



The very first sentence - the introduction to this very topic is - "It's obvious to everybody that the phenomena in the world are self-evidently irreversible." Feynman proceeds to explain how the second law of thermodynamics and other aspects of the irreversibility follow even from the T-symmetric dynamical laws because of simple rules of mathematical logic. So whoever doesn't understand that the past and future play different roles in physics really misunderstands the first sentence in this whole topic - and in some proper sense, even the very title of it ("The Distinction of Past and Future").


thermodynamics - Is the reversible process possible?



When I was studying heat engines, specifically the Carnot cycle, I thought the assumptions to be impossible. Then why should one study all this? What would reversibility mean in reality?




Tuesday 19 December 2017

thermodynamics - Determine the Dependence of $S$ (Entropy) on $V$ and $T$


Why can the equation $$dS= \frac{1}{T} dU + \frac{P}{T} dV$$ be expressed as $$dS= \left. \frac{\partial S}{\partial T} \right|_V dT + \left. \frac{\partial S}{\partial V} \right|_T dV \quad?$$





astrophysics - Formation of supermassive black holes


Scientists have found very bright sources of light which they call quasars, and they are found to be supermassive black holes. These black holes are so massive that they cannot be formed by a supernova. So how are they formed?




Monday 18 December 2017

black holes - Could galactic rotation be similar to an irrotational vortex?


Could black holes' near-light-speed rotation cause galaxies to move like irrotational vortices?




Why superposition is useful just for linear functions?


I saw a problem which said that we have a bar between two walls and we increase the temperature. As you know, the walls exert a force on the bar so that its length does not change. In the solution it said we use superposition, which means that we first imagine there is no wall and calculate the change in length, and then we calculate the effect of the wall (the force). It was also written that this (superposition) is useful only for functions that are linear. Why?



Answer



A linear system is one where, if we have two inputs $x_1$ and $x_2$, producing outputs $y_1$ and $y_2$, then the output for an input of $\alpha{}x_1 + \beta{}x_2$ is $\alpha{}y_1 + \beta{}y_2$. This is precisely the property we rely on when we apply superposition to solve a problem with inputs composed of sums of easier-to-analyze inputs.


This means, essentially, that if we have a system where superposition can be applied, then by definition we call that a linear system.
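The defining property above can be checked in a few lines (a sketch; the two maps are arbitrary examples, not from the original answer):

```python
# T is linear iff T(a*x1 + b*x2) == a*T(x1) + b*T(x2) for all inputs and scalars.

linear = lambda x: 3 * x        # e.g. a Hooke's-law-like response
nonlinear = lambda x: x ** 2

x1, x2, a, b = 2.0, 5.0, 1.5, -0.5

lhs = linear(a * x1 + b * x2)
rhs = a * linear(x1) + b * linear(x2)
linear_ok = abs(lhs - rhs) < 1e-12            # True: superposition applies

lhs_n = nonlinear(a * x1 + b * x2)
rhs_n = a * nonlinear(x1) + b * nonlinear(x2)
nonlinear_ok = abs(lhs_n - rhs_n) < 1e-12     # False: superposition fails
```

This is why the thermal-expansion problem can be split into "free expansion" plus "wall force" only when the stress-strain response is (approximately) linear.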


quantum mechanics - Bound states of the $V(x)=\pm\delta'^{(n)}(x)$ potential?


The $\delta(x)$ Dirac delta is not the only "point-supported" potential that we can integrate; in principle all its derivatives $\delta', \delta'', ...$ exist also, don't they?



If yes, can we look for bound states in any of these $\delta'^{(n)}(x)$ potentials? Are there explicit formulae for them (and for the scattering states)?


To be more precise, I am asking for explicit solutions of the 1D Schroedinger equation with point potential,


$$- {\hbar^2 \over 2m} \Psi_n''(x) + a \, \delta'^{(n)}(x)\, \Psi_n(x) \ = E_n \Psi_n(x) $$


I should add that I have read at least of three set of boundary conditions that are said to be particular solutions:



  • $\Psi'(0^+)-\Psi'(0^-)= A \Psi(0)$ with $\Psi(0)$ continuous, is the zero-th derivative case, the "delta potential".

  • $\Psi(0^+)-\Psi(0^-)= B \Psi'(0)$ with $\Psi'(0)$ continuous, was called "the delta prime potential" by Holden.

  • $\lambda \Psi'(0^+)=\Psi'(0^-)$ and $\Psi(0^+)=\lambda\Psi(0^-)$ simultaneously, was called "the delta prime potential" by Kurasov


The zero-th derivative case, $V(x)=a \delta(x)$, is a typical textbook example, pretty nice because it has a single bound state (for negative $a$), and it acts as a kind of barrier for positive $a$. So it is interesting to ask about other values of $n$, and of course about the general case: does it offer more bound states or other properties? Is it even possible to consider $n$ beyond the first derivative?
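For the zero-th derivative case the textbook result can be verified numerically (a hedged sketch in units $\hbar=m=1$; the known bound state for $a<0$ is $\Psi\propto e^{-\kappa|x|}$ with $\kappa=-a$ and $E=-a^2/2$):

```python
import math

# Sketch (hbar = m = 1, a < 0): for V(x) = a*delta(x), the ansatz
# psi(x) = sqrt(k) * exp(-k|x|) with k = -a satisfies the jump condition
# psi'(0+) - psi'(0-) = 2*a*psi(0), giving the single bound state E = -a^2/2.

a = -1.3
k = -a
psi0 = math.sqrt(k)                 # psi(0), normalized ansatz
jump = (-k * psi0) - (k * psi0)     # psi'(0+) - psi'(0-) = -2k psi(0)
required = 2 * a * psi0             # from integrating Schroedinger across x = 0
E = -k**2 / 2                       # = -m a^2 / (2 hbar^2)
```

The analogous matching for the "delta prime" boundary conditions listed above would follow the same pattern, with the jump placed on $\Psi$ instead of $\Psi'$.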



Related questions


(If you recall a related question, feel free to suggest it in the comments, or even to edit directly if you have privileges for it)


For the delta prime, including velocity-dependent potentials, the question has been asked in How to interpret the derivative of the Dirac delta potential?


In the halfline $r>0$, the delta is called the "Fermi pseudopotential". As of today I cannot see questions about it, but Classical limit of a quantum system seems to be the same potential.


A general way of dealing with boundary conditions is via the theory of self-adjoint extensions of Hermitian operators. This case is not very different from the "particle in a 1D box"; see the question Why is $ \psi = A \cos(kx) $ not an acceptable wave function for a particle in a box? A general take was the question Physical interpretation of different selfadjoint extensions. A related but very exotic question is What is the relation between renormalization and self-adjoint extension?, because obviously the point-supported interactions have a peculiar scaling.


Comments


Of course, upgrading distributions to look like operators in $L^2$ is delicate, and it gets worse for derivatives of distributions when you consider the evaluation $\langle\phi | \rho(x) \psi\rangle$. Consider the case $\rho(x) = \delta'(x) = \delta(x) {d\over dx}$. Should the derivative apply to $\psi$ only, or to the product $\phi^*\psi$?




Understanding Stagnation point in pitot fluid

What is a stagnation point in fluid mechanics? At the open end of the pitot tube the velocity of the fluid becomes zero. But that should result...