Monday 30 April 2018

thermodynamics - Hot water freezing faster than cold water


This question has puzzled me for a long time. There is already a question like this on Physics.SE. John's answer to the question seems quite satisfying. But when I googled the cause I found this and this explanation. They may be wrong, but I think How Stuff Works is a reliable source.



And here's the original paper.


I am quite confused now reading the different explanations. Can anyone please shed some light on the issue?



Answer



To start with, "water freezes faster when it starts out hot" is not terribly precise. There are lots of different experiments you could try, over a huge range of initial conditions, that could all give different results. Wikipedia quotes an article Hot Water Can Freeze Faster Than Cold by Jeng which reviews approaches to the problem up to 2006 and proposes a more precise definition of the problem:



There exists a set of initial parameters, and a pair of temperatures, such that given two bodies of water identical in these parameters, and differing only in initial uniform temperatures, the hot one will freeze sooner.



However, even that definition still has problems, which Jeng recognizes: first, there's the question of what "freeze" means (some ice forms, or the water freezes solid all the way through); second, the hypothesis is completely unfalsifiable. Even if you restrict the hypothesis to the range of conditions reasonably attainable in everyday life, to explain why the effect is so frequently noted anecdotally, there's literally an infinite number of possible experimental conditions to test, and you can always claim that the correct conditions just haven't been tested yet.


So, the fact that the internet is awash in a variety of different explanations makes perfect sense: there really are a bunch of different reasons why initially hotter water may freeze faster than initially colder water, depending on the precise situation and the definition of "freeze" that you use.


The paper you link to, O:H-O Bond Anomalous Relaxation Resolving Mpemba Paradox by Zhang et al., with results echoed by the HowStuffWorks video, attempts to solve the problem for a very specific sub-hypothesis. They eliminate the problem of defining freezing by treating freezing as a proxy for cooling in general, and directly measuring cooling rates instead. That experimental design implicitly eliminates one internet-provided explanation right off the bat: it can't possibly be supercooling, because whether the water supercools or solidifies when it gets to freezing temperature is an entirely different question from how quickly it cools to a temperature where it could freeze.



They also further constrain the problem by looking for explanations that cannot apply to any other liquid. After all, the Mpemba effect is about why hot water freezes faster; nobody is reporting anomalous freezing of, say, hot alcohol. That might just be because people freeze water a lot, and we don't tend to work with a lot of other exotic chemicals in day-to-day life, but choosing to focus on that restriction makes the problem more well-defined, and again implicitly rules out a lot of potential explanations ahead of time. For example, it can't have anything to do with evaporation (because lots of liquids undergo evaporative cooling, and that's cheating anyway since it changes the mass of the liquid under consideration) or with conductive coupling to the freezer shelf (because that has nothing to do with the physical properties of the liquid, and everything to do with an uncontrolled experimental environment, as explained by John Rennie).


So, there really isn't just one answer to "why does hot water freeze faster than cold water", because the question is ill-posed. If you give someone a specific experimental set-up, then you can get a specific answer, and there are a lot of different answers for different set-ups. But, if you want to know "why does initially-hotter water cool faster through a range of lower temperatures than water that started out at those lower temperatures, while no other known liquid appears to behave this way" (thus contributing to it freezing first if it doesn't supercool), Zhang has your answer, and it's because of the weird interplay between water's intra- and inter-molecular bond energies. As far as I can tell, that paper has not yet been replicated, so you may consider it unconfirmed, but it's a pretty well-reasoned explanation for a very specific question, which is probably an influencing factor in a lot of other cooling-down-hot-water situations. There is a follow-up article, Mpemba Paradox Revisited -- Numerical Reinforcement, which provides additional simulation evidence for the bond-energy explanation, but it can't really be considered independent confirmation because it's by the same four authors.
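As a concrete illustration of why the effect is surprising at all, here is a deliberately naive sketch (not from Zhang's paper; the cooling constant and freezer temperature are made-up values): under simple lumped Newtonian cooling, two samples with the same cooling constant can never swap order, so any genuine Mpemba-type crossing requires physics beyond this simple model.

```python
import math

# A deliberately naive benchmark (not from the paper; the cooling constant
# and freezer temperature are made-up values): under lumped Newtonian
# cooling, dT/dt = -k*(T - T_env), two samples with the same k follow
# T(t) = T_env + (T0 - T_env)*exp(-k*t), and these curves never cross.
# So an initially hotter sample can never overtake a colder one here.
k, T_env = 0.01, -18.0            # per-minute cooling constant, freezer temp (C)

def T(t, T0):
    return T_env + (T0 - T_env) * math.exp(-k * t)

times = range(0, 1201, 60)        # minutes
assert all(T(t, 70.0) > T(t, 30.0) for t in times)
```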


classical mechanics - Why doesn't the wake angle of a boat obey the relation $\tan(\theta)=c/v$?



I set up the situation as a boat that leaves a trail of point-like waves which expand outward with a radius depending on the wave speed $c$. Intuitively, I would naively expect the answer to be $$\tan(\theta)=\frac{c}{v},$$ because the outermost edges do not interfere destructively and so make an angle given by $ct/vt$, where $v$ is the speed of the boat.


It is obviously a great deal more complicated than this; however, the answer is also much stranger, $\sin(\theta)=1/3$: see Ship Kelvin Wake at wikiwaves.


I have not yet grasped the level of calculus that this seems to require. All I'm asking is for someone to pick apart any fundamental errors in the derivation or assumptions of my answer.



Answer



The reason that your naive guess of $$ \tan(\theta)=\frac{c}{v} $$ fails is that deep-water waves are dispersive, or, in other words, the speed $c=c(\lambda)$ of the wave depends on the wavelength: longer waves are faster than shorter waves, a fact which is evident if you drop a stone into a sufficiently deep pond.
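The standard deep-water dispersion relation behind this statement gives the phase speed as $c(\lambda)=\sqrt{g\lambda/2\pi}$; a few lines confirm that longer waves really are faster (the wavelengths below are illustrative values):

```python
import math

# Deep-water phase speed c(lambda) = sqrt(g*lambda/(2*pi)).
# The chosen wavelengths are purely illustrative.
g = 9.81
wavelengths = [0.1, 1.0, 10.0, 100.0]                  # metres
speeds = [math.sqrt(g * lam / (2 * math.pi)) for lam in wavelengths]

# longer waves travel faster:
assert all(a < b for a, b in zip(speeds, speeds[1:]))
```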


Now, the source for Kelvin wakes is the boat, which can mostly be considered as a point disturbance, and which certainly does not have a well-defined wavelength associated with it. The way you deal with this is via a Fourier transform (that's the calculus-looking maths in the page you linked to) which is just fancy language for saying



  1. that every point disturbance can be understood as a superposition of regular wavetrains of well-defined wavelength, and

  2. that once we've done that decomposition, we can propagate the individual waves on their own, which is usually much simpler, and then re-assemble them to form the disturbance's wake pattern at any later point.



The reason the Kelvin wake angle is independent of the velocity, over reasonable ranges, is that the boat's disturbance contains many different wavelengths, and the wave propagation picks out the wavelengths that travel at the correct speed to match the boat's motion. Those waves will interfere constructively and will form a set pattern, while on the other hand waves that are longer or shorter will be too fast or too slow, drift out of phase with the waves produced earlier, interfere destructively with themselves, and drop out of the race.


The rest is just math, but the essentials are now there: the calculation of the wake pattern is just a matter of taking all the component waves, moving at different speeds, and figuring out the only possible interference pattern that will be invariant with respect to the boat's motion at all times.


And, moreover, this has physical consequences that can be readily checked: going twice as fast is not going to make the wake angle smaller, but it will make the spatial scale of the pattern increase by a factor of four.
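Both claims can be checked with a quick numerical sketch (the boat speeds are arbitrary): the Kelvin angle $\arcsin(1/3)\approx 19.47°$ contains no reference to $v$, while the dominant wavelength matching the boat speed, $\lambda = 2\pi v^2/g$, scales quadratically with speed.

```python
import math

# The Kelvin angle is a fixed number, independent of the boat speed,
# while the dominant wavelength lambda = 2*pi*v**2/g (the wave whose
# phase speed matches the boat) grows quadratically with speed.
g = 9.81
theta_deg = math.degrees(math.asin(1.0 / 3.0))        # ~19.47 degrees

def dominant_wavelength(v):
    return 2.0 * math.pi * v**2 / g

assert abs(theta_deg - 19.47) < 0.01
# doubling the speed quadruples the spatial scale of the pattern:
assert abs(dominant_wavelength(10.0) / dominant_wavelength(5.0) - 4.0) < 1e-12
```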


electromagnetism - How to interpret the magnetic vector potential?


In electromagnetism, we can re-write the electric field in terms of the electric scalar potential, and the magnetic vector potential. That is:


$E = -\nabla\phi - \frac{\partial A}{\partial t}$, where $A$ is such that $B = \nabla \times A$.


I have an intuitive understanding of $\phi$ as the electric potential, as I am familiar with the formula $F = -\nabla V$, where $V$ is the potential energy. Therefore since $E = F/q$, it is easy to see how $\phi$ can be interpreted as the electric potential, in the electrostatic case.


I also know that $F = \frac{dp}{dt}$, where $p$ is momentum, and thus this leads me to believe that $A$ should be somehow connected to momentum, maybe like a "potential momentum". Is there such an intuitive way to understand what $A$ is physically?



Answer



1) OP wrote (v1):



[...] and thus this leads me to believe that ${\bf A}$ should be somehow connected to momentum, [...].




Yes, in fact the magnetic vector potential ${\bf A}$ (times the electric charge) is the difference between the canonical and the kinetic momentum, cf. e.g. this Phys.SE answer.


2) Another argument is that the scalar electric potential $\phi$ times the charge


$$\tag{1} q\phi$$


does not constitute a Lorentz invariant potential energy. If one recalls the Lorentz transformations for the $\phi$ and ${\bf A}$ potentials, and one goes to a boosted coordinate frame, it is not difficult to deduce the correct Lorentz invariant generalization


$$\tag{2} U ~=~ q(\phi - {\bf v}\cdot {\bf A})$$


that replaces $q\phi$. The caveat of eq. (2) is that $U$ is a velocity-dependent potential, so that the force is not merely (minus) a gradient, but rather takes the form of (minus) an Euler-Lagrange derivative


$$\tag{3}{\bf F}~=~\frac{d}{dt} \frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}}. $$


One may show that eq. (3) reproduces the Lorentz force


$$\tag{4}{\bf F}~=~q({\bf E}+{\bf v}\times {\bf B}), $$


see e.g. Ref. 1.
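A numerical sanity check of eqs. (2)-(4) can be sketched as follows (the field, charge, and velocity values are arbitrary assumptions). Taking $\phi=0$ and $\mathbf A(\mathbf r)=\tfrac12\,\mathbf B\times\mathbf r$, which gives a uniform $\mathbf B$ and $\mathbf E=0$, the Euler-Lagrange force of eq. (3) evaluated by finite differences should reproduce $q\,\mathbf v\times\mathbf B$:

```python
import numpy as np

# Check that the Euler-Lagrange force of U = q*(phi - v.A), eq. (3),
# reproduces the Lorentz force q*(E + v x B), eq. (4), for the static
# example phi = 0, A(r) = 0.5 * B x r (uniform B, vanishing E).
# All numerical values are arbitrary illustrative choices.
q = 1.6
B = np.array([0.3, -0.2, 0.9])     # uniform magnetic field
v = np.array([1.0, 2.0, -0.5])     # particle velocity
r0 = np.array([0.4, -1.1, 2.0])    # evaluation point
h = 1e-6                           # finite-difference step

def A(r):
    return 0.5 * np.cross(B, r)

# dU/dv = -q*A(r), so along the trajectory (d/dt) dU/dv = -q*(v . grad) A
conv_deriv = np.zeros(3)
for j in range(3):
    dr = np.zeros(3); dr[j] = h
    conv_deriv += v[j] * (A(r0 + dr) - A(r0 - dr)) / (2 * h)

# -dU/dr = +q * grad(v . A), gradient taken at fixed v
grad_vA = np.zeros(3)
for j in range(3):
    dr = np.zeros(3); dr[j] = h
    grad_vA[j] = (v @ A(r0 + dr) - v @ A(r0 - dr)) / (2 * h)

F_lagrangian = -q * conv_deriv + q * grad_vA   # eq. (3)
F_lorentz = q * np.cross(v, B)                 # eq. (4) with E = 0

assert np.allclose(F_lagrangian, F_lorentz, atol=1e-5)
```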



References:



  1. Herbert Goldstein, Classical Mechanics, Chapter 1.


pressure - The physics behind Balloons lifting objects?


Apologies for the super basic question, but we all have to start somewhere right?



Can somebody please explain exactly how you would calculate the number of helium balloons it would take to lift an object of mass $m$ here on Earth, the variables I would need to take into account, and any other physics that comes into play?


I think I can roughly calculate it using the method below, but would love somebody to explain how this is right/wrong or anything I have neglected to include. This model is so simple, I am thinking it can't possibly be correct.



  • 1 litre of helium lifts roughly 0.001kg (I think?)


  • Assumption: an inflated balloon is uniform and has a radius $r$ of 0.1m




  • $\frac{4}{3}\pi r^3 = 0.00419$ cubic metres $\approx$ 4 litres capacity per balloon





  • Let's say $m = 1$ kg, therefore $\frac{m/0.001}{4} = 250$ balloons to lift that object?




As you can tell, I haven't touched Physics since high school and would really appreciate any help. It seems like an easy question, but actually is probably more complex than I thought.


Thanks a lot.



Answer



The net upward force is, according to Wiki buoyancy:


$$F_\mathrm{net}=\rho_\mathrm{air}V_\mathrm{disp}g-m_\mathrm{balloon} \cdot g$$

For helium, $m_\mathrm{balloon}=\rho_\mathrm{helium}V_\mathrm{disp} + m_\mathrm{shell}$, thus

$$F_\mathrm{net}=\rho_\mathrm{air}V_\mathrm{disp}g-\left(\rho_\mathrm{helium}V_\mathrm{disp} + m_\mathrm{shell} \right)\cdot g=\left(\rho_\mathrm{air}-\rho_\mathrm{helium}\right)V_\mathrm{disp} \cdot g - m_\mathrm{shell} \cdot g$$

With $V_\mathrm{disp}=N_\mathrm{balloon} V_\mathrm{balloon}$ and $F_\mathrm{net}=m_\mathrm{load} \cdot g$, you're able to calculate the number of balloons necessary.


EDIT: Some more steps on how to actually solve the problem.



To isolate the value of $N_\mathrm{balloon}$, we plug in the volume expression to obtain:


$$F_\mathrm{net}=\left(\rho_\mathrm{air}-\rho_\mathrm{helium}\right)N_\mathrm{balloon} V_\mathrm{balloon} \cdot g - m_\mathrm{shell} \cdot g$$


We can then isolate the value of $N_\mathrm{balloon}$ by adding $m_\mathrm{shell} \cdot g$ on both sides: $$F_\mathrm{net} + m_\mathrm{shell} \cdot g=\left(\rho_\mathrm{air}-\rho_\mathrm{helium}\right)N_\mathrm{balloon} V_\mathrm{balloon} \cdot g $$


And then divide both sides by $\left(\rho_\mathrm{air}-\rho_\mathrm{helium}\right)V_\mathrm{balloon} \cdot g$ to get:


$$\frac{F_\mathrm{net} + m_\mathrm{shell}\cdot g}{\left(\rho_\mathrm{air}-\rho_\mathrm{helium}\right)V_\mathrm{balloon} \cdot g}=N_\mathrm{balloon} $$
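Plugging illustrative numbers into this result (all values are assumptions: sea-level air and helium densities, the 0.1 m radius from the question, and a 1.5 g skin per balloon). The formula above lumps all the skins into one $m_\mathrm{shell}$; for many identical balloons it is easier to count a skin mass per balloon, so each balloon lifts its own skin plus a share of the payload:

```python
import math

# Illustrative numbers (all assumptions): sea-level densities, the 0.1 m
# balloon radius from the question, and a 1.5 g skin per balloon.  Each
# balloon must lift its own skin plus a share of the payload.
rho_air, rho_he = 1.225, 0.1786        # kg/m^3
r, m_skin = 0.1, 0.0015                # balloon radius (m), skin mass (kg)
m_load = 1.0                           # payload (kg)

V = 4.0 / 3.0 * math.pi * r**3         # ~0.00419 m^3, i.e. ~4.2 litres
net_lift = (rho_air - rho_he) * V - m_skin   # kg of payload per balloon
N = math.ceil(m_load / net_lift)       # 347 balloons with these numbers
```

With a zero skin mass this drops to about 229 balloons, which shows how sensitive the count is to the skin assumption.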


particle physics - Double decay $betabeta$ observation and neutrino mass


If the double decay $\beta\beta$ is detected, this means the neutrino is a Majorana particle, coincident with its antiparticle. At the moment the half-life of this decay is put at $\tau\ge10^{25}$ years. The limit on the Majorana neutrino mass is about 0.2–0.4 eV. What is the relation between the probability of $\beta\beta$ decay and the neutrino mass? Thanks.



Answer



Your statement is wrong on one point:


Regular $\beta\beta$ decay happens in nature, even with Dirac neutrinos. However, this shows the continuous electron (or positron) energy distribution one expects from just two (largely independent) $\beta$ decays (see image below).


[image: combined electron energy spectrum of double beta decay]


What's interesting is that, if neutrinos are indeed Majorana particles, a different decay channel opens up: neutrinoless double beta decay, also denoted $\beta\beta 0\nu$. This would in theory lead to a spike at the upper end of the combined electron energy spectrum, since no unobservable particles carry away part of the decay energy. The Feynman diagram looks like this:



[image: Feynman diagram of neutrinoless double beta decay]


The internal fermion line (the one with $\nu_e$ on it) is only allowed for Majorana particles, since they do not carry a well-defined fermion number and thus have no arrows.


Note that the image of the spectrum above greatly exaggerates the expected spike. Double beta decay is rare as it is, and the neutrinoless version is suppressed w.r.t. the $2\nu$ version. What the experiments looking for $\beta\beta 0\nu$ expect are single-digit event numbers after years of observation.


Now, to your actual question: the smaller the neutrino Majorana mass, the less likely neutrinoless double beta decay is. Therefore, in the limit of zero Majorana mass (thus only a Dirac mass), we have no $\beta\beta 0\nu$. Note that this is for left-handed neutrinos, i.e. only the neutrinos that are part of the $SU(2)_L$ doublet. Right-handed Majorana masses may be arbitrarily large without influencing $\beta \beta 0 \nu$.


Still, if the small neutrino masses originate from a seesaw mechanism, the induced Majorana mass qualifies for $\beta \beta 0 \nu$.
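Quantitatively, the $\beta\beta 0\nu$ rate scales as the square of the effective Majorana mass, $1/T_{1/2}\propto |m_{\beta\beta}|^2$, with phase-space and nuclear-matrix-element factors absorbed into a constant. A sketch of what that scaling implies, anchored (purely for illustration) to the limits quoted in the question:

```python
# The neutrinoless rate scales as the square of the effective Majorana
# mass: 1/T_half is proportional to |m_bb|**2, with phase-space and
# nuclear-matrix-element factors absorbed into one constant.  The anchor
# point below is illustrative, taken from the limits in the question.
T_ref, m_ref = 1e25, 0.4          # years, eV

def t_half(m_bb):
    """Half-life implied by the quadratic scaling, relative to the anchor."""
    return T_ref * (m_ref / m_bb) ** 2

# halving the effective mass quadruples the half-life
assert abs(t_half(0.2) / t_half(0.4) - 4.0) < 1e-12
```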


homework and exercises - Calculating impact force for a falling object?



Good evening. I'm trying to calculate what kind of impact force a falling object would have once it hits something. This is my attempt so far:


Because $x= \frac{1}{2} at^2$, $t=\sqrt{2x/a}$
$v=at$, therefore $v=a \sqrt{2x/a}$
$E_k=\frac{1}{2} mv^2$, so $E_k=\frac{1}{2} m(2ax)=m \cdot a \cdot x$
Since $W=E_k=F_i s$, $F_i=E_k/s=(m \cdot a \cdot x)/s$


For an object weighing about as much as an apple, $0.182$ kg, falling $2.00$ m straight down and creating a dent of $0.00500$ m, this would result in:


$$F_i=(m \cdot a \cdot x)/s$$


$$F_i=(0.182 \cdot 9.70 \cdot 2.00)/0.00500=706 \, \text{N}$$


Does this make any sense? I wouldn't be surprised at all to find out I'm missing something here.


Any input would be appreciated,



thanks in advance!



Answer



If your apple falls $2m$ its velocity is calculated using the equation you give:


$$ v^2 = 2as $$


and you get $v^2 = 39.24 \space m^2s^{-2}$ (I haven't taken the square root for reasons that will become obvious). You know the apple is slowed to rest in $0.005m$, so you just need to work out what acceleration is needed when $v^2 = 39.24$ and $s = 0.005$. A quick rearrangement of your equation gives:


$$ a = \frac{v^2}{2s} $$


and plugging in $v^2 = 39.24$ and $s = 0.005$ gives $a = 3924 \space ms^{-2}$. To get the force just use Newton's equation:


$$ F = ma $$


where $m$ is the mass of the apple, $0.18 kg$, and you get $F = 706.32N$. So you got the correct answer (my answer differs from yours only because I used $g = 9.81 \space ms^{-2}$).


To get a more general result substitute for $v^2$ in the second equation to get:



$$ F = ma = m\frac{2gs_1}{2s_2} = mg\frac{s_1}{s_2}$$


where $s_1$ is the distance the apple falls and $s_2$ is the distance it takes to stop.
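The general result applied back to the numbers in the question (using this answer's rounded mass of 0.18 kg):

```python
# F = m*g*(s1/s2), with the numbers from the question above and the
# answer's rounded apple mass of 0.18 kg.
m, g = 0.18, 9.81          # kg, m/s^2
s1, s2 = 2.00, 0.005       # fall distance and stopping distance (m)

F = m * g * (s1 / s2)
print(F)                    # ~706.32 N, matching the result above
```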


special relativity - If I move a long solid stick can I send a message faster than light?




Possible Duplicate:
Is it possible for information to be transmitted faster than light?




I mean, by using a perfect solid stick long enough and moving it forward and backward, can I send information faster than light? Imagine a solid stick long enough to reach the Moon, used to communicate with a lunar base. Would the information travel faster than light? What are the theoretical reasons, other than technical ones, for ruling this out as impossible? Should we rule out that a perfectly inelastic solid exists?



Answer



You give the answer yourself: special relativity forbids any perfectly rigid solid, or more quantitatively, gives a bound on the elasticity a solid can have ($Y<\rho c^2$). If you have a real solid, with nonzero elasticity, you can compute the speed of sound within this solid as a function of the elasticity/stiffness (see e.g. Wikipedia for the formula). If you move an end of your big stick faster than this speed of sound, it will compress the stick, and this deformation will take time to propagate to the other end. This question is one of the many non-working ways of making faster-than-light communications. You can find many of them debunked here.


To go into "technical reasons": if your stick is made of atoms, since the atoms see each other through the electromagnetic interaction, there is no way the motion of a bunch of atoms of your stick propagates to the other atoms faster than the speed of the electromagnetic force. Of course, this is a technical reason, which is not valid if your "stick" is made of an exotic material where other forces play a key role (for example neutron-star matter, where nuclear forces are important), but in this case the violation of relativity would come from the forces themselves, which would then allow you to build a (too) stiff material.
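To put numbers on this (textbook values for steel, which are assumptions as far as this question goes): the longitudinal sound speed $v=\sqrt{Y/\rho}$ in a steel rod is about 5 km/s, so a push would take roughly 21 hours to reach the Moon, versus about 1.3 s for light, and steel sits far below the relativistic stiffness bound $Y<\rho c^2$:

```python
import math

# Textbook steel values (assumptions for illustration) and the mean
# Earth-Moon distance.  The "message" travels at the rod's speed of
# sound, far below c.
Y, rho = 200e9, 7850.0          # Young's modulus (Pa), density (kg/m^3)
d_moon = 3.84e8                 # m
c_light = 3.0e8                 # m/s

v_sound = math.sqrt(Y / rho)    # ~5.0e3 m/s
t_rod = d_moon / v_sound        # ~7.6e4 s, roughly 21 hours
t_light = d_moon / c_light      # ~1.3 s

assert Y < rho * c_light**2     # the stiffness bound quoted above
assert v_sound < c_light        # the push is far slower than light
```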


electromagnetism - Proof that four-potential is a four-vector


My teacher proposed this "simple" proof that the 4-potential is a 4-vector, which I am very skeptical about.


Since under a gauge transformation the 4-potential transforms as $$ A^\mu \mapsto A^\mu + \partial^\mu\lambda, $$ $\lambda$ being a scalar function, it follows that $A^\mu$ must transform as a 4-vector under Lorentz transformations, since $\partial^\mu$ is one.


Is he right? What am I missing? I asked him for clarification but didn't get any more information other than this.




Sunday 29 April 2018

How can the Lagrangian of a classical system be derived from basic assumptions?


It is well known that the Lagrangian of a classical free particle equals its kinetic energy. This statement can be derived from some basic assumptions about the symmetries of space-time. Is there any similar reasoning (e.g. symmetry-based or geometrical) for why the Lagrangian of a classical system is equal to the kinetic energy minus the potential energy? Or is it just because we can compare Newton's equations with the Euler-Lagrange equation and see how they can be made to match?




Saturday 28 April 2018

quantum mechanics - Schrödinger equation and non-Hermitian Hamiltonians


Is the Schrödinger equation still valid if we use a non-Hermitian Hamiltonian with it? By this I mean does:


$$\hat{H}\psi(t) = i\hbar\frac{\partial}{\partial t}\psi(t)$$


hold if $\hat{H}$ is not Hermitian?



Answer



If you have some arbitrary linear operator $\hat A$, there's nothing stopping you from formulating the differential equation $$ i\partial_t \Psi = \hat A \Psi, $$ but you also have no guarantee that its solutions will play nicely or even exist.


In the simplest case, you can take $\hat A=-ia\mathbb I$, and your Schrödinger equation reads $\partial_t \Psi = -a\Psi$, giving you exponential decays of the form $\Psi(t) = e^{-at}\Psi(0)$. If you have a positive real part, as in e.g. $\hat A=+ia\mathbb I$, then you'll have an exponential growth as $\Psi(t) = e^{+at}\Psi(0)$, which isn't terrible. You can also have mixtures between these, such as e.g. a two-by-two matrix $$ A=\begin{pmatrix} ia&0\\0&ib\end{pmatrix}, $$ and you'll get different decay constants for the different coordinates. From here it's easy to extend to arbitrary finite complex matrices, where the solutions will obviously be a bit more complex. However, you need to be careful, because if you break the premise of hermiticity you also lose the guarantee that your operator will be diagonalizable, such as a Jordan block of the form $$ A=\begin{pmatrix} ia&1\\0&ia\end{pmatrix}, $$ which cannot be reduced further; as such, you probably want to demand that your operator be normal, or some similar guarantee of niceness.
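A minimal numerical illustration of the diagonal case (the decay rates are arbitrary values; I use the sign convention $\hat A=\mathrm{diag}(-ia,-ib)$ that gives decay): evolving $i\partial_t \Psi = \hat A \Psi$ gives component-wise exponential decay, and the norm of $\Psi$ is not conserved, which is the hallmark of a non-Hermitian "Hamiltonian".

```python
import numpy as np

# Evolve i dPsi/dt = A Psi for a diagonal non-Hermitian A = diag(-i*a, -i*b)
# (sign convention chosen to give decay; the rates are arbitrary).
a, b = 0.5, 0.2
A = np.diag([-1j * a, -1j * b])

def evolve(psi0, t):
    # for a diagonal A, the propagator exp(-i*A*t) acts element-wise
    return np.exp(-1j * np.diagonal(A) * t) * psi0

psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
psi1 = evolve(psi0, 1.0)

# component-wise decay, |psi_k(t)| = exp(-rate_k * t) * |psi_k(0)| ...
assert np.isclose(abs(psi1[0]), np.exp(-a) * abs(psi0[0]))
# ... so the norm shrinks: time evolution is not unitary here
assert np.linalg.norm(psi1) < np.linalg.norm(psi0)
```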



You should be doubly wary, of course, in infinite dimensions, particularly if there is the chance that $\hat A$ will have a spectrum whose imaginary part is unbounded from above. A simple example of that type is $$ i\partial_t \Psi(x,t) = ix\Psi(x,t), $$ which is just about solvable as $\Psi(x,t) = e^{xt}\Psi(x,0)$, but here if your initial condition is at large positive $x$ the rate of growth becomes unbounded. From there, it isn't hard to envision the possibility that with slightly more pathological operators you could completely lose the existence of the solutions.




That said, non-hermitian hamiltonians are used with some frequency in the literature, particularly if you're dealing with resonances in a continuum or decaying states. The book referenced here might be a good starting point if you want to read about those kinds of methods. At a more gritty level, you can ask Google Scholar and it will yield ~20k results, with many of those related to something called PT-symmetric quantum mechanics, but that's probably rather more information than what you're after at the moment.


special relativity - If two observers don't agree about the distance traveled and the time it takes, why do they agree about relative speed?


The Lorentz factor depends on speed, but to measure the speed we need to know the distance traveled and the time it takes, to form the ratio. But according to special relativity, for two equivalent observers moving relative to each other, each will see the other experiencing time dilation and Lorentz contraction. So a distance measured by one observer becomes smaller when seen from the other observer, by the Lorentz factor. And also a time measured by one observer becomes longer when seen from the other observer. So a speed measured by one observer must be multiplied by the square of the Lorentz factor when calculated by the other observer.


Update: With their average lifetime and speed, muons can reach a distance of only 0.66 km, but when using the time dilation formula they can reach a distance of 10 km; their average life span increases according to an observer on Earth. But from the muon's frame it is not its time that is dilated, but the distance to Earth that becomes shorter. So within their average life span, they can cover a shorter distance compared to the distance measured by an observer on Earth. So in this case, proper time is measured in the muon frame but proper distance is measured from the Earth frame. But in the case of measuring speed as I described above, both the distance and the time are proper according to one frame, and we need to calculate the same distance and time as dilated and contracted by the Lorentz factor from the other reference frame. So it becomes multiplied by the square of the Lorentz factor. (Note: this is verified by experiment.)
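The update's numbers can be checked with a short script (a sketch: the 2.2 μs mean lifetime is the standard muon value, while the speed 0.998c is an assumed typical cosmic-ray value, chosen because it reproduces the quoted 0.66 km and ~10 km figures):

```python
import math

# The 2.2 us mean lifetime is standard; the speed 0.998c is an assumed
# typical cosmic-ray value that reproduces the distances quoted above.
c = 2.998e8                                   # m/s
v = 0.998 * c
tau = 2.2e-6                                  # s, muon mean lifetime

naive_range = v * tau                         # ~0.66 km without relativity
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # ~15.8
dilated_range = gamma * naive_range           # ~10.4 km in the Earth frame

# Muon-frame view: the ~10.4 km is length-contracted back to ~0.66 km,
# so both frames agree on whether the muon arrives.
contracted = dilated_range / gamma
assert abs(contracted - naive_range) < 1e-9
```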




quantum gravity - Is anyone studying how the general topology of spacetime arises from more fundamental notions?


Stephen Wolfram in his book A New Kind of Science touches on a model of space itself based on automata theory. That is, he makes some suggestions about modelling not only the behaviour of matter through space, but the space itself in terms of state machines (a notion from computing). Here, the general topology of space arises from a small-scale connection lattice.


I wondered whether any theoretical work is being undertaken along these lines within the physics community.


The reason for my interest in this concerns one of the mysteries of quantum mechanics: quantum entanglement and action at a distance. I wondered whether, if space is imagined as having a topology that arises from a notion of neighbourhood at a fine level, quantum entanglement might be the result of a 'short circuit' in the connection lattice. That is, two points at a distance through 'normal' space might still be neighbours at a fundamental level; there might be a short strand of connectivity in addition to all the long strands relating the two.


(I think Richard Feynman also alluded to this sort of model with his take on quantum electro-dynamics.)



Answer




Space_cadet already mentioned work about deriving spacetime as a smooth Lorentzian manifold from more "fundamental" concepts; there are a lot of others, like causal sets. But the motivation for the question was:



The reason for my interest in this concerns one of the mysteries of quantum mechanics: quantum entanglement and action at a distance. I wondered whether, if space is imagined as having a topology that arises from a notion of neighbourhood at a fine level, quantum entanglement might be the result of a 'short circuit' in the connection lattice.



I'm not convinced that such an explanation is possible or warranted. The reason for this is the Reeh-Schlieder theorem from quantum field theory (I write "not convinced" because some subjectivity is allowed here: the following paragraph describes an aspect of axiomatic quantum field theory which may become obsolete in the future with the development of a more complete theory):


It describes "action at a distance" in a mathematically precise way. According to the Reeh-Schlieder theorem, there are correlations in the vacuum state between measurements at an arbitrary distance. The point is: the proof of the Reeh-Schlieder theorem is independent of any axiom describing causality, showing that quantum entanglement effects do not violate Einstein causality and do not depend on the precise notion of causality. Therefore a change in spacetime topology in order to explain quantum entanglement effects won't work.


Discussions of the notion of quantum entanglement often conflate entanglement as "an action at a distance" with Einstein causality; these are two different things, and the first does not violate the second.


Friday 27 April 2018

quantum field theory - Virtual particles and physical laws


Recently, I was reading about Hawking radiation in A Brief History of Time. It says that at no point can all the fields be zero, and so there's nothing like empty space (quantum fluctuations, etc.). Now, the reason mentioned was that virtual (force-carrier) particles cannot have both a precise rate of change and a precise position (uncertainty principle).


So, my question is: this video says that virtual particles don't follow normal physical laws. So, how can we say that they obey the uncertainty principle?




astronomy - What would happen in the final days of the universe?


I would like to know the stages of how the universe would end and what would happen and what the possible scenarios are.


I understand that eventually all the stars would burn out, and that would introduce a big freeze that would make it harder to sustain life. Forgive me if this is incorrect, as I do not have any expertise in this area; I would appreciate any information on this.




quantum mechanics - With redshift, energy is lost. Where does it go?



A photon emitted by a distant source billions of light years away arrives here with a lower frequency hence less energy than it started with. What happened to the energy?




newtonian mechanics - Pulley system on a frictionless cart


Let's say you have a pulley set up as below on a cart, with a massless pulley and string. The mass hanging off the side is attached via a rail, and all surfaces & pulley are frictionless except between the tires and the ground (to allow for rolling, of course).


Furthermore, the mass of the weight hanging off the side is greater than that resting on top of the cart.



When the system is released from rest, will the cart begin to move or not?


[image: pulley system on a frictionless cart]


I would think not, because there is no force that could cause the cart to move -- since all surfaces are frictionless, it is as if the pulley and the cart are two separate entities.


However, what about the tension in the string and thus in the pulley connected to the cart? Does that not exert a lateral force capable of accelerating the cart?


Additionally, how does conservation of momentum play into this?


Since the system is initially at rest, the sum of the momentum vectors of each of the objects must add to zero at any point in time, right? So if the block on top of the cart is accelerating due to the tension in the rope due the force of gravity acting on the hanging block, does that mean the cart must move in the opposite direction so momentum is preserved?


Finally, where do non-inertial reference frames play a part in this? Since the cart is (potentially) accelerating taking the reference frame of the cart will lead to the introduction of "fictitious" forces. Is there any (possibly simpler) way to determine what will happen to the cart from this (non-inertial) reference frame?



Answer



Yes, the cart will move, due to the force applied by the string to the pulley.


To solve, calculate the string tension while the weights are moving, and then note that the pulley has to provide an opposing force in order to change the string's direction. The reaction to that force acts upon the cart, accelerating it.



Momentum is conserved, because the resting weight is accelerated to the right, while the cart is accelerated to the left.


Calculating the actual numbers will be entertaining, as you must include the cart's acceleration while calculating the string tension. I'm guessing that introducing a new, accelerating reference frame won't be helpful, as you won't know the magnitude of the acceleration until the problem has been solved.


Thursday 26 April 2018

astronomy - Are the inner planets on planar orbits because there was more dust in the inner solar system (early on in planetary accretion)?



Question inspired by a question thread here.


So when there's lots of dust in a galaxy, the galaxy tends to collapse into a spiral galaxy (to maintain angular momentum and to minimize gravitational potential energy). Is this the same thing that happens in the inner regions of the solar system? The outer regions have less dust, so the orbits of minor planets "out there" tend to be more elliptical.


And could this perhaps mean that the orbits of planets tend to be more coplanar around stars of higher metallicity?



Answer



Precisely: angular momentum is very difficult to radiate efficiently, while energy is very easy. The net result of minimizing energy while mostly maintaining angular momentum is inevitably a disc. I doubt there will be much of a metallicity effect, since the overall flattening is so pronounced.


I expect elliptical galaxies have not become planar because they don't radiate well. The spiral density wave pattern of a spiral galaxy probably "stirs" them very efficiently, so the bulk kinetic energy of stars gets dissipated well. Likewise, I think the Kuiper Belt is less coplanar, and the Oort Cloud even less than that, because of the lack of perturbations. They are relatively dynamically frozen, in this sense as well as in the usual one.


On the subject of different solar systems, I would expect tidal disturbances from close passes with neighboring stars to be the most dominant effect in determining how closely planets' orbital planes coincide. So... "urban" star areas would have more close passes than "rural" ones, and also more metal pollution. Ergo, if anything I would expect systems with higher metals to be less coplanar.


Caveat: These kinds of dynamics are not my specialty. I am less confident about these speculations than about most of my typical answers.


metric tensor - Causality and how it fits in with relativity


I was talking to my teacher the other day about Einstein's spacetime and there's one thing he couldn't explain about the nature of Cause. I may be being stupid or just unable to comprehend, thanks for any replies.


According to Einstein and relativity, two observers will agree on what things happened but not necessarily on the chronological order in which they happened. I understand how this radically alters our view of time into something that isn't the same for, or experienced equally by, everyone. What I don't understand is how this fits in (or doesn't) with cause. If event A causes event B, but we're saying that two people could experience them in a different order, how can event B happen before event A, which caused it?



I'm intrigued?



Answer



It's a bit more complicated than that. Given any two events, there is a quantity, called the interval (also 'spacetime interval' or 'invariant interval'), denoted $\Delta s^2$, and which equals $\Delta s^2=c^2\Delta t^2-\Delta \mathbf r^2$, which determines how the two events can relate to each other causally.




  • If $\Delta s^2>0$, then we say $A$ and $B$ are "timelike separated". In this case all observers will agree that (say) $A$ happened before $B$, and $A$ can causally influence $B$.




  • If $\Delta s^2<0$, then we say $A$ and $B$ are "spacelike separated". In this case $A$ and $B$ are causally disconnected, and neither can influence the other. Different observers will disagree on their temporal order, and in fact you can always find observers for whom $A$ happened before $B$, $A$ happened after $B$, and $A$ happened at the same time as $B$.





  • Finally, if $\Delta s^2=0$, then we say that $A$ and $B$ are "lightlike separated", or that the interval between them is "null". As far as causal order goes, this is like the timelike case: all observers will agree that (say) $A$ happened before $B$, and $A$ can causally influence $B$; moreover, a light ray emitted at $A$ in the direction of $B$ will reach that position at the exact instant that $B$ is happening, and it will do so in all frames of reference.


    The set of all events $B$ which are at lightlike separations from $A$ is called the light cone of $A$, and it separates spacetime into three regions: the interior, with timelike separations, itself split into the causal future and the causal past of $A$, and the exterior, with spacelike separations, which contains all the events that are causally disconnected from $A$, each of which is simultaneous with it in some frame of reference.




Thus, as you succinctly put it,



if $A$ and $B$ are linked (one causes the other), then they have to be timelike [or lightlike] separated and all observers will agree on their temporal order.
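To make the three-way classification concrete, here is a small Python sketch of my own (not part of the original answer) that sorts a pair of events into the three cases from their coordinate differences:

```python
# Classify the causal relation between two events from their
# time separation dt (seconds) and spatial separation dr (metres).
# My own illustration; not from the quoted answer.

C = 299_792_458.0  # speed of light, m/s

def interval(dt, dr, c=C):
    """Invariant interval: Delta s^2 = c^2 dt^2 - dr^2."""
    return (c * dt) ** 2 - dr ** 2

def separation(dt, dr, c=C):
    ds2 = interval(dt, dr, c)
    if ds2 > 0:
        return "timelike"   # all observers agree on the order; causal influence possible
    if ds2 < 0:
        return "spacelike"  # causally disconnected; temporal order is frame-dependent
    return "lightlike"      # connected exactly by a light ray

# Two seconds apart in time, one light-second apart in space:
print(separation(2.0, C * 1.0))  # timelike
```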



quantum mechanics - How do electrons exit LEDs? Aren't they in the valence energy state?


If a conduction electron drops to the valence band in an LED, where does it get the energy to go back to the conduction band upon leaving the diode so that current can flow? I'm confused as to how current can flow completely through an LED if all the electrons have dropped to the valence state, meaning they are no longer conducting electricity. Do the electrons return to the conduction-band energy state?




Relation of General Relativity to Dark Matter and Dark Energy


I was reading an elementary book on dark matter (in fact, a historical perspective) which mentioned how the scientific community reacted to the idea of dark matter, proposed as a solution to the observed discrepancy between the actual mass of astronomical systems and the mass predicted by Newton's theory. I was wondering where Einstein's theory stands in relation to dark matter: did it somehow predict it, or does dark matter prove the incompleteness of Einstein's theory? And what about dark energy?



Answer




I was wondering where Einstein's theory stands in relation to dark matter: did it somehow predict it, or does dark matter prove the incompleteness of Einstein's theory? And what about dark energy?



First, good question. It's a very interesting bit of history. Photon's answer touches more on today's science, I'll go more into the history. Basically, the short answer is no. The longer answer is, not really, but a small relation, perhaps.


Einstein didn't predict dark matter; neither he nor his theories even pointed in that direction. Einstein's general relativity did imply the existence of black holes, though, which could at one time have been considered part of dark matter. More on that later.


It's worth mentioning that when Einstein published his two relativity theories (special, 1905; general, 1915), we didn't even know there were other galaxies. The discovery of other galaxies came in 1923, and it's not hard to see why, once other galaxies were discovered, they would be of significant interest. Hubble published the redshift-distance relation in 1929, Oort made his discovery in 1932, and Zwicky his in 1933 (more on those below).


There were two key pieces to the missing galaxy mass puzzle.



First was that the outer stars orbit faster than expected, called the galaxy rotation problem, discovered in 1932 by Jan Hendrik Oort. I remember reading about this in the science press as late as the 1980s; it remained an unanswered puzzle for decades, even as many scientists looked into it with better and better telescopes after Oort.


Second, just 1 year later, Fritz Zwicky calculated the expected mass of a nearby system and found the apparent mass was much too low to explain the orbital speeds (as you mentioned in your question). Zwicky's calculation was that the apparent mass was 400 times too low, which is, needless to say, an enormous discrepancy. It's not hard to imagine some extra mass that might not be seen: stars that have used up all their fuel and burned out, large planets, clouds of dust, meteors, asteroids, etc. But it's very difficult to explain away 400 times too little mass.


Both of those observations are equally unexplained using Einstein's gravity or Newton's: there's very little difference between the two for most orbital calculations. The real differences only start to come in with very strong gravity and enormously fast orbits; for these galaxy-mass calculations, there's essentially no difference.


Now, it's not hard to imagine some hugely massive objects inside a galaxy, and super-massive black holes were indeed discovered later, but that only helps with Zwicky's observation, not Oort's. For Oort's discovery to work, you need lots of mass outside the visible galaxy, and that was an enormous puzzle that Einstein's relativity didn't help with one bit.


Now, the loose tie-in, was that some of the mass in galaxies that we can't see is black holes, which Einstein's general relativity predicted though Einstein personally didn't believe in them. But that's a small tie-in.


Dark Energy is something else and probably belongs in another question. If you want to tie Dark Energy to Einstein's cosmological constant, that's doable, but I don't put much stock in that tie-in myself. I think Einstein was fudging his physics to try to make something work, and the fact that his guess resembled what was later observed was more dumb luck than good theory. Einstein's two relativity theories were absolutely brilliant when he published them, probably the greatest work of the 20th century, but I don't consider his cosmological constant good science, in my humble opinion.


Dark Energy also has absolutely nothing to do with Dark Matter, except 4 letters. :-) They're completely different.


cosmology - CMB - Excess Energy?



Today in physics we were looking at how the energy of a photon is the product of Planck's constant and the frequency of the photon, therefore the lower the frequency, the lower the energy of the photon.


The Cosmic Microwave Background is, as I understand it, radiation emitted in the big bang whose wavelength has been increased by a literal stretching of space over time; being electromagnetic waves (photons), this of course means its frequency has decreased.


So, if over time an individual photon's frequency has decreased, so has its energy... Where has the excess energy gone?





electromagnetism - Fields cancellation and energy accounting


When fields cancel in superposition, where does the energy go? Do they revert to their potential form? Please give an example.





Wednesday 25 April 2018

momentum - Velocity of Rocket Exhaust


I recently learned a bit about rocket propulsion. It wasn't very complex and was explained in simple terms. The only problem I had understanding it was that, in calculating the thrust of the rocket, the velocity of the exhaust was taken relative to the rocket. My problem is: shouldn't the velocity of the exhaust be taken relative to the earth? In all the previous examples we had done so, so why do we here use the velocity relative to the rocket? Thanks.


PS: A simple explanation would be much appreciated.




homework and exercises - Uhlmann's Theorem: proof of $\text{tr}(A^{\dagger} B) = \langle m | A \otimes B |m\rangle$



In p228, Chapter 9 of Mark Wilde's text , in the course of proving Uhlmann's theorem for quantum fidelity, it claims $$\sum_{i,j} \langle i|^R \langle i|^A (U^R \otimes (\sqrt{\rho}\sqrt{\sigma})^A) |j\rangle^R |j\rangle^A $$ $$=\sum_{i,j} \langle i|^R \langle i|^A (I^R \otimes (\sqrt{\rho}\sqrt{\sigma}U^T)^A) |j\rangle^R |j\rangle^A $$ which are equations (9.97) and (9.98) in the aforementioned text.


Meanwhile, in Nielsen & Chuang's text, exercise 9.16 requires to prove that $$\text{tr}(A^{\dagger} B) = \langle m | A \otimes B |m\rangle $$ for $|m\rangle = \sum_{i} |i\rangle|i\rangle $ where $\{ |i\rangle \}$ is an orthonormal basis on some Hilbert space and A and B are operators on that space.


Each identity above is crucial to the proof of Uhlmann's theorem in its respective textbook, but I have no idea why they hold. $\text{tr}(A^{\dagger} B) = \sum_{i,j} {a_{ij}}^{*}b_{ij}$, whereas $\langle m | A \otimes B |m\rangle = \sum_{i,j} {a_{ij}} b_{ij}$, so why are they equal? Could anybody give me a hint?
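As a sanity check on the index computation (a NumPy sketch of my own, not from either textbook), one can evaluate $\langle m | A \otimes B |m\rangle$ numerically for random matrices: it agrees with $\sum_{i,j} a_{ij} b_{ij} = \text{tr}(A^{T} B)$, and matches $\text{tr}(A^{\dagger} B)$ only when $A$ is real:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# Unnormalized |m> = sum_i |i>|i>.  With the kron convention
# |i>|j> -> index i*d + j, this is the flattened identity matrix.
m = np.eye(d).reshape(-1)

A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

lhs = m @ np.kron(A, B) @ m  # <m| A (x) B |m>

print(np.allclose(lhs, (A * B).sum()))             # True: sum_ij a_ij b_ij
print(np.allclose(lhs, np.trace(A.T @ B)))         # True: transpose, not dagger
print(np.allclose(lhs, np.trace(A.conj().T @ B)))  # generally False for complex A
```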





special relativity - The real meaning of time dilation


Is this true or false: If A and B have clocks and are traveling at relative velocity to each other, then to B it APPEARS that A's clock moving slower, but A sees his own clock moving at normal speed. Similarly, to A it APPEARS that B's clock is moving slower, but B sees his own clock moving at normal speed.


If the above is true, then both A seeing his own clock moving at normal speed and B seeing his own clock moving at normal speed means that in reality both clocks are moving at normal speed, and neither has slowed down, whereas the other person's clock APPEARING to move slowly is merely an illusion.


Now is this true or false: If A and B have clocks and are traveling at relative velocity to each other, then A sees his own clock move slowly (compared to the speed of the clock when A was at rest with respect to B) and B sees his own clock move slowly, so that in reality both clocks are moving slowly, but they still remain synchronized (since both are slow by the same amount)


If the above statement is not true, then why do muons decay slowly when moving fast? [it could only be possible if the muon saw its own clock as moving slowly. If we saw the muon's clock moving slowly, but the muon saw its own clock moving at the normal rate, then the muon would decay at the normal rate, and not slowly]



Can anyone please explain where I went wrong?




Tuesday 24 April 2018

heat - Why does wet skin sunburn faster?


There is a popular belief that wet skin burns or tans faster. However, I've never heard a believable explanation of why this happens.



The best explanation I've heard is that the water droplets on the skin act as a lens, focusing the sunlight onto your skin. I don't see how this would affect an overall burn, because the amount of sunlight reaching the skin is the same (ignoring reflection).


Is this 'fact' true, and if so, what causes it?



Answer



I don't know of any research to find out if skin sunburns faster when wet, though someone did a comparable experiment to find out if plants can be burnt by sunlight focussed through drops of water after the plants have been watered.


You need to be clear what is being measured here. The total amount of sunlight hitting you, and a plant, is unaffected by whether you're wet or not. The question is whether water droplets can focus the sunlight onto intense patches causing small local burns.


The answer is that under most circumstances water droplets do not cause burning because unless the contact angle is very high they do not focus the sunlight onto the skin. Burning (of the plants) could happen if the droplets were held above the leaf surface by hairs, or when the water droplets were replaced by glass spheres (with an effective contact angle of 180º).


My observation of water droplets on my own skin is that the contact angles are less than 90º, so from the plant experiments these droplets would not cause local burning. The answer to your question is (probably) that wet skin does not burn faster. I would agree with Will that the cooling effect of water on the skin may make you unaware that you're being burnt, and this may lead to the common belief that wet skin accelerates burning.


spacetime - Special Relativity - Events "Coincide" is NOT a relative concept, Why?


Consider 1-D space. Let S and S' be two inertial reference frames. Let A and B be two events.


Co-ordinates of A and B under S are $A = (x_A, t_A)$ and $B = (x_B, t_B)$.


When we say events coincide, it simply means they have the same space-time co-ordinates.


i.e. if $(x_A = x_B)$ and $(t_A = t_B)$, then w.r.t. S, events A and B coincide.


Let me state a theorem: if A and B coincide in S, then they will also coincide in S' (hence in each and every IRF); i.e., two events being coincident is NOT a relative concept.


Q1 - Why is this theorem true? Is there a deeper assumption and understanding regarding space-time behind this concept? (I'm not looking for an answer based on the Lorentz transformation, but for a more physical / more basic argument.) Or is this just an assumption of Special Relativity?



Q2 - If 2 balls A and B collide, they will collide in every IRF. How can I derive this from the above theorem? i.e. how can I "precisely" express the collision of 2 balls as two events which coincide?


(I'm asking the above question to better understand space-time, events etc at a little conceptual level and I'm having difficulty in understanding them, Thanks for your help)



Answer



It's a lot simpler than you think. Suppose an event has coordinates $x$ in some reference frame, where $x$ contains both space and time coordinates within it. To get the coordinates $x'$ of the same event in some other reference frame, you apply some function, $$x' \equiv f(x).$$ This works in nonrelativistic physics, special relativity, and even general relativity. In special relativity the function is called a Lorentz transformation. The key (essentially only) assumption here is that the location of an event in spacetime is completely specified by its coordinates.


If two events $A$ and $B$ coincide, their coordinates are the same, $x_A = x_B$. You want a proof of the "theorem" that in any other reference frame, $x_A' = x_B'$. Now hold on to your seat, because this profound result can be beautifully proven in airtight, perfectly rigorous formal mathematics as: $$x_A' = f(x_A) = f(x_B) = x_B'.$$ That's it.
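As a toy illustration of my own (units with $c=1$), taking $f$ to be a concrete Lorentz boost shows the "theorem" in action: equal coordinates in, equal coordinates out.

```python
import math

def boost(event, v):
    """Lorentz boost of event = (t, x) along x with speed v, in units where c = 1."""
    t, x = event
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return (gamma * (t - v * x), gamma * (x - v * t))

A = (2.0, 3.0)
B = (2.0, 3.0)  # coincides with A in frame S

# The coincidence survives in any boosted frame S':
print(boost(A, 0.6) == boost(B, 0.6))  # True
```

The same holds for any coordinate map, which is the whole content of the proof above.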


double slit experiment - Delayed Choice Quantum Eraser




All delayed choice quantum eraser experiments I've seen record the signal photons reaching detector D0 and then use the data of the idler photons recorded at detectors D1, D2, D3, D4 to "filter out" the photons with which-path information at detector D0 to be able to see the interference pattern.


Now, what would one see at detector D0 if the experiment were performed every second but modified to send the entangled idler photons to the moon? It takes more than a second for the idler photons to reach the moon, which is enough time for the photon stream to form a COMPLETE interference pattern (or not) at detector D0. At the moon, instead of using 4 detectors of which 2 detect which-path information and 2 don't, a device with only 2 detectors would alternate randomly every second between recording and not recording the which-path information of the incoming entangled idler photons.


Not recording the which-path information at the moon should produce an interference pattern on earth. So would the observer looking at detector D0 see the pattern randomly change from interference to non-interference every second? If yes, can he thus predict 1 second into the future what the device at the moon will do?




Monday 23 April 2018

classical mechanics - Why are there only 3 Additive Integrals of Motion?


1. I was reading Landau & Lifschitz's book on Mechanics, and came across this sentence on p.19:


"There are no other additive integrals of the motion. Thus every closed system has seven such integrals: energy, three components of momentum, and three components of angular momentum".


However, no proof is given for this statement. Why is it true?



2. I find the statement somewhat counter-intuitive; it says at the beginning of the second chapter that for any mechanical system with $s$ degrees of freedom, there are at most $2s-1$ integrals of motion.


But the above statement would seem to imply that a system with three degrees of freedom has at least $2s+1$ integrals of motion. Why is this not a contradiction?


3. Finally, these integrals of motion correspond neatly to homogeneity of time (energy), homogeneity of space (momentum), and isotropy of space (angular momentum).


From this perspective it also makes sense why energy is "one-dimensional", since there is only one time dimension, and why momentum and angular momentum are "three-dimensional", since space has three dimensions.


However, why do the only additive integrals of motion correspond to these properties? What is special about them which guarantees that they have additive integrals of motion and that no other property can?


Even if you don't know the answer to all of these questions, I would really appreciate any help or insight you could give me. I was really enjoying this book until I thought of this question, and now I am hopelessly confused. Thank you very much for your time, and please enjoy the rest of your week!



Answer



OK, as per your request…. My sense is that you want to learn everything about integrability from this one passage, and are combining separate issues, which confuses them…. How about you supplement L&L with Arnold's book?





  1. The seven additive integrals of L&L are the additive conservation laws of the isolated center-of-mass system, and the standard center-of-mass conservation theorems dictate that they are fixed in the absence of external forces, torques, and hence work in/outputs, by Newton's action-reaction laws: you sum all energies, or momenta, or angular momenta, and their sums, since the system is "closed", are preserved (just as for a black hole!). But… they need not be independent: e.g., for one free particle in a box, $E\propto \vec{P}^2$, i.e., 7 is not an absolute lower bound on the number of conserved integrals (and for one free particle, with J zero/meaningless, the independent conserved integrals reduce to 3). To compare with 2., I'll use the much simpler isolated system of 2 particles in 2d, so the rotation group is one-dimensional, and the conserved additive integrals, instead of 7, are now just 4: E, $\vec{P}$, and J.




  2. This is a broad, abstract statement giving an upper bound on the number of independent integrals of phase-space motion, not necessarily additive ones. In a 2s-dimensional phase space, each independent conserved integral specifies a hypersurface on which trajectories lie, and the phase-space point must run on their common intersection. The most restrictive case is 2s-1 hypersurfaces, whose common intersection is a line, the trajectory of a (multidimensional) phase-space point; one more constraint and the line would be cut down to a point, so the point could not move in time! Systems with this maximal number of constraints are called maximally superintegrable, like the Kepler problem, or most baby freshman-physics problems. As an overkill aside, all these problems are described much more symmetrically by the equivalent Nambu mechanics picture: the classical part expresses some of it in PB language. For invariants in involution, see this.




    • So, now, consider two particles connected by a spring, in 2d, starting with the k=0 limit, i.e. free particles. $$L=\frac{1}{2}M\left(\dot{X}^2+\dot{Y}^2\right)+\frac{1}{2}m\left(\dot{x}^2+ \dot{y}^2 \right)-\frac{1}{2}k\left(r-d\right)^2= \frac{1}{2}M\left(\dot{X}^2+\dot{Y}^2\right)+\frac{1}{2}m\left(\dot{r}^2+ r^2\dot{\theta}^2 \right)-\frac{1}{2}k\left(r-d\right)^2,$$ where the capitals are the center-of-mass coordinates, $r=\sqrt{x^2+y^2}$, and $\theta$ is the angle of the relative coordinate. Now the equation of motion for $\theta$ is already integrated to constant $r^2\dot{\theta}\equiv J$, so we can drop its kinetic term in favor of a term $mJ^2/(2r^2)$ relocated into the potential part of the Lagrangian.




    • Let us count the overall conserved quantities, first for k=0, the free case: in Cartesian coordinates, we have 2 components of momentum for each of 2 particles, so 4 in total; plus J and the two energies E and ε for the capital and lower-case variables? Not quite, since E is not independent of the c.m. momenta, nor ε of the internal ones. The independent integrals appear to be 5. However, the external, additive ones are E+ε, J, and $\vec{P}$: fewer than the independent ones. Turning on the interaction (spring, nonvanishing k) destroys the conservation of the two components of the internal momentum, but $\epsilon_x$, $\epsilon_y$ and J are still preserved (2×2-1 = 3, the x and y oscillators being maximally superintegrable), and the independent integrals are again 5 overall, forestalling your paradox.





    • Finally, a word on your Poisson-theorem question. Being totally schematic and cavalier about factors, you can see that, given the invariants $\epsilon_x, \epsilon_y, J$ of this double oscillator, $\{ \epsilon_x, J\} \sim K\equiv p_x p_y +xy$, which is also easy to confirm to be time-independent, as per the Jacobi identity. Is there a 4th invariant? There can't be: we saw above that maximal superintegrability only allows for 3. But note, fixing signs, factors, etc., that $\epsilon_x \epsilon_y=J^2+K^2$, so one of the four is dependent on the other three, nonlinearly. Phew!....
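These algebraic claims are easy to machine-check. A SymPy sketch of my own (with unit mass and frequency, so the numerical factors differ from the schematic statement above: here $\epsilon_x\epsilon_y=(J^2+K^2)/4$) confirms both the bracket and the dependence:

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y')

def pb(f, g):
    """Poisson bracket on the (x, y, p_x, p_y) phase space."""
    return (sp.diff(f, x) * sp.diff(g, px) - sp.diff(f, px) * sp.diff(g, x)
            + sp.diff(f, y) * sp.diff(g, py) - sp.diff(f, py) * sp.diff(g, y))

# Invariants of the 2d isotropic oscillator (m = k = 1):
ex = (px**2 + x**2) / 2
ey = (py**2 + y**2) / 2
J = x * py - y * px
K = px * py + x * y

print(sp.simplify(pb(ex, J) + K))                # 0, i.e. {e_x, J} = -K
print(sp.simplify(ex * ey - (J**2 + K**2) / 4))  # 0: only 3 are independent
```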






nuclear physics - Use of fission products for electricity generation



Why can't we use fission products for electricity production?


As far as I know, fission products from current nuclear power plants create enough 'waste' heat to boil water, and their temperature decreases too slowly for a human lifetime. So why can't we design a reactor to use this energy?



Answer



Here are some "order-of-magnitude" arguments:


Quoting https://en.wikipedia.org/wiki/Decay_heat#Spent_fuel :



After one year, typical spent nuclear fuel generates about 10 kW of decay heat per tonne, decreasing to about 1 kW/t after ten years



Now since this is heat, you can't convert it to electricity with 100% efficiency, the maximum possible efficiency is given by the Carnot efficiency $\eta$:


$$ \eta \le 1 - \dfrac{T_\mathrm{cold}}{T_\mathrm{hot}} $$



where $T_\mathrm{hot}$ would be the temperature of the spent fuel rods (in Kelvin) and $T_\mathrm{cold}$ would be the temperature of a cold reservoir against which a generator would work. One would have to do another calculation of what a reasonable temperature for the fuel rods would be (in practice they currently seem to be kept at about 50 degrees C).


With 'primary' fuel, typically 55 gigawatt-days per tonne can be produced, i.e. a 1-gigawatt power plant would use 365.25 / 55 ≈ 6.6 tonnes per year.


Even assuming you could convert this to electricity with 100% efficiency, and assuming an average of 5 kilowatts per tonne over 10 years, this would yield about 18,000 kilowatt-days, or 0.018 gigawatt-days, per tonne: about 0.03% of the primary energy production.
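The arithmetic in this estimate is easy to reproduce; here is a back-of-the-envelope Python script of my own using the figures quoted above (the 20 C cold-reservoir temperature is my assumption):

```python
# Reproduce the order-of-magnitude estimate with the figures quoted above.
burnup_gwd_per_t = 55.0  # 'primary' fuel burnup, gigawatt-days per tonne
decay_heat_kw = 5.0      # assumed average decay heat over 10 years, kW/tonne
years = 10.0

# Decay heat harvested at (impossible) 100% efficiency, per tonne:
decay_gwd = decay_heat_kw * years * 365.25 / 1e6  # kW-days -> GW-days
print(round(decay_gwd, 3))                           # 0.018 GW-days/tonne
print(round(100 * decay_gwd / burnup_gwd_per_t, 2))  # 0.03 (% of primary)

# Carnot bound for rods kept near 50 C against an assumed 20 C reservoir:
eta = 1 - (20 + 273.15) / (50 + 273.15)
print(round(eta, 2))  # 0.09
```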


You'll also see from the Carnot efficiency above that higher temperatures imply a higher possible efficiency, i.e. if one can spend some energy to extract the still fissionable material to be used in a reactor again, that is likely to be more efficient in terms of electricity generation.


It's true on the other hand that radioisotope thermoelectric generators (radioactive sources combined with thermocouples) have been used on satellite missions.


cosmology - What has been proved about the big bang, and what has not?


Ok, so the universe is constantly expanding; that has been proven, right? And that means that it was smaller in the past. But what's the smallest size we can be sure the universe has ever had?


I just want to know what's the oldest thing we are sure about.



Answer



Spencer's comment is right: we never "prove" anything in science. This may sound like a minor point, but it's worth being careful about.


I might rephrase the question like this: What's the smallest size of the Universe for which we have substantial observational evidence in support of the standard big-bang picture?


People can disagree about what constitutes substantial evidence, but I'll nominate the epoch of nucleosynthesis as an answer to this question. This is the time when deuterium, helium, and lithium nuclei were formed via fusion in the early Universe. The observed abundances of those elements match the predictions of the theory, which is evidence that the theory works all the way back to that time.


The epoch of nucleosynthesis corresponds to a redshift of about $z=10^9$. The redshift (actually $1+z$) is just the factor by which the Universe has expanded in linear scale since the time in question, so nucleosynthesis occurred when the Universe was about a billion times smaller than it is today. The age of the Universe (according to the standard model) at that time was about one minute.



Other people may nominate different epochs for the title of "earliest epoch we are reasonably sure about." Even a hardened skeptic shouldn't go any later than the time of formation of the microwave background ($z=1100$, $t=400,000$ years). In the other direction, even the most credulous person shouldn't go any earlier than the time of electroweak symmetry breaking ($z=10^{15}$, $t=10^{-12}$ s.)


I vote for the nucleosynthesis epoch because I think it's the earliest period for which we have reliable astrophysical evidence.


The nucleosynthesis evidence was controversial as recently as about 10 or 15 years ago, but I don't think it is anymore. One way to think about it is that the theory of big-bang nucleosynthesis depends on essentially one parameter, namely the baryon density. If you use the nucleosynthesis observations to "measure" that parameter, you get the same answer as is obtained by quite a variety of other techniques.


The argument for an earlier epoch such as electroweak symmetry breaking is that we think we have a good understanding of the fundamental physical laws up to that energy scale. That's true, but we don't have direct observational tests of the cosmological application of those laws. I'd be very surprised if our standard theory turns out to be wrong on those scales, but we haven't tested it back to those times as directly as we've tested things back to nucleosynthesis.


Sunday 22 April 2018

quantum mechanics - Commutator $[hat{p},F(hat{x})]$ of Momentum $hat{p}$ with a Position dependent function $F(hat{x})$?


I heard from my GSI that the commutator of momentum with a position dependent quantity is always $-i\hbar$ times the derivative of the position dependent quantity. Can someone point me towards a derivation, or provide one here?



Answer




You start from this


$[p,F(x)]\psi=(pF(x)-F(x)p)\psi$


knowing that $p=-i\hbar\frac{\partial}{\partial x}$ you'll get


$[p,F(x)]\psi=-i\hbar\frac{\partial}{\partial x}(F(x)\psi)+i\hbar F(x)\frac{\partial }{\partial x}\psi=-i\hbar\psi\frac{\partial}{\partial x}F(x)-i\hbar F(x)\frac{\partial}{\partial x}\psi+i\hbar F(x)\frac{\partial }{\partial x}\psi$


from where you find that $[p,F(x)]=-i\hbar\frac{\partial}{\partial x}F(x)$
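This identity can also be verified symbolically; here is a small SymPy check of my own, applying both sides to a test function $\psi$:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
psi = sp.Function('psi')(x)
F = sp.Function('F')(x)

def p(f):
    """Momentum operator in the position representation."""
    return -sp.I * hbar * sp.diff(f, x)

commutator = p(F * psi) - F * p(psi)           # [p, F(x)] acting on psi
expected = -sp.I * hbar * sp.diff(F, x) * psi  # -i hbar F'(x) psi

print(sp.simplify(commutator - expected))  # 0
```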


nuclear physics - How is the spin of the odd-odd nucleus 6Li explained?




Spins of odd-odd nuclei are difficult to predict. But $^{6}Li$ is light: only 6 nucleons. $^{6}Li$ should have spin $\frac{3}{2} + \frac{3}{2} = 3$ by the shell model, as it should have one proton in the $p_{3/2}$ level and one neutron in the $p_{3/2}$ level. How is it explained that it has spin 1?


In the answers to questions like this we frequently see formalism from some theory, like $\pi p_{3/2} \otimes \nu p_{3/2}$ and $3/2 \otimes 3/2$. Please explain to which theory that formalism belongs, how to understand it, and where to read about it.



Answer



The way to understand why 6-Li has $J^P = 1^+$ ($J$ the total angular momentum and $P$ the parity) is through the measurement of its magnetic moment,


$$ \mu^{exp} \simeq 0.88\mu_N $$


where $\mu_N$ is the nuclear magneton.


This result can be understood by assuming that 6-Li behaves as an alpha particle plus a deuteron. The alpha particle has $J^P = 0^+$, so taking $\mu^{exp}$ as the mean value of the magnetic moment operator of the remnant deuteron one has $$ \mu^{exp} = \langle \mu \rangle = \langle \sum_{p, n}g\mu_N J_z \rangle = \sum_{p, n}g\mu_N m_J = (2·2.79 - 2·1.91)m_J\mu_N \equiv 0.88\mu_N \tag1$$


$m_J$ is the eigenvalue of $J_z$, $\sum_{p, n}$ is the sum over the deuteron's proton and neutron, $g[p] = 2·2.79$ and $g[n] = -2·1.91$. Since from the point of view of nuclear physics (the nuclear shell model) there is no difference between $n$ and $p$, we can assume that $p$ and $n$ contribute equally, with $m_J = +1/2$, and with this value you can see that Eq. (1) is fulfilled. From this alone, you could only say that $J$ is zero or 1.
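The numerical claim in Eq. (1) is a one-line check (my own, using the g-factors quoted above):

```python
# Deuteron contribution to the 6-Li magnetic moment, Eq. (1),
# with g[p] = 2*2.79, g[n] = -2*1.91 and m_J = +1/2 for each nucleon.
g_p, g_n, m_J = 2 * 2.79, -2 * 1.91, 0.5
mu = (g_p + g_n) * m_J  # in units of the nuclear magneton
print(round(mu, 2))     # 0.88
```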


Nevertheless, you know that the isospin of the deuteron is null (see below), which means that this part of the wave function is antisymmetric. The rest of it must therefore be symmetric: the spatial part together with the $J$ part. For the spatial one we can select $L = 0$, which implies a symmetric spatial state and positive parity


$$P = (-1)^L = +1$$



Finally, the $J$ part must also be symmetric, i.e., $J = 1$.


Therefore,


$$ ^{6}Li \sim \alpha +\ ^{2}_1H $$




The deuteron's isospin equals zero, $T = 0$.


For $T = 1$ you have the triplet while for $T = 0$ the singlet isospin state,


$$ \cases{ T = 1: \ pp, nn, (pn + np)/\sqrt{2}\ \leftarrow\ symmetric\ states\\ T = 0: \ (pn - np)/\sqrt{2}\ \leftarrow\ anti-symmetric\ state } $$


Since we are assuming isospin symmetry, all triplet states have (up to electromagnetic corrections) the same energy, and therefore they are equally likely to exist in nature. But we have not seen $pp$ or $nn$ nuclei, so for the deuteron the isospin state must be the singlet, which is anti-symmetric.


quantum field theory - What does a QFT particle state have to do with a classical point particle?


In the question Can one define a “particle” as space-localized object in quantum field theory? it is said that in quantum field theory, a particle state is a state with well defined energy and momentum, related with dispersion relation $E^2=p^2+m^2$. This thing is localized in momentum space, which means it must be delocalized in coordinate space.


On the other hand, in classical mechanics, the most striking feature of a point particle is being localized in coordinate space.


At first glance, these two objects may seem very different, with no obvious link between them. It seems that it is usually just postulated that the above-defined QFT states are particles, without any clear justification of why they should be related to point-like classical objects. Satisfying the same dispersion relation isn't a good enough justification, as it is not obvious that e.g. some classical field configurations can't satisfy it.


So my question is: Why do we call them both particles, how do we see that they behave similarly (in some appropriate limit)?




gravity - How do spiral arms form?


Why aren't all spinning galaxies shaped as discs as my young mind would expect? I understand how the innermost parts of a galaxy spin faster than the outer parts, and that could explain why some galaxies are more spiraled than others based on age. Though, this doesn't explain how the arms came into existence in the first place. Might it have something to do with an imperfect distribution of mass, and therefore an imperfect distribution of gravity, causing a split in the disc from which point gravity, centripetal force, and inertia could take over? Or does something happen earlier in a galaxy's life?



Answer



user6972's answer is great, but I thought I'd add a somewhat more technical footnote. If the mathematics are lost on you, skip to the end where I give a simple physical interpretation.


The dispersion relation for a differentially rotating fluid disk (i.e. the rotation frequency changes with radius, as opposed to a uniformly rotating disk) is:



$(\omega-m\Omega)^2 = \kappa^2-2\pi G\Sigma|k| + v_s^2k^2$



  • $\omega$ is the angular frequency of a perturbing wave

  • $m$ is an integer $\geq 0$ and describes the rotational symmetry of the disk (so $m=2$ for a bar structure, for instance)

  • $\Omega$ is the rotation frequency of the disk

  • $\kappa$ is the epicyclic frequency of the disk (the frequency of small radial oscillations of a slightly perturbed orbit)

  • $\Sigma$ is the surface density of the disk (mass per unit area)

  • $k$ is the wavenumber of the perturbation

  • $v_s$ is the sound speed in the fluid



This may be a bit intimidating, but as I'll show in a minute it has a nice simple physical interpretation.
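To get a feel for the relation before interpreting it, one can evaluate $\omega^2$ (for the axisymmetric $m=0$ case) and check whether any wavelength is unstable, i.e. $\omega^2 < 0$. This is only an illustrative sketch; the parameter values below are arbitrary, not those of any real galaxy:

```python
import math

# Axisymmetric (m = 0) dispersion relation for a fluid disk:
# omega^2 = kappa^2 - 2*pi*G*Sigma*|k| + vs^2 * k^2.
# Units and parameter values are arbitrary, for illustration only.
G, Sigma, kappa, vs = 1.0, 1.0, 1.0, 1.0

def omega2(k):
    return kappa**2 - 2 * math.pi * G * Sigma * abs(k) + vs**2 * k**2

# omega^2 is minimized at k* = pi*G*Sigma/vs^2; if omega2(k*) < 0 the disk
# is locally unstable at that wavelength. Both tests below agree:
k_star = math.pi * G * Sigma / vs**2
print(omega2(k_star) < 0, vs * kappa / (math.pi * G * Sigma) < 1)  # True True
```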


First though, a couple of words on the assumptions that go into that dispersion relation (the full derivation is in Binney & Tremaine's Galactic Dynamics, 2$^\mathrm{nd}$ edition; it's quite involved, so I won't try to outline it here).



  • The disk is approximated as being two-dimensional (infinitely thin).

  • Perturbations to the disk are small.

  • The "tight-winding" approximation, or "short wavelength" approximation - very roughly speaking, the derivation fails if the spiral arms are not tightly wound. This is actually analogous to the WKB approximation.

  • The sound speed $v_s$ is much less than the rotation speed $\Omega R$.


So, are these approximations reasonable? Checking typical disk galaxies, it turns out that they are (as long as we're not talking about colliding galaxies or anything like that, which would lead to large perturbations). Besides, the idea of this analysis is not to get a nice clean result showing the theory of spiral arm formation, but rather to convince ourselves that a disk is naturally unstable under certain conditions and will "want" to form spiral arms (and to gain insight into what drives the instability); we can check later that they do in fact form with simulations such as those mentioned by user6972.


Ok so with a dispersion relation based on some reasonable assumptions, we can do the usual stability analysis, requiring $\omega^2>0$ for stability. This gives:



$\mathrm{stable~if~}\dfrac{v_s\kappa}{\pi G\Sigma} > 1$
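The minimization behind this result is short enough to sketch (for the axisymmetric $m=0$ case): treat the right-hand side of the dispersion relation as a function of $k$ and require its minimum to be non-negative,

$$\omega^2(k) = \kappa^2 - 2\pi G\Sigma|k| + v_s^2k^2, \qquad \frac{d\omega^2}{dk}=0 \implies k_* = \frac{\pi G\Sigma}{v_s^2},$$

$$\omega^2(k_*) = \kappa^2 - \frac{(\pi G\Sigma)^2}{v_s^2} \geq 0 \iff \frac{v_s\kappa}{\pi G\Sigma} \geq 1.$$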


The analysis for a disk made of stars instead of a fluid disk (in reality a galaxy is a disk composed of a mixture of stars and gas) is very similar but with a couple of extra gory details... the result is nice, though:


$\mathrm{stable~if~}\dfrac{\sigma_R\kappa}{3.36G\Sigma}>1$


where $\sigma_R$ is the radial velocity dispersion of the stars in the disk; this is a measure of the spread of radial velocities and can be thought of somewhat like a sound speed: in a certain sense it carries information about how fast the stars can react to carry an impulse. This is a reasonably famous result called "Toomre's stability criterion".
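As a rough numerical illustration, the criterion can be evaluated with approximate solar-neighborhood numbers (the values below are order-of-magnitude inputs of my own choosing, not taken from this answer):

```python
# Toomre's criterion, Q = sigma_R * kappa / (3.36 * G * Sigma), with rough
# solar-neighborhood values (approximate, for illustration only).
G = 4.30e-3          # gravitational constant in pc (km/s)^2 / M_sun
sigma_R = 35.0       # km/s, radial velocity dispersion of disk stars
kappa = 36.0 / 1000  # km/s per pc (about 36 km/s/kpc), epicyclic frequency
Sigma = 50.0         # M_sun / pc^2, disk surface density
Q = sigma_R * kappa / (3.36 * G * Sigma)
print(round(Q, 2))   # slightly above 1: marginally stable
```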


Ok so now for the simple physical interpretation of the stability criteria. First, I should point out that $v_s$ (sound speed), $\sigma_R$ (velocity dispersion) and $\kappa$ (epicyclic frequency) are all similar quantities; they describe the ability of a system to respond to a disturbance. If I poke one side of a cloud of gas, the other side only finds out about it through pressure as fast as the speed of sound (or velocity dispersion/epicyclic frequency) can carry the message.


Now imagine I have a rotating disk of gas with nice smooth properties and I squeeze a little piece of gas (or group of stars) a bit. Two things happen: the squeezed piece of gas will "push back" outwards since I've increased the pressure, but I've also caused a slight increase in density, which will exert a bit of extra gravitational force. It turns out the gravitational force is proportional to $G\Sigma$, and the pressure force is proportional to $v_s\kappa$ (or $\sigma_R\kappa$). The same argument applies in reverse if I stretch the gas/stars a bit: the pressure drops, but so does the gravitational force. So the interpretation of the stability criteria above is that when I squeeze a bit of gas (or stars) a little, if the increase in pressure is sufficient to balance the increase in gravity, the gas will un-squeeze itself; it is stable. On the other hand, if gravity wins out against pressure, the disk is unstable and collapses locally.


Ok, so how does this lead to spiral arms? Well, you can show that spirals are a natural structure to form under this sort of instability with the parameters of a typical galaxy (depending on the details, a bar is also a possibility). It's a lot of work, though, and I'm not sure it brings a lot more insight - at this point, in my opinion, it's time to switch to simulations and see that yes, indeed, spirals seem to form because of this instability.


time - The example of relativity of simultaneity given by Einstein


I understand (supposedly) the mathematics concerning the relativity of simultaneity in Special Relativity, but I have a nagging question regarding the original example given by Einstein supporting it (I'm only disagreeing with this specific example, not the concept).


It is normally given as a person on an embankment and a person on a train. There is a relative speed between them (usually presented as the train passing the embankment). Now, when both people are at the same x-position (x=0), there is a flash of light at x = +dx and x = -dx. The argument as I keep seeing it is that the person on the embankment will say that both flashes reach him at the same time, whereas the person in the train will say that the flash in front of him reaches him before the other because he was moving toward it, and thus the observers will disagree on the simultaneity of the flashes.


But given that the flashes occurred at the same distance from each of them, the speed of light is constant in both frames, and either one can claim to be at rest, then won't they, according to SR, necessarily see the flashes as simultaneous? (Both flashes have to travel the same distance in both frames, since at the time of emission the sources of both flashes were equidistant from both observers.) I agree that the person on the embankment will say that the person on the train shouldn't see them as simultaneous (and vice versa), since either observer will see the other moving relative to the sources, but in each of their own frames, mustn't they see the flashes as simultaneous? Am I just misunderstanding the example?


Thanks.



Answer




I agree that the person on the embankment will say that the person on the train shouldn't see them as simultaneous




Well, then the person on the train shouldn't see them as simultaneous. Some things change between reference frames, but conclusions of the form "in frame $S$, an observer will see..." do not change, since the statement itself specifies which frame you have to be in to understand what it is saying.


The observer on the embankment could easily see the train observer intercept the forward flash before the rear flash. (Of course, the embankment observer couldn't do this in real time; one has to wait until after one's hypothetical grid of rulers and clocks reports back what happened when and where.) One nice thing about SR is that time-ordering is invariant. That is, two events $A$ and $B$ can have one of three relations to one another: $A$ is in $B$'s past light cone (and $B$ is in $A$'s future), the reverse of that statement, or $A$ and $B$ are spacelike separated. Whichever one of these holds will hold for all observers.


So we know, just from the embankment analysis, that "in the frame of the train, the forward flash reaches the observer's eyes first," and this statement is always true for anyone who speaks it in its entirety.


What about the train observer? Indeed, as you say,



the flashes occurred at the same distance from each of them, the speed of light is constant in both frames, and either one can claim to be at rest



Suppose two people, $C$ and $D$, stand equal distances from you and are known to pitch balls at exactly the same speed. With everyone standing at rest, $C$ and $D$ each toss you a ball. You get the ball from $C$ before the one from $D$. This is not a logical inconsistency. It simply means $C$ threw a ball before $D$ in your reference frame. That is, the person on the train, operating under the SR assumption of "the speed of light is constant," and using the data (retroactively obtained from a ruler-clock grid, or maybe obtained in real time based on brightnesses) that the flashes were equidistant, must conclude that the forward flash went off first.


Saturday 21 April 2018

cosmology - Are White Holes the inside of Black Holes?


I read about a theory that says that the Big Bang could actually be considered a white hole.


Then I started thinking. White hole: an unreachable region from which stuff can come out. Black hole: a reachable region from which no stuff can come out.


Well, each one seems to be the boundary of the other. If I am inside a black hole, the surface seen from outside does fit the white hole description. So what if a white hole is actually not a hole, but the external surface of a black hole seen from inside?



What if it's true that our Big Bang is actually a white hole, or actually the inside of a black hole?


Can there exist a universe in a black hole? And are we in a black hole ourselves?




quantum mechanics - Explanation of equation that shows a failed approach to relativize Schrodinger equation


I'm reading the Wikipedia page for the Dirac equation:



$\rho=\phi^*\phi\,$


......


$J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*)$


with the conservation of probability current and density following from the Schrödinger equation:



$\nabla\cdot J + \frac{\partial\rho}{\partial t} = 0.$


The fact that the density is positive definite and convected according to this continuity equation implies that we may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature. Now, if we wish to maintain the notion of a convected density, then we must generalize the Schrödinger expression of the density and current so that the space and time derivatives again enter symmetrically in relation to the scalar wave function. We are allowed to keep the Schrödinger expression for the current, but must replace the probability density by the symmetrically formed expression


$\rho = \frac{i\hbar}{2m}(\psi^*\partial_t\psi - \psi\partial_t\psi^*).$


which now becomes the 4th component of a space-time vector, and the entire 4-current density has the relativistically covariant expression


$J^\mu = \frac{i\hbar}{2m}(\psi^*\partial^\mu\psi - \psi\partial^\mu\psi^*)$


The continuity equation is as before. Everything is compatible with relativity now, but we see immediately that the expression for the density is no longer positive definite - the initial values of both ψ and ∂tψ may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus we cannot get a simple generalization of the Schrödinger equation under the naive assumption that the wave function is a relativistic scalar, and the equation it satisfies, second order in time.



I am not sure how one obtains the new $\rho$ and $J^\mu$. How does one derive these two? And can anyone show me why the expression for the density is not positive definite?



Answer



Paul,



I have always thought this particular writing of the problem in the article was sloppy as well. The most confusing part of the discussion is the statement "The continuity equation is as before". At first one writes the continuity equation as:


$$\nabla \cdot J + \dfrac{\partial\rho}{\partial t} = 0$$


Although the del operator can be defined to be infinite-dimensional, it is frequently reserved for three dimensions, and so the construction of the sentence does not provide a clear interpretation. If you look up conserved current you find the 4-vector version of the continuity equation:


$$\partial_\mu j^\mu = 0$$


What is important about the derivation in the Wikipedia article is the conversion of the time-independent density to a time-dependent density, or rather:


$$\rho = \phi^*\phi$$


becomes


$$\rho = \dfrac{i\hbar}{2m}(\psi^*\partial_t\psi - \psi\partial_t\psi^*)$$


The intent is clear: they want to make the time component have the same form as the space components. The equation of the current is now:


$$J^\mu = \dfrac{i\hbar}{2m}(\psi^*\partial^\mu\psi - \psi\partial^\mu\psi^*)$$



which now contains the time component. So the continuity equation that should be used is:


$$\partial_\mu J^\mu = 0$$


where the capitalization of $J$ appears to be an arbitrary choice in the derivation.


One can verify that this is the intent by referring to the article on probability current.


From the above I can see that the sudden insertion of the statement that one can arbitrarily pick $\psi$ and $\dfrac{\partial \psi}{\partial t}$ isn't well explained. This part of the article was a source of confusion for me as well, until I realized that the author was trying to get to a discussion of the Klein-Gordon equation.


A quick search of the web for "probability current and Klein-Gordon equation" finds good links, including a good one from the physics department at UC Davis. If you follow the discussion in that paper, you can see it confirms that the argument is really trying to get to a discussion of the Klein-Gordon equation and make the connection to probability density.


Now, if one does another quick search for "negative solutions to the Klein-Gordon equation", one can find a nice paper from the physics department of Ohio University. There we get some good discussion around equation 3.13 in the paper, which reiterates that when we redefined the density we introduced some additional variability. So the equation:


$$\rho = \dfrac{i\hbar}{2mc^2}(\psi^*\partial_t\psi - \psi\partial_t\psi^*)$$


(where in the original, $c$ was set to 1) really is at the root of the problem (confirming the intent in the original article). However, it probably still doesn't satisfy the question,




"can anyone show me why the expression for density not positive definite?",



but if you go on a little shopping spree you can find the book Quantum Field Theory Demystified by David McMahon (there are some free downloads out there, but I won't link to them out of respect for the author), and if you go to page 116 you will find the discussion:



Remembering the free particle solution $$\varphi(\vec{x},t) = e^{-ip\cdot x} = e^{-i(Et- px)}$$ the time derivatives are $$\dfrac{\partial\varphi}{\partial t} = -iEe^{-i(Et- px)}$$ $$\dfrac{\partial\varphi^*}{\partial t} = iEe^{i(Et- px)}$$ We have $$\varphi^*\dfrac{\partial\varphi}{\partial t} = e^{i(Et- px)}[-iEe^{-i(Et- px)}] = -iE$$ $$\varphi\dfrac{\partial\varphi^*}{\partial t} = e^{-i(Et- px)}[iEe^{i(Et- px)}] = iE$$ So the probability density is $$\rho = i(\varphi^*\dfrac{\partial\varphi}{\partial t} - \varphi\dfrac{\partial\varphi^*}{\partial t}) = i(-iE-iE) = 2E$$ Looks good so far-except for those pesky negative energy solutions. Remember that $$E = \pm\sqrt{p^2+m^2}$$ In the case of the negative energy solution $$\rho = 2E =-2\sqrt{p^2+m^2}<0$$ which is a negative probability density, something which simply does not make sense.



Hopefully that helps. The notion of a negative probability does not make sense because we define probability on the interval [0,1], so by definition negative probabilities have no meaning. This point is sometimes lost on people when they try to make sense of things, but logically any discussion of negative probabilities is nonsense. This is why QFT ended up reinterpreting the Klein-Gordon equation and repurposing it as an equation that governs creation and annihilation operators.
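The quoted plane-wave computation is easy to replicate numerically (a small sketch using only the standard library; the sample values of $E$, $p$, $t$, $x$ and the finite-difference step are arbitrary choices, and the $\hbar/2m$ factors are dropped as in the quote):

```python
import cmath

# Numerical check that rho = 2E for the plane wave phi = exp(-i(E t - p x)),
# so the negative-energy branch gives a negative "probability density".
def rho(E, p, t=0.3, x=0.1, h=1e-6):
    phi = lambda tt: cmath.exp(-1j * (E * tt - p * x))
    dphi_dt = (phi(t + h) - phi(t - h)) / (2 * h)   # central difference in t
    val = 1j * (phi(t).conjugate() * dphi_dt - phi(t) * dphi_dt.conjugate())
    return val.real

print(round(rho(+1.5, 0.7), 6), round(rho(-1.5, 0.7), 6))  # 3.0 -3.0
```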


electromagnetism - Will a change in reference frame produce light?



Let's say I have a charged particle in front of me. If I start spinning in place, the charged particle will appear accelerated to me from my reference frame. If the laws of physics pertaining to this scenario are valid in all reference frames, if I spin with with an appropriate angular velocity to bring the emitted waves into the visible light spectrum, will I see light? If there are many such particles, will the entire room light up for me and remain dark for a stationary observer next to me?



Answer




The acceleration that you plug into the Larmor formula to calculate the radiation is the proper acceleration.


It is certainly true that if you start spinning, then in your frame the charge is circling you and is therefore accelerating towards you. However, this is what we call a coordinate acceleration, i.e. the velocity changes in time when measured using your coordinates. The acceleration you measure is not because the particle is accelerating; it is because your coordinates are accelerating. If you calculate the proper acceleration of the charge using your rotating coordinates, then you'll find the proper acceleration comes out to zero, and hence the charge won't radiate.


Friday 20 April 2018

newtonian mechanics - What is the maximum efficiency of a trebuchet?


Using purely gravitational potential energy, what is the highest efficiency one can achieve with a trebuchet counterweight type of machine? Efficiency is defined here as the transformation of potential energy in the counterweight into kinetic energy of the projectile.




Edit: To be more specific, we can use the following idealization:




  • no friction





  • no air resistance




  • no elastic/material losses




But any of the "standard" trebuchet designs are allowed: simple counterweight, hinged counterweight, vertical counterweight, etc.



Answer



The Wikipedia page on trebuchets links to a PDF paper which discusses exactly this question. It considers several models of varying complexity and finds a maximum range efficiency of 83% for a 100 pound counterweight, 1 pound projectile, a 5 foot long beam pivoted 1 foot from the point of attachment of the counterweight, and a 3.25 foot long sling. Here range efficiency is defined as the horizontal range of the realistic trebuchet model relative to the range of a "black box model" which is able to completely convert the gravitational potential energy of the counterweight into kinetic energy of the projectile.



In order to find the energy efficiency, defined as the fraction of the counterweight's gravitational potential energy that actually gets transferred to the projectile, you would need to use the relation


$$\frac{\epsilon_R}{\epsilon_E} = 2\sin\alpha\cos\alpha = \sin 2\alpha$$


where $\alpha$ is the angle of release of the projectile above the horizontal. Unfortunately, the paper doesn't give the value of $\alpha$ corresponding to the simulation that produced the maximum efficiency, so I can't give you a specific number without running the simulations myself. (Perhaps I'll do that when I have time; if anyone else gets to it first, feel free to edit the relevant numbers in.)
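For example, once a release angle is assumed (the 45° below is an illustrative guess of mine, not a value from the paper), the conversion is immediate:

```python
import math

# Convert range efficiency to energy efficiency via eps_R / eps_E = sin(2*alpha).
eps_R = 0.83                # best range efficiency quoted in the paper
alpha = math.radians(45.0)  # assumed release angle -- NOT from the paper
eps_E = eps_R / math.sin(2 * alpha)
print(round(eps_E, 2))      # 0.83 for this assumed angle, since sin(90 deg) = 1
```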


particle physics - What is the correct definition of the Jarlskog invariant?


In this lecture on neutrino physics, Prof. Feruglio defines the Jarlskog invariant as $$J=\text{Im}(U_{\alpha i}^{*} U_{\beta i} U_{\alpha j} U_{\beta j}^{*})\tag{1}$$ where $U$ is the neutrino mixing matrix with elements $U_{\alpha i}$. Here, $\alpha$ labels neutrino flavours ($e,\mu$ or $\tau$) and $i$ labels neutrino mass eigenstates such that $$|\nu_\alpha\rangle=\sum\limits_{i=1}^{3}U^*_{\alpha i}|\nu_i\rangle.$$ On the other hand, this well-cited paper defines $$J=\text{Im}(U_{e2}U_{e3}^{*} U_{\mu 2}^{*}U_{\mu 3}).\tag{2}$$


$\bullet$ Clearly, these two definitions are different because, in general, none of the entries of $U$ is zero. Which one of these definitions is correct and why?


$\bullet$ Moreover, what does the expression (1) mean? Does it imply a sum over $\alpha,\beta, i$ and $j$? The expansion of this term would differ depending on whether these sums are in the definition or not.



Answer



Nonono! Absolutely no sums in (1).


(1) is the same as (2); or rather, the 9 equivalent ways of writing (1) include (2) as well. I'll only anchor this to M. Schwartz's text (29.91-2) for the combinatorially identical quark sector, on which I know you have essentially based this question before.


Greek indices denote flavor and Latin ones mass eigenstates, so e~1, μ~2, τ~3. I'll also tweak your (1) a bit to comport with Schwartz's cycle. Again, do not sum over repeated indices!



Define the 4-tensor $$(\alpha,\beta;i,j)\equiv \text{Im}(U_{\alpha i} U_{\beta j} U^*_{\alpha j} U_{\beta i}^{*})~,$$ so it is evident by inspection that $$ (\beta,\alpha;i,j)=-(\alpha,\beta;i,j)=(\alpha,\beta;j,i). $$ You then see that, up to antisymmetry, there are only 3×3 non-vanishing components, which, remarkably, from the unitarity of U, can be shown to be all identical in magnitude, to wit, $$ (\alpha,\beta;i,j)= J ~ \begin{bmatrix} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{bmatrix}_{\alpha \beta} \otimes \begin{bmatrix} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{bmatrix}_{ij}, $$ such that, $$ J=(e,\mu;2,3)=(e,\mu;1,2)=(e,\mu;3,1)=(\mu,\tau;2,3)=(\mu,\tau;1,2)=(\mu,\tau;3,1)\\ =(\tau,e;2,3)=(\tau,e;3,1)=(\tau,e;1,2). $$



  • Unitarity, $\sum_i U_{\alpha i}U^*_{\beta i}=\delta ^{\alpha \beta}$, enters and controls by imposing all rows and columns of the above written matrix to sum to zero, so instead of 3 independent parameters there is only one, and ditto for the left matrix in the tensor product: they must necessarily both be of the type $\sum_k \epsilon^{ijk}$.
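One can also verify the claim numerically: build any $3\times 3$ unitary mixing matrix in the standard parametrization (the angle and phase values below are illustrative choices, not measured ones) and check that all nine non-vanishing quartets share a single magnitude, equal to $s_{12}c_{12}s_{23}c_{23}s_{13}c_{13}^2\sin\delta$:

```python
import cmath
import math

# Standard-parametrization 3x3 unitary mixing matrix; angles are illustrative.
t12, t23, t13, d = 0.59, 0.84, 0.15, 1.2
s12, c12 = math.sin(t12), math.cos(t12)
s23, c23 = math.sin(t23), math.cos(t23)
s13, c13 = math.sin(t13), math.cos(t13)
ep, em = cmath.exp(1j * d), cmath.exp(-1j * d)

U = [
    [c12 * c13, s12 * c13, s13 * em],
    [-s12 * c23 - c12 * s23 * s13 * ep,
     c12 * c23 - s12 * s23 * s13 * ep,
     s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * ep,
     -c12 * s23 - s12 * c23 * s13 * ep,
     c23 * c13],
]

def quartet(a, b, i, j):
    """Im(U_ai U_bj U*_aj U*_bi), the (a,b;i,j) component of the 4-tensor."""
    return (U[a][i] * U[b][j] * U[a][j].conjugate() * U[b][i].conjugate()).imag

# The nine independent components all have the same magnitude, which matches
# the closed form J = s12 c12 s23 c23 s13 c13^2 sin(delta).
mags = [abs(quartet(a, b, i, j))
        for a, b in [(0, 1), (0, 2), (1, 2)]
        for i, j in [(0, 1), (0, 2), (1, 2)]]
J_analytic = s12 * c12 * s23 * c23 * s13 * c13**2 * math.sin(d)
print(max(mags) - min(mags) < 1e-12, abs(mags[0] - J_analytic) < 1e-12)
```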

