Thursday 30 November 2017

homework and exercises - Damped Simple Harmonic Motion Proof?



I was reading about damped simple harmonic motion but then I saw this equation:


$$-bv - kx = ma$$


$b$ is the damping constant. Then it said by substituting $dx/dt$ for $v$ and $d^2x/dt^2$ for $a$ we will have:



$$ m\frac{\mathrm d^2x}{\mathrm dt^2}+b\frac{\mathrm dx}{\mathrm dt}+kx=0 $$


Then it says the solution of the equation is: (this is my problem)


$$ x(t)=x_m \mathrm e^{-bt/2m}\cos(\omega't+\phi) $$


I don't understand the last part. How can we reach the $x(t)$? I know very little of calculus; can you please explain how to solve this?



Answer



The differential equation you quote is fairly standard in university physics/engineering courses but definitely requires some calculus to solve. As a first step, if you know how to differentiate products and chains, you can substitute the given solution into the differential equation and verify that it is indeed a solution. It contains two arbitrary constants (here $x_m$ and $\phi$), as you would expect of a second order differential equation (DE).


If you wanted to solve it, you still need some kind of guess as to what the function might look like; here the trial function would be:


$$ x(t) = Ae^{\lambda t} $$


and then substitute this parametrised solution into the original DE to obtain a quadratic in $\lambda$.


$$ x(t) = Ae^{\lambda t} \implies {dx\over dt} = \lambda Ae^{\lambda t} = \lambda x(t) $$ and $$ {d^2x\over dt^2} = \lambda^2 Ae^{\lambda t} = \lambda ^2x(t) $$



so that


$$ m\lambda^2 + b\lambda+k = 0. $$


since $x(t) = Ae^{\lambda t}\neq 0$, we can divide it out.


Depending on the relative values of $m$, $k$ and $b$, the quadratic will have two distinct real roots, one repeated real root, or no real roots (i.e. a pair of complex-conjugate roots).


Given that you have two solutions $\lambda_1$ and $\lambda_2$, the intermediate result for $x(t)$ will then be


$$ x(t) = Ae^{\lambda_1 t} + Be^{\lambda_2 t} $$

For the solution you have been given, the corresponding quadratic in $\lambda$ has no real roots, so the $\lambda$s are complex conjugates: their real part gives you the damped exponential at the front of the solution, and their imaginary parts give you the wave-like term. $x_m$ and $\phi$ are related to $A$ and $B$ and are determined by the boundary conditions.
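If you want to see the connection concretely, here is a small Python sketch (my addition, with arbitrary illustrative parameter values) that finds the roots of $m\lambda^2+b\lambda+k=0$ numerically and confirms they reproduce the quoted damped-cosine solution:

```python
# A minimal numerical sketch connecting the characteristic quadratic
# m*lam**2 + b*lam + k = 0 to the quoted solution
# x(t) = x_m * exp(-b t / 2m) * cos(w' t + phi).
import numpy as np

m, b, k = 1.0, 0.4, 9.0                     # underdamped: b**2 < 4*m*k
lam_plus, lam_minus = np.roots([m, b, k])   # roots of m*lam^2 + b*lam + k

# the real part of the roots is -b/(2m), the imaginary part is +/- w'
w_prime = np.sqrt(k / m - (b / (2 * m)) ** 2)
print(np.allclose(lam_plus.real, -b / (2 * m)),
      np.allclose(abs(lam_plus.imag), w_prime))

# Re[C * exp(lam t)] with C = x_m * exp(i phi) reproduces the damped cosine
x_m, phi = 1.3, 0.7
t = np.linspace(0.0, 10.0, 2001)
lam = -b / (2 * m) + 1j * w_prime
x_from_roots = np.real(x_m * np.exp(1j * phi) * np.exp(lam * t))
x_quoted = x_m * np.exp(-b * t / (2 * m)) * np.cos(w_prime * t + phi)
print(np.allclose(x_from_roots, x_quoted))
```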


quantum field theory - Complex Gaussian integral with different source terms


Do the source terms multiplying a complex field and its conjugate need to be conjugates for the Gaussian identity to hold? E.g. is



$$\int D({\phi,\psi,b}) e^{-b^\dagger A b +f(\phi, \phi^\dagger,\psi, \psi^\dagger )b +b^\dagger g(\phi, \phi^\dagger,\psi, \psi^\dagger )} = \int D(\phi,\psi) \det(A^{-1}) e^{f(...) A^{-1} g(...)} $$


valid when $f \ne g^* $?


If I change to real and imaginary coordinates in the $b$ it seems fine, but I'm worried that I'm screwing up the measure in $D(...)$ without realizing it.


Edit:


Let's say $A$ is a $c$-number. To do the integral I can write $b = x +iy$ etc. Then the integral is


$$\int D(...) e^{- Ax^2 - A y^2 +x(f + g) + i y(f-g)} = \frac{\pi}{A}\int D(...) e^{(4A)^{-1}((f+g)^2 - (f-g)^2)}$$ $$=\frac{\pi}{A}\int D(...) e^{A^{-1} fg}.$$


But this then implies that Hubbard-Stratonovich transformations don't need to be of squares, so I can decouple any interaction $$e^{2fg} = \int d \phi d\phi^\dagger e^{-|\phi|^2 +f\phi + \phi^\dagger g}.$$ This can't be right?



Answer




Theorem: Given a normal$^1$ $n\times n$ matrix $A$ where ${\rm Re}(A)>0$ is positive definite, then the complex Gaussian integral is$^2$ $$\begin{align} I&~:=~\int_{\mathbb{R}^{2n}} \! d^nx ~d^ny~ \exp\left\{-z^{\dagger}Az +f^{\dagger}z +z^{\dagger}g\right\}\cr &~=~\exp\left\{f^{\dagger}A^{-1}g\right\}\int_{\mathbb{R}^{2n}} \! d^nx ~d^ny~ \exp\left\{-(z^{\dagger}-f^{\dagger}A^{-1})A(z-A^{-1}g)\right\}\cr &~=~\frac{\pi^n}{\det(A)}\exp\left\{f^{\dagger}A^{-1}g\right\}, \qquad z^k~\equiv~ x^k+iy^k.\end{align}$$




Sketched proof:




  1. The normal matrix $A=U^{\dagger}DU$ can be diagonalized with a unitary transformation $U$. Here $D$ is a diagonal matrix with ${\rm Re}(D)>0$. Next change integration variables$^3$ $w=Uz$. The absolute value of the Jacobian determinant is 1. So it is enough to consider the case $n=1$, which we will do from now on.




  2. There exist two complex numbers $x_0,y_0\in\mathbb{C}$ such that$^4$ $$ x_0-iy_0~=~f^{\dagger}A^{-1}\qquad\text{and}\qquad x_0+iy_0~=~A^{-1}g.$$





  3. We can shift the real integration contour into the complex plane $$\int_{\mathbb{R}} \! dx \int_{\mathbb{R}} \! dy~ \exp\left\{-(z^{\dagger}-f^{\dagger}A^{-1})A(z-A^{-1}g)\right\}$$ $$~=~\int_{\mathbb{R}+x_0} \! dx \int_{\mathbb{R}+y_0} \! dy~ \exp\left\{-z^{\dagger}Az\right\}~=~\frac{\pi}{A},$$ with no new non-zero contributions arising from closing the contour, cf. Cauchy's integral theorem.$\Box$




--


$^1$ The Gaussian integral is presumably also convergent for a pertinent class of non-normal matrices $A$, but in this answer we consider only normal matrices for simplicity.


$^2$ Recall that the notation $\int_{\mathbb{C}^n}d^nz^{\ast} d^nz$ means $\int_{\mathbb{R}^{2n}} \! d^nx ~d^ny$ up to a conventional factor, cf. my Phys.SE answer here. Here $z^k \equiv x^k+iy^k$ and $z^{k\ast} \equiv x^k-iy^k$.


$^3$ More generally, under a holomorphic change of variables $u^k+iv^k\equiv w^k=f^k(z)$, the absolute value of the Jacobian determinant in the formula for integration by substitution is $$ |\det\left(\frac{\partial (u,v)}{\partial (x,y)} \right)_{2n\times 2n}|~=~ |\det\left(\frac{\partial w}{\partial z} \right)_{n\times n}|^2. $$


$^4$ The underlying philosophy in point 2 is similar to my Phys.SE answer here: One can in a certain sense treat $z$ and $z^{\dagger}$ as independent variables! And therefore it is possible to consider OP's case where $f,g\in\mathbb{C}^n$ are independent complex constants.
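Below is a small numerical check (my addition) of the $n=1$ case of the theorem, done brute-force on a grid with deliberately independent sources $f\neq g^{*}$; the values of $A$, $f$, $g$ are arbitrary test numbers with ${\rm Re}(A)>0$:

```python
# Numerical sanity check (n = 1) of the complex Gaussian identity above,
# with deliberately *independent* complex sources f != g*.
import numpy as np

A = 1.0 + 0.4j          # Re(A) > 0 so the integral converges
f = 0.7 - 0.2j          # f and g are independent complex numbers
g = -0.3 + 0.5j         # note: g is NOT the complex conjugate of f

# brute-force integral over the complex plane, z = x + i y
L, N = 8.0, 801
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
integrand = np.exp(-np.conj(Z) * A * Z + np.conj(f) * Z + np.conj(Z) * g)
numeric = integrand.sum() * dx * dx

# closed form from the theorem: (pi / A) * exp(f^dagger A^{-1} g)
closed_form = np.pi / A * np.exp(np.conj(f) * g / A)

print(numeric, closed_form, abs(numeric - closed_form) / abs(closed_form))
```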


Error propagation rounding



hope I am right in this section.



I am unsure about error propagation. When calculating the error in a titration, many errors have to be taken into account:


Error in Glassware/ Error in Balance/ Error in Burette etc.


I learned that the absolute and relative errors are quoted to only 1 significant figure and that the total amount is rounded to the decimal place of the error.


Therefore 5.34532g ± 0.001428g would be 5.345g ± 0.001g


The relative error is 0.001 g / 5.345 g = 0.00018709 ≈ 0.0002. If there is an experiment with a lot of steps and error propagation, wouldn't rounding all the errors at every single step change the result a lot? Wouldn't rounding the error only at the end make more sense?
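To illustrate my worry numerically, here is a small sketch (with made-up uncertainty values, and assuming independent errors combined in quadrature) comparing rounding every intermediate uncertainty to 1 significant figure against rounding only at the end:

```python
# Illustrative sketch: combine several absolute uncertainties in quadrature,
# once keeping full precision until the end and once rounding every
# intermediate uncertainty to 1 significant figure first.
# The numbers below are made up purely to show the size of the effect.
import math

def round_1sf(x):
    """Round a positive number to one significant figure."""
    exponent = math.floor(math.log10(x))
    return round(x, -exponent)

uncertainties = [0.0014, 0.0013, 0.0014, 0.0012]   # e.g. balance, glassware, ...

# (a) round only the final combined uncertainty
combined = math.sqrt(sum(u**2 for u in uncertainties))
print(round_1sf(combined), combined)               # 0.003

# (b) round each uncertainty to 1 s.f. before combining, then round the result
combined_rounded_steps = math.sqrt(sum(round_1sf(u)**2 for u in uncertainties))
print(round_1sf(combined_rounded_steps), combined_rounded_steps)   # 0.002
```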


Many thanks in advance.




electricity - Will a battery connected to the Earth eventually deplete?


I've been thinking about this, and my answer is leaning towards yes, but I couldn't find any definitive answer by Googling.


If the higher potential side of a typical battery is connected by wire to a spike driven into the ground, will it eventually deplete?


My thought process is that the voltage difference will cause current to flow between the electrode and the earth, causing the battery to eventually "die," just as if it was connected in a circuit.


However, I also wonder what the difference is between this situation and shorting the battery. Connecting the anode and cathode of a battery together will cause it to heat up and damage itself, but will the same happen if it's connected to ground? Isn't it (physically) the same situation?



Answer



Here is the thing about potential/voltage. Potential/voltage is a measure of the difference in, say, potential energy between two points in space. So the correct term is actually "potential/voltage difference" when we talk about that stuff. So when you talk about batteries, the higher potential side of the battery is higher with respect to the lower side. It has nothing to do with anything else: you, me, your computer, the Sun or the Earth. Technically, the higher potential side of the battery is neither higher nor lower than earth.



Here is what happens when you touch the "+" side of the battery to ground: [image]


Soil resistance might be very low depending on the type of soil. However, air resistance is much higher, which prevents the battery from discharging.


By the way, both soil and air can conduct current via free ions. (Higher voltages cause significant ionization, e.g. lightning.) (Pouring salt water into soil also helps lower soil resistance.)


Now, if you throw your battery in soil and pour some saltwater in there, you'll definitely be seeing a discharge!


What is the relation between General Relativity and Newtonian Mechanics?



What is the relationship between General Relativity and Newtonian Mechanics? Namely, which laws of Newtonian Mechanics does GR replace, and which laws of Newtonian Mechanics are incorporated into it? Or is GR a complete replacement and overhaul?




Wednesday 29 November 2017

electromagnetism - Integration for finding potential inside uniformly charged solid sphere


I'm working the following problem:



Use equation 2.29 to calculate the potential inside a uniformly charged solid sphere of radius R and total charge q.



Equation 2.29 is as follows:


$$ V(r) = \frac{1}{4\pi \epsilon_0} \int \frac{\rho(r')}{\mu}\, d\tau' $$


In which $ \mu $ is what I've used to denote the separation vector, because I don't know what script r is in MathJax, and the primes are used by the author to avoid confusion over similar variables rather than indicate derivatives.


So I tried to work this and got the wrong expression, and then decided to take a peek at the solution (attached below). I understand what he's doing up until he integrates over $d\theta$. What is he doing there? How does he carry out that integration and, after that, how does he arrive at the absolute value expression? After that interval I pick up his trail again, but between those two questions I'm completely lost.


[image: solution from Introduction to Electrodynamics, 4th ed., by Griffiths]




Answer



A good explanation may be found at: http://solar.physics.montana.edu/qiuj/phys317/sol7.pdf


In more depth, what you're basically asking about is the substitution used to do the integration. This requires a little bit of art, but the answer is in the linked PDF and is explained sufficiently well that I won't repeat most of it here. Simply put, you perform a substitution where $u$ is equal to the argument of your square root. From there, the integration process is just turning a crank using a standard result.
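For concreteness, here is a short numerical check (my own sketch, not taken from the linked solution) of the angular integral that this substitution handles, which is also where the absolute value expression comes from:

```python
# With u = r^2 + r'^2 - 2 r r' cos(theta), the angular integral
#   \int_0^pi sin(theta) dtheta / sqrt(r^2 + r'^2 - 2 r r' cos(theta))
# evaluates to ((r + r') - |r - r'|) / (r r').
import numpy as np
from scipy.integrate import quad

def angular_integral(r, rp):
    integrand = lambda theta: np.sin(theta) / np.sqrt(
        r**2 + rp**2 - 2 * r * rp * np.cos(theta))
    value, _ = quad(integrand, 0.0, np.pi)
    return value

def closed_form(r, rp):
    return ((r + rp) - abs(r - rp)) / (r * rp)

# field point inside and outside the source radius, plus the coincident case
for r, rp in [(0.3, 0.8), (0.8, 0.3), (1.0, 1.0)]:
    print(angular_integral(r, rp), closed_form(r, rp))
```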


relativity - What would happen if the light-speed was higher?




I came across a rather interesting passage in a book attempting to debunk Darwin's Theory of Evolution from a Christian viewpoint. One thing the book suggested was that various scientific ways to measure the age of fossils and rocks could not be trusted, because they assume the rate of decay to be constant - something the book suggests may not be the case. I assume it's referring to C-14 dating and similar methods...


The book then quote "an exciting theory(sic) in this context" by Barry Setterfield and Trevor Norman that the speed of light may not always have been constant. That they by comparing 160 measurements of the speed of light from the 1600's all the way to today (2000), got results that suggested that "the speed of light 8000 years ago ("surprisingly", just around the time God's supposedly created the Earth and then the universe...), was 10^7 times higher than what it is today". And according to the book; this change in the speed of light would change the decay-rate, thus invalidating the result of C-14 dating.


Personally, I think the change in "measured" c is rather due to initial wrong assumptions (eg. "the ether"), and measuring-errors due to primitive technology (especially if we start in 1600) - but what do I know...?


So, just out of curiosity; what would happen if the speed of light was 10^7 times higher than it is?



  • Would an increase of the speed of light change decay-rates?

  • How would the decay-rate - and results depending on it (like C-14 dating) - change?

  • Would it change time - eg. would the length of a second increase?

  • What other manifestation would such a huge increase in the speed of light cause (in physics)? Anything devastating and cataclysmic?

  • What would happen if the increase was (a lot) less - and perhaps more survivable - which effects would it have on our lives?



I could perhaps also add that I once read a book trying to explain relativity and such to a layman in very simple terms, using alternate worlds with changed physics and then exploring what the results would be. In one of these worlds the speed of light was a lot lower - so the book's hero could actually bike fast enough to get the effect of closing in on c... and taking the train caused time-distortion effects for the passengers (like the "twin-in-rocket-going-close-to-c-doesn't-age" paradox).


As a side-note; the author of the book "debunking evolution", seems to have shifted to writing fantasy-books for kids/young-adults - probably a much better use of his "talents"...




homework and exercises - Calculate Spring Constant



I am editing the question because it was misunderstood to be a homework question.


[image]


I am modeling the stumps equipment for a game called Cricket. Typically the game consists of 3 wooden stumps positioned upright by hammering them into the ground. They are set up behind the batsman (the batter, in baseball analogy). The pitcher scores a point if he is able to hit a stump and cause it to move. For the stump to move, i.e. to overcome static friction, he should be throwing at a speed of at least 10 mph.


The inverted conical section at the base of each stump, visible on the left side of the image, is not seen in the middle image because it has been hammered into the ground.


Sometimes the game is played for recreation on a concrete surface like a tennis court, so there is no possibility of standing the stumps upright by hammering them into the concrete. So I am planning to design an equipment similar to the one on the right side of the image.


The side with the spring will be facing the wicketkeeper standing behind the batter, as in the picture above. The movement of each stump is restricted by the spring connected to it.



My problem is to find the right type of spring, one that restricts the movement the same way the ground resists the movement of the stump when it is hammered in. The ground keeps the stump from moving for balls hitting at less than 10 mph; the spring should behave the same way.


The picture shown below has a ball of 5 oz hitting a brick of negligible mass, with negligible static and kinetic friction, resting on a table and connected to a spring. The brick is there just to make sure the ball has sufficient surface area to make contact with.


For the ball to cause a compression of the spring, it should be travelling at least 10 mph. I would like to know the initial tension and the spring constant of the spring.


A simple analogy to this problem would be to compute the static friction on a brick resting on a surface when it takes a ball of 5 oz traveling at 10 mph to overcome static friction. In my case I need the spring constant or initial tension instead of the static friction.


Let me know if I am not clear


[image]



Answer



I cannot comment on the full design because the question is lacking in details, but I can explain more about the situation where you hit a mass with a ball and a spring reacts to it (like the sketch shown).





  1. Consider the friction force as $f=\mu m g$ and the equations of motion $$m \frac{{\rm d}^2x}{{\rm d}t^2} + k x \pm f = 0$$ The sign of $f$ depends on the direction of motion, but since I only consider what happens initially I have to use the $+f$ side.




  2. The general solution given initial conditions $x(t=0)=X$ and $\dot{x}(t=0)=V$ is $$ x(t) = X \cos(\omega t) + \frac{V}{\omega} \sin(\omega t)+ f \frac{\cos(\omega t)-1}{m \omega^2} $$



  3. The frequency of natural oscillation $\omega$ is critical to the solution and for a simple mass spring system it is $$\omega = \sqrt{\frac{k}{m}}$$

  4. Use the natural frequency to estimate the average impact time. A full cycle occurs during time $$\Delta t = \frac{2 \pi}{\omega}$$

  5. The collision with the ball causes a momentum transfer (impulse) that equals $$J = \frac{ (1+\epsilon) v_{ball}} { 1/m+ 1/m_{ball} } $$

  6. The average force of impact is $$F_{ave} = \frac{J}{\Delta t} = \frac{(1+\epsilon) \omega v_{ball}}{2 \pi \left(\frac{1}{m}+\frac{1}{m_{ball}}\right)} $$ where $\epsilon <1$ is the coefficient of restitution. If the ball doesn't bounce back a lot make it small, close to zero; if it bounces back very elastically, it approaches one.

  7. Finally set the average impact force to friction $f$ at $v_{ball}$ $$ \omega = \frac{2 \pi \mu g (m+m_{ball})}{(1+\epsilon)m_{ball}v_{ball}}$$ and find the spring stiffness by $$k=m \omega^2$$
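A rough numerical sketch of steps 1-7 (the ball mass and speed come from the question; the stump mass, friction coefficient and coefficient of restitution below are placeholder guesses that would have to be measured for a real design):

```python
# Evaluate the recipe in steps 1-7 above for illustrative input values.
import math

g = 9.81                      # m/s^2
m_ball = 5 * 0.0283495        # 5 oz in kg  (~0.142 kg), from the question
v_ball = 10 * 0.44704         # 10 mph in m/s (~4.47 m/s), from the question

m = 0.8                       # kg, assumed effective stump mass
mu = 0.4                      # assumed friction coefficient
eps = 0.3                     # assumed coefficient of restitution

# step 7: natural frequency that makes the average impact force equal friction
omega = 2 * math.pi * mu * g * (m + m_ball) / ((1 + eps) * m_ball * v_ball)
k = m * omega ** 2            # spring stiffness, N/m

print(f"omega = {omega:.1f} rad/s, k = {k:.0f} N/m")
```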



APPENDIX


For a slender beam of diameter $d$ the 1st natural frequency is $$\omega_1 = \frac{4.73^2}{4} \frac{c d}{\ell^2} $$ where $c$ is the longitudinal wave speed, calculated from $c^2 = \frac{E}{\rho}$. The $i$-th frequency is $$\omega_i = 0.1103 (2i+1)^2 \omega_1$$


The impact calculation needs adjusting: the effective lumped mass of a slender beam of length $\ell$ and mass $m_{rod}$, when impacted a distance $c$ from the center of mass (here $c$ denotes an offset, not the wave speed above), is: $$m = \frac{\ell^2}{\ell^2+12 c^2} m_{rod}$$


homework and exercises - Young's Double Slit Experiment : What would happen if the "first slit" was too wide?



A picture showing Young's Double Slit experiment


I was wondering what would happen to the fringe pattern displayed on the screen if the first slit (as shown in the picture), which is also known as "single slit", was made a bit wider. I read it in my book. I don't understand it. Anyway, it quotes



If the single slit is too wide, each part of it produces a fringe pattern which is displaced slightly from the pattern due to adjacent parts of the single slit. As a result, the dark fringes of double slit pattern become narrower than the bright fringes, and contrast is lost between the dark and the bright fringes.



Please answer the question in your own words and try to explain to me what the quote is trying to say. Also, when it says "parts", what does it mean?




quantum mechanics - Learn QM algebraic formulations and interpretations



I have a good undergrad knowledge of quantum mechanics, and I'm interested in reading up more about interpretation and in particular things related to how QM emerges algebraically from some reasonable real-world assumptions. However I want to avoid the meticulous maths style and rather read something more meant for physicists (where rigorous proofs aren't needed and things are well-behaved ;) ). I.e. I'd prefer more intuitive resources as opposed to the rigorous texts.


Can you recommend some reading to get started?



Answer



An excellent book which does more or less what you ask for is Asher Peres' "Quantum Theory: Concepts and Methods". It starts from the Stern-Gerlach experiments and logical reasoning to develop the basic principles of quantum mechanics. From there, it develops the necessary algebra.


Another interesting book for an approach of the conceptual side of quantum mechanics is "Quantum Paradoxes" by Aharonov and Rohrlich. But to fully appreciate this one, I think you will need to go through a standard curriculum first.


Then, there is "Quantum computation and Quantum Information" by Nielsen and Chuang, which is meant as an introduction to the ideas of QM as applied to information theory for people with an informatics background mostly. So it also starts from an algebraic and conceptual approach.


group theory - Coadjoint orbits in physics


I am looking for some application of coadjoint orbits in physics. If you know some of them please let me know.



Answer



The Wilson loop observables inside 3d Chern-Simons gauge field theory are secretly themselves the quantization of a 1d field theory in terms of coadjoint orbits.


This possibly still surprising-sounding statement was hinted at already on p. 22 of the seminal




  • Edward Witten, Quantum Field Theory and the Jones Polynomial Commun. Math. Phys. 121 (3) (1989) 351–399. MR0990772 (project EUCLID)


    A detailed discussion of how this works is in section 4 of





  • Chris Beasley, Localization for Wilson Loops in Chern-Simons Theory, in J. Andersen, H. Boden, A. Hahn, and B. Himpel (eds.), Chern-Simons Gauge Theory: 20 Years After, AMS/IP Studies in Adv. Math., Vol. 50, AMS, Providence, RI, 2011. (arXiv:0911.2687)




following



  • S. Elitzur, Greg Moore, A. Schwimmer, and Nathan Seiberg, Remarks on the Canonical Quantization of the Chern-Simons-Witten Theory, Nucl. Phys. B 326 (1989) 108–134.


The idea is indicated on the nLab here.


As also discussed there, the statement that there is a coadjoint orbit 1d quantum field theory sort of "inside" 3d Chern-Simons theory has a nice interpretation from a point of view of extended quantum field theory. This we have discussed in section 3.4.5 of




So given the ubiquity of Chern-Simons theory in QFT, and the fact that much of what is interesting about it is encoded in its Wilson loop observables, this means that quantization of coadjoint orbits plays a similarly important role. For instance given that all of rational 2d conformal field theory is dually encoded, via the FRS theorem, by 3d Chern-Simons theory in such a way that CFT field insertions are mapped to the CS Wilson loops, this means that quantized coadjoint orbits are at work behind the scenes in much of 2d CFT.


newtonian mechanics - Velocity in a turning reference frame


I often see the relation that $\vec v=\vec v_0+ \vec \omega \times \vec r$ in a turning reference frame, but where does it actually come from and how do I arrive at the acceleration being $$\vec a=\vec a_0+ 2\vec \omega \times\vec v+ \vec \omega \times(\vec \omega \times \vec r)+\dot{\vec \omega} \times \vec r\,\,\text{?}$$


Is there a simple method to see this? All approaches that I saw use some non-intuitive change of differential operators and so on $\left(\frac{d}{dt} \rightarrow \frac{d}{dt}+\vec \omega \times{}\right.$ and so on$\left.\vphantom{\frac{d}{dt}}\right)$.



Answer




I don't think you can do much better than getting your head around the identity $$\frac{d}{dt} \rightarrow \frac{d}{dt}+\vec \omega \times,$$ which holds when the former is applied to vectors. The essential point of the identity is that even if a vector is stationary in one reference frame, it will have some rotational motion in the rotating frame.


It may help to rephrase this in matrix language: for any vector $\vec u$, it reads $$\frac{d}{dt} \begin{pmatrix}u_x\\u_y\\u_z\end{pmatrix} \rightarrow \frac{d}{dt} \begin{pmatrix}u_x\\u_y\\u_z\end{pmatrix} + \begin{pmatrix}0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x\\ -\omega_y & \omega_x & 0\end{pmatrix} \begin{pmatrix}u_x\\u_y\\u_z\end{pmatrix} = \begin{pmatrix} \frac{du_x}{dt} +\omega_y u_z-\omega_z u_y\\ \frac{d u_y}{dt}+\omega_z u_x-\omega_x u_z\\ \frac{du_z}{dt} +\omega_x u_y-\omega_y u_x\end{pmatrix} .$$ Thus, the rate of change of each vector component gets added a linear multiple of the other components, as they "rotate into it".


(For example, if $\vec \omega=\omega \hat{e}_z$, then $$\frac{d}{dt} \begin{pmatrix}u_x\\u_y\\u_z\end{pmatrix} \rightarrow \begin{pmatrix} \frac{du_x}{dt} -\omega u_y\\ \frac{d u_y}{dt}+\omega u_x\\ \frac{du_z}{dt} \end{pmatrix} ,$$ so that $u_x$ and $u_y$ transform into ($\pm$) each other as the frames rotate and the $x$ and $y$ axes rotate into ($\pm$) each other.)


That's the intuition behind the identity. Operationally, it is the easiest to apply (just substitute for $\frac{d}{dt}$), and it gives an unambiguous way to connect rates of change of vector components from one frame to another. What's not to love?
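As a final sanity check (my addition, not part of the identity itself), here is a quick numerical confirmation that the skew-symmetric matrix above acts exactly like $\vec \omega \times$:

```python
# Check that the skew-symmetric matrix really implements the cross product
# omega x u, i.e. that the matrix form and the vector form agree.
import numpy as np

omega = np.array([0.3, -1.2, 0.7])
u = np.array([1.0, 2.0, -0.5])

Omega = np.array([[0.0,       -omega[2],  omega[1]],
                  [omega[2],   0.0,      -omega[0]],
                  [-omega[1],  omega[0],  0.0]])

print(np.allclose(Omega @ u, np.cross(omega, u)))   # True
```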


Tuesday 28 November 2017

homework and exercises - "Sweet Spot" of Rod-Pendulum - Problem Clarification


I came across this problem in a book (shortened for brevity):



Consider a rod of mass $m$ pivoted about one end, with the other end free to rotate. Let the center of mass be a distance $a$ from the pivot point and let $I$ be the moment of inertia of the rod about the axis in which we will consider rotations. A particle comes in and hits the rod at a distance $b$ below the pivot point, imparting an impulse $F\Delta t=\xi$ on the rod. (a) Find the linear and angular momentum of the rod right after the time $\Delta t$, and (b) calculate the impulse imparted on the pivot point.




My problem is with (b). What the does "impulse imparted on the pivot point" even mean? I would think the pivot point is fixed, so it should have experienced no net impulse, but that's incorrect.



Answer



In order to maintain the constraint of the pivot during the impact, a reaction impulse is needed. See the figure below for what I mean.


[figure]


At the center of mass the velocity is $v = a\,\omega$. This is a result of the two impulses $$(F-R) \Delta t = m\, a\, \omega$$


If the angular velocity is $\omega$ then the net impulsive moments at the center of mass are


$$ (b F + a R) \Delta t = I \omega $$


These two equations are solved for the unknown reaction $R$ and motion of the rod $\omega$.


$$\begin{aligned} R \Delta t & = \frac{I-m\,ab}{I+m a^2} F \Delta t \\ \omega & = \frac{a+b}{I+m a^2} F \Delta t \end{aligned}$$


Only when $b=\frac{I}{m\,a}$ is the pivot reaction zero. That point is the axis of percussion of the rod about the pivot (the sweet spot).
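A small numerical sketch (my addition) evaluating these formulas for a uniform rod pivoted at one end, taking $b$ to be measured from the center of mass as in the moment equation above:

```python
# Evaluate the impulse formulas above for a uniform rod pivoted at one end.
import numpy as np

m, L = 1.0, 1.0
a = L / 2                      # pivot-to-centre-of-mass distance
I = m * L**2 / 12              # moment of inertia about the centre of mass
F_dt = 1.0                     # applied impulse (arbitrary units)

def reaction_and_omega(b):
    R_dt = (I - m * a * b) / (I + m * a**2) * F_dt
    omega = (a + b) / (I + m * a**2) * F_dt
    return R_dt, omega

for b in [0.0, L / 6, L / 3]:
    R_dt, omega = reaction_and_omega(b)
    print(f"b = {b:.3f}:  R*dt = {R_dt:+.3f},  omega = {omega:.3f}")

# R*dt vanishes at b = I/(m a) = L/6 from the centre of mass,
# i.e. a + b = 2L/3 from the pivot, the familiar sweet spot of a uniform rod
print(np.isclose(reaction_and_omega(I / (m * a))[0], 0.0))
```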



special relativity - Confusing time dilation - proper time is higher?


The problem states that 2 rockets of proper length 100 m are going in opposite directions. In the frame of rocket A, the tip of B took 5 microseconds to pass rocket A. If a clock on the tip of B marked t=0 when their tips met, what does that clock say when the tip of B reaches the end of A? (I assume that all of this is measured from rocket A.)


First, I computed the relative velocity (dividing the length travelled by the time it took), $v= 2 \times 10^7$ m/s. So $\gamma=1.00223$.


Then I used the Lorentz transform of times: $t' = \gamma(t-(v \times 100)/c^2)$, then $t' = 4.989 \times 10^{-6}$ seconds.


I understand the math but this doesn't match with the statement "proper time is always the lowest" because this proper time $5 > 4.989$ microseconds.



Answer



Your calculation is correct but you muddled which one is the proper time. Proper time is the time elapsed between events as observed in the frame in which those events are at the same spatial location. In other words, it is the time registered by a clock that is carried from one event to the other. This is the clock on the tip of B in this example. It registers the time $t'$.
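For completeness, a quick arithmetic sketch of the numbers using the question's inputs:

```python
# Arithmetic check of the numbers in the question
# (proper length 100 m, 5 microseconds in rocket A's frame).
import math

c = 3.0e8                      # m/s, as used in the question
L = 100.0                      # m, length of rocket A in its own frame
t = 5.0e-6                     # s, time for B's tip to traverse A, measured in A

v = L / t                      # relative speed
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
t_prime = gamma * (t - v * L / c**2)   # Lorentz transform of the second event

print(v, gamma, t_prime)       # 2e7 m/s, ~1.00223, ~4.989e-6 s
```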


quantum field theory - Two math methods apply the same loop integral lead different results! Why?


I tried to adopt the cut-off regulator to calculate a simple one-loop Feynman diagram in $\phi^4$-theory with two different math tricks. But in the end, I got two different results and was wondering if there is a reasonable explanation.



The integral I'm considering is the following $$ I=\int^\Lambda\frac{d^4 k}{(2\pi)^4}\frac{i}{k^2-m^2+i\epsilon} \qquad\text{where}\qquad \eta_{\mu\nu}=\text{diag}(-1,1,1,1) $$ $\Lambda$ is the cut-off energy scale and $\epsilon>0$. Then I do the calculations.


Method #1 - Residue Theorem:


Since $$ I=i\int\frac{d^3\vec{k}}{(2\pi)^3}\int_{-\infty}^{+\infty}\frac{dk^0}{2\pi}\left[\frac{(2k^0)^{-1}}{k^0+z_0}+\frac{(2k^0)^{-1}}{k^0-z_0}\right] \qquad\text{where}\qquad z_0=\sqrt{|\vec{k}|^2+m^2}-i\epsilon $$ choosing the upper contour in $k^0$-complex plane which encloses the pole, $-z_0$, we have $$ \begin{align} I &= \int\frac{d^3\vec{k}}{(2\pi)^3}\frac{1}{2\pi i}\oint dk^0\frac{(-2k^0)^{-1}}{k^0+z_0}\\ &= \frac{1}{2}\int\frac{d^3\vec{k}}{(2\pi)^3}\frac{1}{\sqrt{k^2+m^2}}\\ &= \frac{1}{4\pi^2}\int_0^\Lambda \frac{k^2dk}{\sqrt{k^2+m^2}}\\ &= \frac{1}{8\pi^2}\left[\Lambda^2\sqrt{1+\frac{m^2}{\Lambda^2}}-m^2\ln\left(\frac{\Lambda}{m}\bigg)-m^2\ln\bigg(1+\sqrt{1+\frac{m^2}{\Lambda^2}}\right)\right]\\ & \approx \frac{1}{8\pi^2}\left[\Lambda^2-m^2\ln\left(\frac{\Lambda}{m}\right)-m^2\ln2\right] \end{align} $$


Method #2 - Wick Rotation:


Drawing the poles, $-z_0, z_0$, one finds the integration contour can be rotated anticlockwise so that, $$ \begin{align} I &= i\int\frac{d^3\vec{k}}{(2\pi)^3}\int_{-i\infty}^{+i\infty}\frac{dk^0}{2\pi}\frac{1}{k^2-m^2+i\epsilon}\\ &= -i\int\frac{d^3\vec{k}}{(2\pi)^3}\int_{-\infty}^{+\infty}\frac{idk_4}{2\pi}\frac{1}{k^2_E+m^2} \end{align} $$ where $k_4=-ik^0$ and $k_E^2=-k^2$, which are $4d$ Euclidean variables. So we have $$ \begin{align} I&=\int\frac{d^4k_E}{(2\pi)^4}\frac{1}{k^2_E+m^2}\\ &=\frac{1}{16\pi^2}\int_0^{\Lambda^2}\frac{k_E^2 d(k_E^2)}{k^2_E+m^2}\\ &=\frac{1}{8\pi^2}\left[\frac{\Lambda^2}{2}-m^2\ln\left(\frac{\Lambda}{m}\right)-\frac{m^2}{2}\ln\left(1+\frac{m^2}{\Lambda^2}\right)\right] \end{align} $$

Comparing the results obtained from the above two methods, we find that only the $\ln\Lambda$-dependent parts are the same; the other two parts (the $\Lambda^2$ dependence and the finite piece) are different. Since I use the same regulator, it's a bit weird to me how the math tricks could affect the results.



Answer



I don't think it is exactly the same regulator: In the first method, you integrate $\int_{-\infty}^\infty dk^0 \int^\Lambda d^3k$, but in the second calculation you integrate $\int^\Lambda d^4 k_E$.
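A short numerical sketch (my addition) makes this explicit: evaluating the two regulated integrals quoted in the question shows that their difference grows like $\Lambda^2/(16\pi^2)$, while the coefficients of $\ln\Lambda$ agree:

```python
# The two prescriptions cut off different integrals, so the quadratic pieces
# differ while the ln(Lambda) pieces agree.
import numpy as np
from scipy.integrate import quad

m = 1.0

def I_residue(Lam):
    # (1/4 pi^2) \int_0^Lambda k^2 dk / sqrt(k^2 + m^2)   (method #1)
    val, _ = quad(lambda k: k**2 / np.sqrt(k**2 + m**2), 0.0, Lam)
    return val / (4 * np.pi**2)

def I_wick(Lam):
    # (1/16 pi^2) \int_0^{Lambda^2} u du / (u + m^2)      (method #2)
    val, _ = quad(lambda u: u / (u + m**2), 0.0, Lam**2)
    return val / (16 * np.pi**2)

for Lam in [10.0, 30.0, 100.0, 300.0]:
    diff = I_residue(Lam) - I_wick(Lam)
    # subtracting the expected Lambda^2/(16 pi^2) mismatch leaves a constant,
    # so the ln(Lambda) coefficients of the two methods really do coincide
    print(Lam, diff - Lam**2 / (16 * np.pi**2))
```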


Monday 27 November 2017

quantum mechanics - Is there a formalism for talking about diagonality/commutativity of operators with respect to an overcomplete basis?


Consider a density matrix of a free particle in non-relativistic quantum mechanics. Nice, quasi-classical particles will be well-approximated by a wavepacket or a mixture of wavepackets. The coherent superposition of two wavepackets well-separated in phase space is decidedly non-classical.


Is there a formalism I can use to call this density matrix "approximately diagonal in the overcomplete basis of wavepackets"? (For the sake of argument, we can consider a specific class of wavepackets, e.g. of a fixed width $\sigma$ and instantaneously not spreading or contracting.) I am aware of the Wigner phase space representation, but I want something that I can use for other bases, and that I can use for operators that aren't density matrices e.g. observables. For instance: $X$, $P$, and $XP$ are all approximately diagonal in the basis of wavepackets, but $RXR^\dagger$ is not, where $R$ is the unitary operator which maps


$\vert x \rangle \to (\vert x \rangle + \mathrm{sign}(x) \vert - x \rangle) / \sqrt{2}$.


(This operator creates a Schrodinger's cat state by reflecting about $x=0$.)


For two different states $\vert a \rangle$ and $\vert b \rangle$ in the basis, we want to require an approximately diagonal operator $A$ to satisfy $\langle a \vert A \vert b \rangle \approx 0$, but we only want to do this if $\langle a \vert b \rangle \approx 0$. For $\langle a \vert b \rangle \approx 1$, we sensibly expect $\langle a \vert A \vert b \rangle$ to be proportional to a typical eigenvalue.




homework and exercises - X-ray: Observation of absorption edge



So I am working on this experiment on X-ray and kinda stuck on this last and following section.



[image]


I have done the measurements and plotted the graph as required in 1. and 2. See below,


[image]


However, I am afraid my graph doesn't look much like the graph from Kaye and Laby [3, p. 4.2.2 graph], the one below.


[image]


I would be very grateful if someone could please help me with an explanation of point 3. of the task and also the following 2 questions, Question 2 and Question 3. Thanks in advance.


Link to Kaye and Laby [3,p. 4.2.1 table]: http://www.kayelaby.npl.co.uk/atomic_and_nuclear_physics/4_2/4_2_1.html



Answer



The $\rm K_\alpha$ X-ray emission is due to an $\rm L$-shell electron falling into the $\rm K$-shell with the emission of a photon.
The $\rm K_\beta$ X-ray emission is due to an M-shell electron falling into the $\rm K$-shell with the emission of a photon with a higher energy than that of a $\rm K_\alpha$ photon.



As the atomic number increases so does the energy of the $\rm K$-photons.


In the reverse process an incoming photon, if it has enough energy, can knock out an electron which is in the $\rm K$-shell.
You will note that the energy of an incoming photon has to be greater than that of the $\rm K_\alpha$, $\rm K_\beta$, etc. photons, because the incoming photon has to remove the $\rm K$-shell electron from the atom completely, not just promote it to a higher energy level.
So for a given element, the energy of the $\rm K$-edge is greater than the $\rm K$-photons.


quantum mechanics - Diagonalization of Hubbard model for spinless fermions in 1D k-space


In real space we write basis vector for spinless fermions in binary notation for example if there are M=4 sites in system and N=2 fermions then basis vectors will be: $0011, 0101, 0110, 1001, 1010, 1100$. Hamiltonian in numerical form ($H=-t\sum_{}(c_j^\dagger c_{j+1}+h.c.)+U\sum_{}n_jn_{j+1}$) can be written simply using bitwise operations of C/C++, Fortran or MATLAB. One can see hopping part of H is off-diagonal and interaction part is diagonal in real space.


When we work in Fourier space Hamiltonain become $$\tilde{H}=\sum_k\epsilon_k\tilde{c_k^\dagger}\tilde{c_k} + \sum_k\tilde{U_k}\tilde{n_k}\tilde{n_{-k}}$$ with $\epsilon_k=-2t\cos{k}$ and $\tilde{U}_k=\frac{1}{L}\sum_j U(j) e^{-ik.j}$ as explained in this pdf.




What I can't understand is how we define our basis vectors in Fourier space.





My understanding about it:


What I have understood from this so far is the following: let us have a 1D line from $-\pi$ to $+\pi$ (first Brillouin zone) on which the $k$ points are discretely defined. If we have M=4 and N=2 then the set of $k$-points is $-\pi$, $-\frac{\pi}{2}$, $+\frac{\pi}{2}$, $+\pi$.
Now, considering these 4 points as sites on which fermions can reside, our basis vectors can again be given as they were in real space, i.e. $0011, 0101, 0110, 1001, 1010, 1100$.
For simplicity I take the limit $U=0$ and calculate the Hamiltonian for both the real and Fourier space cases.
REAL SPACE:
$$H_{R}=-t\begin{bmatrix} 0 & 1 & 0 & 0 & -1 & 0 \\ 1 & 0& 1& 1& 0& -1\\ 0 & 1& 0& 0& 1& 0\\ 0 & 1& 0& 0& 1& 0\\ -1 & 0& 1& 1& 0& 1\\ 0 & -1& 0& 0& 1& 0\\ \end{bmatrix}$$ Let t=1; then Eigenvalues=[-2, -2, -4.4e-16, 0, 2, 2] (using the MATLAB function eig())


FOURIER SPACE:


$\tilde{c_k^{\dagger}}\tilde{c_k}=\tilde{n_k}=$ number operator in k-space. So our hamiltonian for U=0 should be diagonal with values $$ H_{F}= -2t*diagonal[\cos{(\pi/3)}+\cos{\pi}, \cos{(-\pi/3)}+\cos{\pi}, \cos{(-\pi/3)}+\cos{(\pi/3)}, \cos{(-\pi)}+\cos{\pi}, \cos{(-\pi)}+\cos{(\pi/3)}, \cos{(-\pi)}+\cos{(-\pi/3)}] $$


$$=-t\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix}$$


for t=1 eigenvalues=[-2, 1, 1, 1, 1, 4].





The results do not match, so I suspect there is a fault in my method of defining basis vectors in k-space. Please guide me on how to properly build basis vectors in k-space.



Answer



I think you've made a couple of mistakes in your allowed k-vectors.


First, the allowed k-vectors are not $-\pi,-\frac{\pi}{2},\frac{\pi}{2},\pi$. The allowed k-vectors are $-\frac{\pi}{2},0,\frac{\pi}{2}, \pi$. In the Brillouin zone, $k=\pi$ and $k=-\pi$ are the same state, so you double counted this state while neglecting $k=0$.


Second, for some reason when you computed $H_F$, you wrote terms like $\cos(\frac{\pi}{3})$ on the diagonal. This is clearly an error, since $\frac{\pi}{3}$ is not an allowed k-value. If you write out $H_F$ more carefully, with the correct k-values, you should get the energies to match like you want.


(Note there could also be an error in your $H_R$, I didn't check it too closely. But fix the k-error and see!)
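Here is a small Python sketch (my addition, working at the single-particle level to sidestep the fermionic sign bookkeeping) showing that the corrected k-values reproduce the $U=0$ real-space spectrum $[-2, -2, 0, 0, 2, 2]$, while the original k-list does not:

```python
# For M = 4 sites with periodic boundaries the single-particle momenta are
# k in {-pi/2, 0, +pi/2, pi}, and filling any two of them reproduces the
# U = 0 many-body spectrum [-2, -2, 0, 0, 2, 2].
import numpy as np
from itertools import combinations

t = 1.0
k_correct = np.array([-np.pi/2, 0.0, np.pi/2, np.pi])
k_wrong   = np.array([-np.pi, -np.pi/2, np.pi/2, np.pi])   # the original list

def two_fermion_energies(ks):
    eps = -2 * t * np.cos(ks)
    return np.sort([eps[i] + eps[j] for i, j in combinations(range(len(ks)), 2)])

print(two_fermion_energies(k_correct))   # [-2, -2, 0, 0, 2, 2] up to rounding
print(two_fermion_energies(k_wrong))     # does not match the real-space result

# the same single-particle energies come from diagonalising the 4-site
# tight-binding ring directly
H1 = -t * np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)
print(np.sort(np.linalg.eigvalsh(H1)))   # [-2, 0, 0, 2] up to rounding
```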


Sunday 26 November 2017

classical mechanics - Physical meaning of the moment of inertia about an axis


In the context of rigid bodies, the inertia tensor is defined as the linear map that takes angular velocity to angular momentum, that is, the linear map $I : \mathbb{R}^3\to \mathbb{R}^3$ such that


$$\mathbf{L}=I\boldsymbol{\omega}.$$


Now, given one unit vector $\hat{\mathbf{n}}$ characterizing the direction of a line, one can define


$$I_{\mathbf{n}}=\hat{\mathbf{n}}\cdot I(\hat{\mathbf{n}}),$$


which is the moment of inertia about that axis.


In that setting, if $\boldsymbol{\omega}= \omega \ \hat{\mathbf{n}}$ one gets, for instance, the nice looking formula for kinetic energy:


$$T = \dfrac{1}{2}I\omega^2,$$



where $I$ is the moment of inertia about the axis of rotation.


Now, although I grasp mathematically what is going on, I have no idea whatsoever about the physical meaning of the moment of inertia about an axis.


What is the physical meaning of the moment of inertia about an axis? What it really is, and how this physical significance relates to the actual mathematical definition I gave?




visible light - What determines the form of the intensity curves in Laser-Induced Fluorescence (LIF) measurements?


What determines the form of the intensity spectra of different particle species in Laser-Induced Fluorescence (LIF) measurements? See e.g.


I figure that bigger particles have more ways to get excited and so the intensities accumulate and make the curve wider? But how exactly do I derive an expected curve for a given molecule type? Why the steep rise and the slower fall?


[image]



Answer



In short: the spectra can be explained by considering vibronic transitions, the Franck-Condon principle (http://en.wikipedia.org/wiki/Franck%E2%80%93Condon_principle), and the uncertainty principle. The explanation is as follows:


The intensity of a transition is determined by its probability amplitude $P$ \begin{equation} P = \langle \psi ' | \hat{\mu} | \psi \rangle \end{equation} where $\hat{\mu}$ is the molecular dipole moment operator and $\psi$ and $\psi'$ are the wavefunctions of the initial and final states, respectively. The Franck-Condon principle tells you that electronic transitions to and from the lowest vibrational states (0-0 transitions) are most probable. These transitions are responsible for the steep rise in the signals that you mention. The "slower fall" corresponds to multiple vibronic transitions from your excited electronic state in the lowest vibrational state to excited vibrational states in a lower electronic energy level. These transitions are noticeable in fluorescence spectroscopy because, after the molecule is excited by the laser, there is enough time for some of the energy to be dissipated as heat (i.e. rotations, translations and vibrations) before it emits a photon.



All of these transitions are of course strictly quantized in energy and, in principle, you should be able to observe individual, discretized lines instead of a single broad signal. However, because of the uncertainty principle, spectral lines always show line broadening. This uncertainty (in energy units) is given approximately by \begin{equation} \Delta E = \hbar \tau^{-1} \end{equation} where $\tau$ is the lifetime of the chemical species. This lifetime can be increased by reducing the temperature and, if you take the spectra that you show above at very low temperatures, you should be able to resolve the lines of the different vibronic transitions that make up your broad signals.


cosmology - Looking out into the universe means looking back in time - how does that work?


This is a question that has been gnawing on me for many years now. Back a long time ago, as I recall in reference to a scene in a popular science show on TV, I was asked the following.


The claim is that when you look out into the universe, you see the universe as it appeared at some past time. (This follows from the speed of light being finite, and looking at objects by observing their emitted light or other EM radiation.) The amount of time that one looks backwards is equal to the distance to the observed object in light-distance (so looking at something ten lightyears away means you are observing it as it was ten years ago).


Matter moves at a rate of speed less than the speed of light (it has to, since matter has mass, even if miniscule at the atomic level). So $v_{mass} \lt v_{EM}$ (probably significantly less, since $v_{EM} = c$).


So let's say you're looking at an object $10^{10}$ lightyears away by observing something for which $v=c$. To the observer, that object appears as it was $10^{10}$ years ago. But the Earth is much younger than $10^{10}$ years and we established that matter moves at a slower speed than EM radiation, so wouldn't the radiation that was emitted $10^{10}$ years ago long since have passed Earth's current position?


If, say, $v_{mass} = 0.5 \times c$ (a big assumption, but bear with me for a second), it would seem that the radiation emitted $10^{10}$ years ago would have passed Earth's current position $10^{10} - (0.5 \times 10^{10}) = 5 \times 10^{9}$ years ago, around the time when the solar system was still forming. Setting $v_{mass} = 0.25 \times c$ (which seems more realistic) means the radiation would have passed "us" around $7.5 \times 10^{9}$ years ago. So how could we be observing it now?


I'm not sure I'm posing this question in the best possible way (I'll admit I do find the concept somewhat confusing), and I'm sure that there's a simple explanation for it all. Just what am I missing?


I did find Is it possible to look into the beginning of the Universe? which seems peripherally related but not quite the same thing. Qmechanic brought up Seeing cosmic activity now, really means it happens millions/billions of years ago? but that question seems to be about whether it is so, not why it is so.



Answer



I think you're missing out on one of the basic results that motivated special relativity.



Light moves at the speed of light with respect to everything, regardless of the speed of that object relative to the source of the light. It doesn't matter how fast the source of the light is moving, the light emitted from the source is always going to move at the same speed light always moves at with respect to everything else in the universe: $c$.


If this sounds contradictory, it's because the equations we use for figuring out the relative speeds of things on earth don't work for speeds close to the speed of light.


higgs - When do we consider states under a $U(1)$ transformations to be physically different?


Consider the Goldstone model of a complex scalar field $\Phi$. It has $U(1)$ global symmetry, so if we apply the transformation $\Phi \to e^{i\alpha} \Phi$ the Lagrangian is left invariant.


Furthermore, we have an infinite set of possible vacua all with the same non-zero vacuum expectation value. But the vacuum changes under a $U(1)$ transformation, so $U(1)$ symmetry is spontaneously broken.




  • In this case we assume that for each value $\alpha$, $e^{i\alpha} |0 \rangle$ corresponds to a different state, right?

  • But since we can't measure a phase, wouldn't it be more natural to consider them the same states? On the other hand, if I think of the real and imaginary parts as two independent fields, I would say that they shouldn't be the same states.


Let's now couple $\Phi$ to a gauge field, such that the Lagrangian is invariant under local $U(1)$ transformations. Then we do consider $e^{i\alpha} | \Psi \rangle$ to be all the same states, right? But then, doesn't this mean that the Higgs vacuum is unique?




quantum mechanics - Can two photons annihilate?


This is a question about definitions. When two photons interact to create an electron/positron pair, does this process 'count' as annihilation of the photons? I've struggled to find a good definition of the term. Some places say that annihilation requires the end state to be electromagnetic radiation. But, on the other hand, I have found several text books which give annihilation processes ending in hadrons.




Saturday 25 November 2017

electromagnetism - Three questions and explanations for the Lorentz invariant $E^2-c^2B^2$


One can show that the trace of the square of the electromagnetic tensor satisfies: $$ \mathrm{Tr}\,{F}^2_{\mu\nu}=\frac{2}{c^{2}}(E^2-c^2B^2). $$ Proof: $F_{\mu\nu}=-F_{\nu\mu}$, hence $$ \mathrm{Tr}\,{F}^2_{\mu\nu}=\sum_{\mu}\left(F^{2}\right)_{\mu\mu}=-\sum_{\mu\nu}F_{\mu\nu}F_{\nu\mu}=-\sum_{\mu\nu}F_{\mu\nu}^{2}= $$ $$ =-2\left[B_{1}^{2}+B_{2}^{2}+B_{3}^{2}-\frac{1}{c^{2}}\left(E_{1}^{2}+E_{2}^{2}+E_{3}^{2}\right)\right]= $$



$$=-\frac{2}{c^{2}}\left(c^{2}B^2-E^2\right)=\frac{2}{c^{2}}\left(E^2-c^{2}B^2\right)$$


I have also seen this explanation of the Lorentz invariant $E^2-c^2B^2$:


[image]


Also, on the site Why is this invariant in Relativity: $E^2−c^2B^2$? there is only limited information, mathematical and physical, about the following cases:




  1. $E^2-c^2B^2=0$




  2. $E^2-c^2B^2>0$





  3. $E^2-c^2B^2<0$




For item 2.) $E^2-c^2B^2>0$ in $\Sigma$. Then there will be a reference frame $\Sigma'$ such that $\overline{B}'=\textbf{0}$, i.e. the interaction is purely electric. Why?


For item 1.) $E^2-c^2B^2=0$ in $\Sigma$ is the case of a plane wave: why? We can also say that if we have a plane wave in an inertial frame $\Sigma$ we will still find a plane wave in any other inertial frame $\Sigma'$.


For item 3.) $E^2-c^2B^2<0$ in $\Sigma$. Both $\overline{E}$ and $\overline{B}$ are different from zero in each reference frame (otherwise both would have to be null and therefore there would be no electromagnetic wave). Is a wire carrying a current an example? Is that correct, and why?
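Although it does not answer the "why", here is a quick numerical sketch (my own check, using the standard field transformation laws for a boost and arbitrary test fields) confirming that $E^2-c^2B^2$ comes out the same in both frames:

```python
# Check that E^2 - c^2 B^2 is unchanged under a Lorentz boost of the fields.
import numpy as np

c = 1.0                                   # work in units with c = 1
E = np.array([0.3, -1.1, 0.4])
B = np.array([0.7, 0.2, -0.5])

def boost_fields(E, B, v):
    """Boost E and B to a frame moving with velocity v (|v| < c)."""
    v = np.asarray(v, dtype=float)
    beta2 = np.dot(v, v) / c**2
    gamma = 1.0 / np.sqrt(1.0 - beta2)
    n = v / np.linalg.norm(v)
    E_par, B_par = np.dot(E, n) * n, np.dot(B, n) * n
    E_perp, B_perp = E - E_par, B - B_par
    E_new = E_par + gamma * (E_perp + np.cross(v, B))
    B_new = B_par + gamma * (B_perp - np.cross(v, E) / c**2)
    return E_new, B_new

for v in [np.array([0.5, 0.0, 0.0]), np.array([0.2, -0.3, 0.6])]:
    E2, B2 = boost_fields(E, B, v)
    print(np.dot(E, E) - c**2 * np.dot(B, B),
          np.dot(E2, E2) - c**2 * np.dot(B2, B2))   # the two numbers agree
```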




atomic physics - What enables protons to give new properties to an atom every time one is added?


How does adding one more particle to the nucleus of an atom give that atom new properties? I can see how it changes it's mass, that's obvious... But how does it give that new atom different properties like color?


A good example would be: start with a copper atom (Cu), with the atomic number 29, thus Cu has 29 protons, and you add one proton to the nucleus you are left with an atom of Zinc (Zn) with the atomic number 30, thus 30 protons. The first element mentioned is a totally different color than the second, and conducts electricity better etc.


Not only protons, but neutrons, which are the same type of particle (Baryon) affect the properties of the element in a much different and much less important manner. Adding a neutron only creates an isotope of that element, not a different one all together, unlike adding a proton.


Also, it is obvious that adding (or subtracting) electrons does not make a difference. For example, if you remove 28 electrons (I know that would take huge amounts of energy, but lets ignore that) that "orbit" the copper atom, we are still left with a copper atom, although an ion, but still a copper atom.


So, its apparent that only protons play a major role in "making" elements different from each other. How and why? Also, the same can be asked about the protons themselves and quark flavor.



Answer



You are not correct in your latter part of the analysis; the chemical properties (which is mostly what matters in ordinary matter) almost only depend on the electron shell, and in particular the outermost electrons (called the valence electrons).


So more protons mean more electrons and a different electron shell, meaning different chemical properties.



Why there is such a diversity of properties just by changing around the electron shell, is one of the wonders of chemistry! Due to quantum mechanics, the electrons don't simply spin around the nucleus like planets around the sun, but arrange themselves in particular, complicated patterns. By having different patterns, you can achieve a lot of different atom<->atom binding geometries, at a lot of different energies. This is what gives the diversity of chemical properties of matter (see the periodic table).


You can add or remove electrons to an atom to make the electron shells look more like the shells of another atom (with a different number of protons), but then the atom as a whole is then no longer electrically neutral, and due to the strength of the electromagnetic force, the resulting ion does not imitate the other atom type very well (I'm not a chemist - I'm sure there are properties that indeed could become similar).


Many physical properties are also mostly due to the electron shells, like photon interactions including color. Mass obviously is almost only due to the nucleus though, and I should add that in many chemical processes the mass of the atoms are important for the dynamics of processes, even if it isn't directly related to the chemical bindings.


This was just a small introduction to chemistry and nuclear physics ;)


newtonian mechanics - How to calculate a collision which is partly elastic and partly inelastic?


(For the purpose of this question, "calculating a collision" means: given the velocities and masses of two objects in a collision, figuring out the new velocities of both objects after the collision).


I know how to calculate a totally elastic collision, and how to calculate a totally inelastic collision.


But I don't know how to calculate a collision which is part elastic and part inelastic. I don't know where to start.


Guidance will be appreciated.


(Go easy on the math please).




fluid dynamics - Does water turn solid under deep ocean because of high pressure?



I know that we can make water solid with high pressure, so I think water will be solid in the deep ocean?


If that is true, the depth of the ocean would be limited because water will become ice? Anyone know that maximum depth?



Answer



You are mistaken. Actually, you can melt ice by applying pressure. This is why ice is so slippery: when you step on a frozen lake, you are melting the very first layer of water, and thus creating a very good instant lubricant to slide on. (This is a commonly repeated but false explanation; see the comments.)


Ok, granted, at very high pressures water does become solid. From the phase diagram, to get solid at around 0C you need around 650 MPa. How much is that? Pressure depends with depth as:


$$P = \rho g h$$


Assuming constant density, you need a column of water of $66\ km$ for ice to be formed. That is about six times the depth of Challenger Deep, in Mariana trench.
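As a one-line sketch of that estimate (650 MPa from the phase diagram, constant density as assumed above):

```python
# Depth of a constant-density water column needed to reach 650 MPa.
rho, g, P = 1000.0, 9.81, 650e6      # kg/m^3, m/s^2, Pa
print(P / (rho * g) / 1000.0)        # ~ 66 km
```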


So the answer is no... on Earth. You will not find enormous amounts of more or less pure liquid water anywhere else in the Solar System, but if you are happy with hydrogen, helium, and other gases, you may find it around Jupiter's core. Definitely, liquid H and He.


When water is mixed with other elements, the phase diagram is perturbed. For example, salt in the sea at atmospheric pressure lowers the freezing point about a couple of degrees (depending on the concentration). If water is mixed with hydrogen, helium, methane, and company as in a gas giant, the diagram will be drastically changed, so more detailed computations would be needed.


renormalization - Could the full theory of quantum gravity just be a nonrenormalizable quantum field theory?


This may be more of a philosophical question than a physics question, but here goes. The standard line is that nonrenormalizable QFT's aren't predictive because you need to specify an infinite number of couplings/counterterms. But strictly speaking, this is only true if you want your theory to be predictive at all energy scales. As long as you only consider processes below certain energy scales, it's fine to truncate your Lagrangian after a finite number of interaction terms (or stop your Feynman expansion at some finite skeleton vertex order) and treat your theory as an effective theory. Indeed, our two most precise theories of physics - general relativity and the Standard Model - are essentially effective theories that only work well in certain regimes (although not quite in the technical sense described above).


As physicists, we're philosophically predisposed to believe that there is a single fundamental theory, that requires a finite amount of information to fully specify, which describes processes at all energy scales. But one could imagine the possibility that quantum gravity is simply described by a QFT with an infinite number of counterterms, and the higher-energy the process you want to consider, the more counterterms you need to include. If this were the case, then no one would ever be able to confidently predict the result of an experiment at arbitrarily high energy. But the theory would still be completely predictive below certain energy scales - if you wanted to study the physics at a given scale, you'd just need to experimentally measure the value of the relevant counterterms once, and then you'd always be able to predict the physics at that scale and below. So we'd be able to predict that physics at arbitrarily high energies that we would have experimental access to, regardless of how technologically advanced our experiments were at the time.


Such a scenario would admittedly be highly unsatisfying from a philosophical perspective, but is there any physical argument against it?



Answer



You suggest that we can use a nonrenormalizable theory (NR) at energies greater than the cutoff, by measuring sufficiently many coefficients at any energy.


However, a general expansion of an amplitude for a NR that breaks down at a scale $M$ reads $$ A(E) = A^0(E) \sum c_n \left (\frac{E}{M}\right)^n $$ I assumed that the amplitude was characterized by a single energy scale $E $. Thus at any energy $E\ge M$, we cannot calculate amplitudes from a finite subset of the unknown coefficients.


On the other hand, we could have an infinite stack of (NR) effective theories (EFTs). The new fields introduced in each EFT could successively raise the cutoff. In practice, however, this is nothing other than discovering new physics at higher energies and describing it with QFT. That's what we've been doing at colliders for decades.



Friday 24 November 2017

quantum field theory - Why can't gauge bosons have mass?


Clearly, a mass term for a vector field would render the Lagrangian not gauge-invariant, but what are the consequences of this? Gauge invariance is supposed to be crucial for the renormalisation of a vector field theory, though I have to say I'm not entirely sure why.


As far as removing unphysical degrees of freedom - why isn't the time-like mode $A_0$ a problem for massive vector bosons (and how does gauge invariance of the Lagrangian ensure that this mode is unphysical for gauge bosons)?




nuclear physics - What does the Atomic Form Factor mean?



I was reading Nuclear Physics and the author mentioned something about the atomic form factor, something in relation to the Fourier transform of the spatial distribution of the electric charge, but I don't know what it means. What is the physical interpretation of the atomic form factor?



Answer



There are a variety of ways one could answer this. As you note, the form factor is the Fourier transform of the spacial distribution of the electric charge density. The subtlety with this definition is that for a composite system such as a nucleus (or atom, or nucleon), defining a "net charge density" that is just a function of spacial coordinates, i.e. $\rho(x, y, z)$, is an approximation. The exact description of an atom or nucleus is given by a many-body wavefunction.


That aside, $\rho(x, y, z)$ for an atom or nucleus can be roughly interpreted as the charge density associated with an effective potential that is arrived at by averaging over the motion of the constituent particles. The form factor $f(Q)$ is Fourier transform of this.


Consequently, one should expect the form factor associated with an atom to deviate from zero around $1/(10^{-10} \textrm{m})$ and $1/(10^{-15} \textrm{m})$. The former corresponds to electronic degrees of freedom (e.g. the "wavefunction" of the electrons) and the latter corresponds to nuclear degrees of freedom.


The form factor $f(Q)$ will provide a measure of the interaction between the atom and an incident photon with momentum $Q$. Consequently, photons with a wavelength $\sim 10^{-10} \textrm{m}$ (i.e. visible or UV light) or $\sim 10^{-15} \textrm{m}$ (i.e. gamma rays) will interact strongly with the atom by coupling to the electronic or nuclear degrees of freedom, respectively.


One can also define other kinds of form factors that correspond to different sorts of interactions (e.g. magnetic, the strong force, etc.).


homework and exercises - 5D Ricci Curvature


As part of a homework problem for a class, we're supposed to derive the equivalence given in equation 2.3 of this paper http://arxiv.org/abs/1107.5563. I was wondering if there is some special relation between the Ricci curvature in 5D and the one in 4D, since with a general metric like the one given in 2.1, calculating the Christoffel symbols directly would seem to be an enormous task and not a particularly smart approach.




newtonian mechanics - Deriving torque equation from Newton's 2nd Law


I'm trying to understand the derivation of the torque equation $\vec{r} \times \vec{F} = I \alpha$. My textbook derives this easily enough from Newton's 2nd Law for a single point with mass $m$ and radial distance $r$, with the force applied at the same distance $r$, as (if we drop the vector notation for simplicity) $F=ma=m(r\alpha)$, so $rF=mr^2 \alpha = I\alpha$. (Note that the expression contains both $a$'s and $\alpha$'s; they look similar.)


The textbook stops there and concludes that the equation holds for any rotating body. But then I tried this derivation for two points positioned along a rigid, massless rod (which points perpendicular to an axis about which it rotates). If there's a point-mass $m_1$ a distance $r_1$ from the axis of rotation and another point-mass $m_2$ at a distance $r_2$, and the force is applied at (for example) distance $r_2$, I get $F=m_1 a_1 + m_2 a_2 = m_1 r_1 \alpha + m_2 r_2 \alpha$, so $r_2 F= (m_1 r_1 r_2 + m_2 {r_2}^2 )\alpha$. But $m_1 r_1 r_2 + m_2 {r_2}^2$ isn't the correct value for $I$ here. What am I missing?




cosmology - What fraction of baryonic matter is in stars?


We know from big bang nucleosynthesis that baryonic matter accounts for about 5% of the universe's total mass-energy density. What is the current best estimate of how much of this is in the form of stars? I'm guessing that this would be known only very roughly. It seems like you would just have to survey large volumes of space for stars and larger structures made out of stars. Although galaxies may pretty efficiently cycle their hydrogen and helium through stars, I would assume that there is a lot of hydrogen out there in the spaces between superclusters that has never had a chance to form a star and never will.


related: How do we estimate $10^{23}$ stars in the observable universe?




Thursday 23 November 2017

bosons - Time of flight images of Bose-Hubbard model


On the website of Immanuel Bloch, you can find time-of-flight images of bosonic particles inside an optical lattice for different values of the lattice depth.


(http://www.quantum.physik.uni-mainz.de/bec__experiments__mottinsulator.html.en)


Time-of-flight absorption images at increasing lattice depths


Can someone explain this picture precisely? What does the distance between the interference peaks refer to?



Answer



This is a time-of-flight image, and it therefore shows the momentum distribution of the atoms. As explained in the original Nature paper (10.1038/415039a), the distance between the peaks (in the picture, i.e., in momentum space) is $2\hbar k$, where $k = 2\pi/\lambda$ is the wavenumber corresponding to the periodicity of the lattice, with $\lambda$ being the laser wavelength.


On the top left the system is in the superfluid (Bose-Einstein condensed) phase, as indicated by the sharp interference peaks. All the atoms are delocalized over the whole lattice and are thus mutually coherent.


In the bottom-rightmost picture, the peaks are gone, so there is no phase coherence. The system is in the Mott phase, where each atom is localized at a single site.


In between the two extremes the superfluid-to-Mott phase transition takes place. There, as the depth of the lattice is increased, the atoms first become slightly localized (indicated by the extra peaks appearing and gaining strength), and then the incoherent background takes over (the system forms the layered-cake structure of several Mott regions with different occupancy).
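
A minimal numerical sketch of this interpretation (my own toy model, not from the paper; the laser wavelength is just an assumed number): the time-of-flight signal is essentially $|\sum_j e^{ikx_j + i\phi_j}|^2$ over the lattice sites, which shows sharp peaks spaced by $2\hbar k_{\rm lat}$ when all sites share a common phase, and washes out when the on-site phases are random:

```python
# Superfluid vs Mott-like momentum distribution from a 1D chain of lattice sites.
import numpy as np

lam = 852e-9                    # assumed lattice laser wavelength (m)
d = lam / 2.0                   # lattice spacing
x = np.arange(50) * d           # positions of 50 occupied sites
k = np.linspace(-3, 3, 2001) * (2 * np.pi / lam)   # probed wavenumbers, -3 k_lat .. +3 k_lat

def tof_signal(phases):
    # |sum_j exp(i k x_j + i phi_j)|^2 evaluated at every probed k
    return np.abs(np.exp(1j * (np.outer(k, x) + phases)).sum(axis=1)) ** 2

coherent = tof_signal(np.zeros(50))                                        # common phase
incoherent = tof_signal(2 * np.pi * np.random.default_rng(1).random(50))   # random phases

print("peak/mean contrast (coherent):  ", float(round(coherent.max() / coherent.mean(), 1)))
print("peak/mean contrast (incoherent):", float(round(incoherent.max() / incoherent.mean(), 1)))
# The coherent case has sharp peaks at k = 0 and +-2 k_lat, i.e. momenta separated by
# 2*hbar*k_lat as in the answer; with random phases the contrast largely disappears.
```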



thermodynamics - Flame shape and size (length) depending on gravity




How would the shape and size of a flame, e.g. from a simple candle, depend on gravity? Suppose all the relevant information is known, including the candle's dimensions and chemical composition, the atmosphere's properties (chemical composition, pressure, etc.), and anything else.



Answer



After some searches I found this little paper. I will give only a short answer to the question; for details you can read the paper.


So, according to the results obtained for the model in the paper:




  1. the flame length:



    • increases as the gravity level increases from $0g_e$ to $3g_e$

    • decreases from $3g_e$ to $60g_e$


    • and blows off at higher gravity levels




  2. the maximum width of the flame decreases as the gravity level increases.




The evolution of the flame from $0g_e$ to $5g_e$ can be observed in the next figure. Flame shape contours at various gravity levels


However, if the top surface of the wick is made inert, the model predicts that the flame blows off at $6g_e$.


$g_e$ - the gravitational acceleration at the Earth's surface.



quantum mechanics - Is the molecular term ${}^1Sigma^-$ possible in a molecule?


The old question How to understand this symmetry in the wavefunctions of a diatomic molecule? explores how it is possible for a quantum state to have zero angular momentum about a given axis (giving it a term $\Sigma$) while using multi-electron effects to stay parity-odd with respect to reflections in planes that contain that axis.



One of the things that fall out of that analysis is that, if this is done using a two-electron system, then the orbital part of the state needs to be odd under exchange, which forces the spin part to be even and therefore forces the spin representation to be a triplet, making the full term symbol ${}^3\Sigma^-$.


My question here is: is it possible to use more than two electrons, coupled in some clever way, to whittle that spin representation down to a singlet, for a full term symbol of ${}^1\Sigma^-$? If so, what is an explicit example? What is the minimal number of electrons needed for such a term? Or are there other restrictions in place that make that term impossible no matter how you try?



Answer



From a chemistry perspective (so I am afraid this might not be as rigorous as you guys usually like, but it is what I can offer), we would usually use group theory to determine the term symbol for the molecule. tom has kindly provided the examples of dioxygen and dinitrogen. Since these molecules are centrosymmetric, the term symbol technically should include a gerade/ungerade label as well, i.e. $^1\Sigma_\mathrm g^-$ or $^1\Sigma_\mathrm u^-$, but it's not particularly important.


Using dioxygen as an example ($D_{\infty\mathrm h}$ point group, character table here) and ignoring the core 1s-electrons, the ground state has the electronic configuration $(1\sigma_\mathrm g)^2(1\sigma_\mathrm u)^2(2\sigma_\mathrm g)^2(1\pi_\mathrm u)^4(1\pi_\mathrm g)^2$. A quick sketch of the MO diagram is provided below.


MO diagram of dioxygen


The first ten electrons are all paired up and collectively transform as the totally symmetric irreducible representation, i.e. $\Sigma_\mathrm g^+$, so from a symmetry point of view we only need to consider the top two electrons. Both electrons are in a $\pi_\mathrm g$ orbital, and collectively these transform as the direct product of $\Pi_\mathrm g$ with itself. (Irreducible representations are labelled with capital Greek letters, whereas small letters are used for MO symmetry labels.)


$$\Pi_\mathrm g \times \Pi_\mathrm g = \Sigma_\mathrm g^+ + [\Sigma_\mathrm g^-] + \Delta_\mathrm g$$


This means that a $(\pi_\mathrm g)^2$ configuration can have an overall symmetry of either of these three, depending on how the electrons are configured. Now, the square brackets come into play: these represent antisymmetrised direct products, i.e. spatial wavefunctions which are antisymmetric upon permutation of the two $\pi_\mathrm g$ electrons. In order to satisfy the Pauli exclusion principle, this must be paired with the symmetric spin wavefunction, i.e. a triplet spin function. Likewise, the symmetric spatial wavefunctions $\Sigma_\mathrm g^+$ and $\Delta_\mathrm g$ must be paired with the antisymmetric singlet spin function.


All in all, for the ground state electronic configuration of dioxygen $(1\sigma_\mathrm g)^2(1\sigma_\mathrm u)^2(2\sigma_\mathrm g)^2(1\pi_\mathrm u)^4(1\pi_\mathrm g)^2$, we have the permissible term symbols $^1\Sigma_\mathrm g^+$, $^3\Sigma_\mathrm g^-$, and $^1\Delta_\mathrm g$. Obviously, this does not contain the desired $^1\Sigma_\mathrm g^-$, and that is why it does not appear as one of the lower-energy terms in the NIST webbook. There's a good chance you know all of this already, but I thought it helpful to set the scene (and maybe future readers do not know it, so it can't hurt).





The way to circumvent these symmetry-restricted direct products, and hence to obtain the $^1\Sigma_\mathrm g^-$ term symbol, is to make sure that the two electrons are not in the same $\pi_\mathrm g$ orbital. As long as this is the case, there is no longer any restriction on which spatial wavefunctions can be paired with which spin wavefunctions. If you (hypothetically) promote one electron to the next-highest $\pi_\mathrm g$ orbital, such that you have an electronic configuration of $(1\sigma_\mathrm g)^2(1\sigma_\mathrm u)^2(2\sigma_\mathrm g)^2(1\pi_\mathrm u)^4(1\pi_\mathrm g)^1(2\pi_\mathrm g)^1$, then the $^1\Sigma_\mathrm g^-$ term symbol is no longer symmetry-forbidden. [This $2\pi_\mathrm g$ orbital would be formed from overlap of the 3p orbitals of oxygen.]


Why is this so? I actually don't know how to explain it intuitively, but I can demonstrate it with a concrete wavefunction. Let's label the two lower-energy $\pi_\mathrm g$ orbitals by $\psi_{a+}$ and $\psi_{a-}$, and the two higher-energy $\pi_\mathrm g$ orbitals by $\psi_{b+}$ and $\psi_{b-}$. The sign $\pm$ indicates the direction of the angular momentum projection along the internuclear axis: $\pi$-type orbitals come in pairs with $+1$ and $-1$ units of this angular momentum. Now consider the following wavefunction (I ignore normalisation).


$$\Psi = \psi_{a+}(1)\psi_{b-}(2) - \psi_{a-}(1)\psi_{b+}(2) + \psi_{b-}(1)\psi_{a+}(2) - \psi_{b+}(1)\psi_{a-}(2)$$


The maths is easy, but fiddly. Reflection in a plane containing the internuclear axis interchanges the $+$ and $-$ labels, and you can verify that this turns $\Psi$ into $-\Psi$, i.e. the term symbol carries a minus sign. The net projection of angular momentum onto the internuclear axis is zero, hence $\Sigma$. Lastly, the overall spatial wavefunction is symmetric with respect to interchange of the two particles (swap the labels 1 and 2). Consequently, it must be paired with the antisymmetric spin function, which is the singlet wavefunction $\alpha(1)\beta(2) - \beta(1)\alpha(2)$. This is the $^1\Sigma_\mathrm g^-$ state that you are looking for.
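
If you would rather not do the bookkeeping by hand, here is a small symbolic check (a sketch using sympy; the symbol names a_p1, b_m2, etc. are just my shorthand for $\psi_{a+}(1)$, $\psi_{b-}(2)$ and so on):

```python
# Symbolic check of the claims above: reflection (+ <-> -) gives Psi -> -Psi,
# particle exchange (1 <-> 2) gives Psi -> +Psi, and putting both electrons in the
# same pi_g pair (b -> a) makes Psi vanish.
import sympy as sp

ap1, am1, bp1, bm1, ap2, am2, bp2, bm2 = sp.symbols('a_p1 a_m1 b_p1 b_m1 a_p2 a_m2 b_p2 b_m2')

Psi = ap1*bm2 - am1*bp2 + bm1*ap2 - bp1*am2

reflect  = {ap1: am1, am1: ap1, bp1: bm1, bm1: bp1,
            ap2: am2, am2: ap2, bp2: bm2, bm2: bp2}   # swap + and - labels
exchange = {ap1: ap2, ap2: ap1, am1: am2, am2: am1,
            bp1: bp2, bp2: bp1, bm1: bm2, bm2: bm1}   # swap electrons 1 and 2
collapse = {bp1: ap1, bm1: am1, bp2: ap2, bm2: am2}   # put both electrons in orbital a

print(sp.expand(Psi.xreplace(reflect) + Psi))    # 0  =>  reflection gives -Psi
print(sp.expand(Psi.xreplace(exchange) - Psi))   # 0  =>  exchange gives +Psi
print(sp.expand(Psi.xreplace(collapse)))         # 0  =>  wavefunction vanishes
```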


Return for a while to the case where the two electrons were in the same $\pi_\mathrm g$ orbitals. If you replace all the $b$'s with $a$'s in the above wavefunction, it simply vanishes to zero: this wavefunction is no longer physically permissible if we are talking about two electrons in the same $\pi_\mathrm g$ orbitals.




This state would no doubt be ridiculously high in energy. It's certainly not listed in the NIST WebBook, which instead lists a state of symmetry $^1\Sigma_\mathrm u^-$, as opposed to $^1\Sigma_\mathrm g^-$ which I've been talking about. This is much easier to access. All you need to do is to promote one $1\pi_\mathrm u$ electron to the $1\pi_\mathrm g$ orbitals, which gets you to an electronic configuration of $(1\sigma_\mathrm g)^2(1\sigma_\mathrm u)^2(2\sigma_\mathrm g)^2(1\pi_\mathrm u)^3(1\pi_\mathrm g)^3$.


From a symmetry perspective, the $\pi$ orbitals with three electrons may be thought of as having one hole instead of three electrons. So, the allowed term symbols are simply given by


$$\Pi_\mathrm g \times \Pi_\mathrm u = \Sigma_\mathrm u^+ + \Sigma_\mathrm u^- + \Delta_\mathrm u$$



and again, since the holes are in different $\pi$-type orbitals, there is no need to account for the antisymmetrisation in square brackets. This is how you obtain the $^1\Sigma_\mathrm u^-$ state, which lies $33\,057~\mathrm{cm^{-1}}$ above the $^3\Sigma_\mathrm g^-$ ground state.


For dinitrogen you need to promote one electron from the $\pi_\mathrm u$ orbital to the empty antibonding $\pi_\mathrm g$ orbital, such that you have one $\pi_\mathrm u$ hole and one $\pi_\mathrm g$ electron. Again following the same formula you can obtain the $^1\Sigma_\mathrm u^-$ state.


What is the difference between solutions of the diffusion equation with an imaginary diffusion coefficient and the wave equation's?


The diffusion equation of the form:


$$ \frac{\partial u(x,t)}{\partial t} = D\frac{\partial ^2u(x,t)}{\partial x^2} $$


If one chooses a real value for $D$, the solutions are usually decaying with time.


However, in some situations in physics, most notably the time-dependent Schrödinger equation, one sees an equation of similar form to the diffusion equation, but with a complex diffusion coefficient, i.e $D=i\,D'$.


This causes the equation's solutions to oscillate with time instead of decaying, because


$$ \exp(-Dt)=\exp(-iD't) $$


This is why the Schrödinger equation has wave-like solutions, similar to those of the wave equation.



It seems like one could transform the diffusion equation into an equation that can replace the wave equation, since the solutions are the same.


This does not make much intuitive sense to me, so I think my understanding of the solutions of the wave and diffusion equation is not complete. What is the difference, if any, in the set of solutions of the diffusion equation with an imaginary diffusion coefficient and the wave equation's?



Answer



Both Schrödinger and Wave Equation have plane wave solutions, that's right. The difference is the dispersion relation, which is quadratic for the Schrödinger equation and linear for the wave equation. This is important, because the Schrödinger equation was designed to correctly reproduce the quadratic dispersion relation that was observed for electrons.


(You can show this by Fourier-transforming both equations in space and time and solving for $\omega(k)$.)
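
For concreteness, here is a quick sketch of that calculation (writing the wave equation as $\partial_t^2 u = c^2\,\partial_x^2 u$): inserting a plane wave $u(x,t)=e^{i(kx-\omega t)}$ into each equation gives

$$ -i\omega = D(ik)^2 \;\Rightarrow\; \omega = -iDk^2 \;\overset{D=iD'}{\longrightarrow}\; \omega = D'k^2 \qquad \text{(Schrödinger-type: quadratic)}, $$

$$ (-i\omega)^2 = c^2(ik)^2 \;\Rightarrow\; \omega^2 = c^2k^2 \;\Rightarrow\; \omega = \pm ck \qquad \text{(wave equation: linear)}. $$

So both equations admit plane-wave solutions, but the group velocity $\mathrm d\omega/\mathrm dk$ is $2D'k$ in the first case and the constant $c$ in the second, which is why wave packets spread differently in the two cases.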


cosmology - Universe expands ... compared to what?


I keep hearing that the universe expands. Since everything is relative, the expansion must be relative to something. And since the universe is all there is... you get my point.


The only points of reference are inside the universe itself: bonds forming, their lengths measured in wavelengths, and so on. The redshifting of light tells us that things are flying away. So our point of reference is only light. (Am I missing something?)



Here is my question. In a thought experiment, would it be valid to say that properties of light are somehow changing around us if we keep the size of the universe constant? If yes, what are the consequences of this line of thought? Would it give us any additional insight into what's going on?




Proof of conservation of energy?


How is it proved to be always true? It's a fundamental principle in physics that is based on all of our current observations of multiple systems in the universe, but is it always true for all systems? After all, we haven't tested or observed them all. Would it be possible to discover or create a system that leads to a different result?



How are we 100% sure that energy is always conserved? Finally, why did we conclude it's always conserved? What if a system keeps doing work over and over and over with time?



Answer




How is it proved to be always true? It's a fundamental principle in physics that is based on all of our current observations of multiple systems in the universe, but is it always true for all systems? After all, we haven't tested or observed them all. Could it be possible that we discover or create a system that leads to a different result?



A physical theory, and the postulates on which it is based, can only be validated: every experiment done so far shows that the theory holds, and in this case that energy conservation holds. A physical theory can be proven false by even a single datum, and then the theory changes. Example: classical mechanics fails at relativistic energies, where mass turns into energy and the classical energy is not conserved. A relativistic mechanics was developed in which a more generally defined energy is still conserved.



How are we 100% sure that energy is always conserved? Finally, why did we conclude it's always conserved?



We cannot be sure, as I said above. If we find even one case where the newer energy definition fails, then the postulate fails and new propositions will be studied. It has not failed up to now in our laboratory and observational experiments.



In any case, the framework is important: classical conservation of energy still holds for non-relativistic energies, for example.



In general relativity conservation of energy-momentum is expressed with the aid of a stress-energy-momentum pseudotensor. The theory of general relativity leaves open the question of whether there is a conservation of energy for the entire universe.



For cosmological matters see another entry in this forum on the law of conservation of energy.


Wednesday 22 November 2017

electrostatics - Electric potential vs potential difference


What is the difference between electric potential and potential difference? In our course book, they are given as separate topics but their definition is given the same.



Answer



What is the difference between "electric potential" and "potential difference"?




What is the difference between age and age difference?


If $\text{age}(\text{person})$ is the function such that $\text{age}(\text{you})$ is your age, $\text{age}(\text{mom})$ is your mom's age and $\text{age}(\text{dad})$ is your dad's age, then $\Delta\text{age}:=\text{age}(\text{dad})-\text{age}(\text{mom})$ is the age difference of your parents.





The electrical potential $\Phi$ refers to a quantity with some numeric value. It is usually dependent on space and time, $\Phi(\vec x,t)$, so it's a field where for every place and moment you get some number.


By potential difference $\Delta\Phi$ one denotes the difference between two such values taken at different positions. For example, $$\Delta\Phi:=\Phi(\vec x_2,t_0)-\Phi(\vec x_1,t_0)$$ is the potential difference of the field $\Phi(\vec x,t)$ for the two points $\vec x_2$ and $\vec x_1$ at the particular moment $t_0$.


So for example, if you have a one-dimensional capacitor with electrical potential $\Phi(l)$, with one plate at the position $l=0$ and the other at the position $l=L$, then the potential difference $\Delta\Phi$ for these two points is the number you compute via $\Phi(L)-\Phi(0)$.
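
A tiny numerical illustration of that example (my own toy numbers, assuming a uniform field $E_0$ between the plates so that $\Phi(l)=\Phi_0-E_0\,l$):

```python
# Only the potential *difference* is fixed by the physics; the overall offset Phi0
# (the choice of reference) drops out of Phi(L) - Phi(0).
E0 = 100.0   # assumed uniform field between the plates, in V/m
L = 0.01     # plate separation, in m

def Phi(l, Phi0=0.0):
    return Phi0 - E0 * l

print(Phi(L) - Phi(0.0))                        # -1.0 V
print(Phi(L, Phi0=5.0) - Phi(0.0, Phi0=5.0))    # still -1.0 V: the offset cancels
```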




I think your question might arise because only the potential difference is the physical quantity which determines the electric field and therefore the acceleration of charges. While the age difference between persons as well as the age of one person are both interesting quantities with practical value (Did I miss my mom's birthday? Am I allowed to drive? How many years will I get for homicide?), the value of the electric potential as such will ultimately only be used to compute potential differences.


homework and exercises - Preventing a block from sliding on a plane (with friction)


Assume a small square block $m$ is sitting on a larger wedge-shaped block of mass $M$ at an upper angle $\theta$ such that the little block will slide on the big block if both are started from rest and no other forces are present. The large block is sitting on a frictionless table. The coefficient of static friction between the large and small blocks is $\mu_s$. With what range of force $F$ can you push on the large block to the right such that the small block will remain motionless with respect to the large block and neither slide up nor down?


This question really is not too complicated without friction acting upon the small block (Preventing a block from sliding on a frictionless inclined plane)


But what happens when we add friction to the system? And why is the friction considered static if, at rest, the mass would slide down the plane?





astronomy - What objects/states of objects with absolute magnitude do we know of?


For measuring distances, knowledge of the absolute magnitude or luminosity is often crucial, especially for very large distances. Unfortunately, we can't measure the diameter of far-distant objects and derive the absolute magnitude from it, due to resolution limits.


That's why objects, or rather particular states in the life cycle of specific objects, like Type Ia supernovae, are so important.


What additional objects do we know of that share this property? Are there objects theoretically predicted to have a known absolute magnitude but not yet discovered? Please name the object and the spectral range of the emitted light or particles.



Answer



The jargon for what you are looking for is "standard candles": things whose luminosities we can determine without knowing their distance. They are of particular interest to astronomers because they can be used to measure distances.
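
As a concrete example of such a distance measurement (a sketch using the standard distance-modulus relation $m-M=5\log_{10}(d/10\,\mathrm{pc})$, not anything specific to this answer; the magnitudes below are illustrative numbers):

```python
# Given the absolute magnitude M of a standard candle and its measured apparent
# magnitude m, invert the distance modulus m - M = 5 log10(d / 10 pc) to get d.
def distance_pc(m_apparent, M_absolute):
    return 10.0 ** ((m_apparent - M_absolute) / 5.0 + 1.0)

# e.g. a Type Ia supernova with peak M of roughly -19.3, observed at m = 15.7
print(f"{distance_pc(15.7, -19.3):.2e} pc")   # 1.00e+08 pc, i.e. about 100 Mpc
```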


There are many such objects, but all of them should be treated with some caution. In no case is our knowledge of the luminosity perfect, and in many cases there is large intrinsic scatter. Generally, our knowledge is not of the form "all objects of type x have luminosity y", but more of the form "for objects of type x, the luminosity is correlated with parameters a, b, and c according to complicated equation foo." The physical origin of complicated equation foo is much better understood in some cases than in others, and in all cases needs to be empirically calibrated. Particularly if the physical origin of the correlation is poorly understood, we may not know if or how the calibration changes with the age of the universe. Because we see very distant objects as they were when the universe was younger, this limits our ability to use them as distance measurements to great distances.


In all cases one needs to be careful to take the redshift into account, as the part of an object's rest spectrum which, say, appears blue nearby may appear red or even IR when the same object is more distant. (See k-correction.) In many cases a range of wavelengths may be used (at least in the visual or IR), but the calibration may be different for different rest wavelengths. If you observe all objects through the same filter, you will be observing different objects at different rest wavelengths.


Here are some standard candles:





  • Cepheid variable stars (see 2000ApJS..128..431F) are very bright, and their luminosity is strongly correlated with their pulsation period, making them excellent standard candles.




  • RR Lyrae variable stars also follow such a relationship (2003LNP...635...85B), but are fainter.




  • Type Ia supernovae are very bright, and their peak luminosity can be estimated from their change in luminosity over time.





  • The tip of the red giant branch in the HR diagram (2000ApJS..128..431F) is one bright feature of the HR diagram that can be used. Blue supergiants have also been proposed as possible standard candles (see 2003LNP...635..123K).




  • The simple surface brightness of a galaxy is useless as a standard candle: the number of stars per square arcsecond rises as the distance squared, while the luminosity of an individual star falls as the distance squared, so the surface brightness is independent of distance. However, even in a galaxy where the stars are distributed according to some smooth function (as in an elliptical galaxy like M87), the surface brightness isn't perfectly smooth, because the stars are of finite brightness: the stars are randomly distributed according to the smooth function, and by chance some places have more stars than others. The roughness of the galaxy can therefore be used to measure the luminosity weighted mean luminosity of the stars in the galaxy, and this can be used as a standard candle of sorts. This is the "surface brightness fluctuation" (SBF) method of distance measurement, introduced in 1988AJ.....96..807T.




  • Large clusters of galaxies usually have a bright giant elliptical galaxy near the center. These are called "Brightest Cluster Galaxies" (BCGs). BCGs have a fairly consistent luminosity; see 1995ApJ...440...28P.




  • Planetary nebulae can have a wide range of luminosities, but there is a well defined upper limit to how bright they can be (see 1989ApJ...339...39J and associated articles). So, if you measure the number of planetary nebulae in a galaxy as a function of luminosity, the "planetary nebula luminosity function" (PNLF), the cutoff at the bright end can be used as a standard candle.





  • The peak of the globular cluster luminosity function (GCLF) seems to be consistent across different galaxies, so the luminosity at which there are the most globular clusters in a given galaxy can be used as a standard candle. The physical reason for this consistency is not well understood. See 2006AJ....132.2333S.




  • For spiral galaxies, there is a relationship between the rotation curve and the luminosity, the "Tully-Fisher" relation (1977A&A....54..661T). See also the Faber-Jackson relation (1976ApJ...204..668F) and the Fundamental plane for elliptical galaxies.




  • There may be a relationship between the radius of the broad-line region of an active galactic nucleus and its luminosity. See Watson et al. (2011).





electromagnetism - How many possible electromagnetic wavelengths are possible?


Disclaimer: Please keep in mind that I am a young highschool student with no background in physics, this research was done in the course of an hour, and my reasoning could very well be wrong.


After a short conversation about color, I did some thinking and read this question. The answer claims that there are $\infty^{\infty}$ colors.


My first problem is the definition of colors. I'm going to simplify this (although it's not the same thing) to electromagnetic frequencies; this is what I mean when I refer to color in the rest of this question.


The problem with $\infty^{\infty}$ colors is that the Planck length exists. This causes two problems for $\infty^{\infty}$. If $l_p$ is assumed to exist (as it is), this means the smallest distance in the universe is $l_p$. So all wavelengths of light can be divided into a set: the wavelengths produced by objects at $0$ kelvin up to $\infty$ kelvin. According to this logic, there are simply $\infty$ colors.


Things are complicated further when you introduce the Planck temperature ($P_t$). $P_t$ is the theoretical limit temperature of the universe (at least for the purposes of light and color), because an object hotter than $P_t$ would produce light with wavelengths shorter than $l_p$. This would introduce an upper limit on colors too.



Wouldn't this mean that the total number of possible colors is equal to however many Planck lengths of difference there are between an electromagnetic wave produced by a $0$-kelvin object and one produced by a $P_t$ object, and therefore far less than $\infty$? Does this reasoning make sense at all?



Answer



I will give a speculative answer - open to suggestions for improvement:


While mathematicians understand the difference between $\infty$ and $\infty^{\infty}$, I am not sure such distinction is terribly helpful for physicists - or how you would prove one versus the other. And if you are worrying about the distinction, you are not a typical high school student...


Frequency of a photon is an ill-defined property: in order to measure the frequency to an accuracy $\Delta \omega$, I need to measure for a length of time $t=\frac{1}{\Delta \omega}$. Since the universe has a finite age, it is simply not possible to determine (or define) the energy of any photon to greater precision than that - which effectively means that the number of distinct frequencies that exist in the EM spectrum (in the sense that they could be distinguished) is limited.
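
To put rough numbers on this (my own back-of-the-envelope estimate, combining the finite age of the universe with the Planck-scale cutoff mentioned in the question):

```python
# Frequency resolution limited by the age of the universe, plus a Planck-frequency
# upper cutoff, give a finite (if enormous) count of distinguishable frequencies.
t_universe = 13.8e9 * 3.156e7     # age of the universe in seconds (~4.4e17 s)
delta_nu = 1.0 / t_universe       # best achievable frequency resolution, ~2e-18 Hz
nu_planck = 1.0 / 5.39e-44        # Planck frequency 1/t_P, ~1.9e43 Hz (upper cutoff)

print(f"delta_nu ~ {delta_nu:.1e} Hz")
print(f"distinguishable frequencies ~ {nu_planck / delta_nu:.0e}")   # roughly 8e60
```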


To claim there are more possible frequencies is something that I don't believe could be proven... Note also that the question you linked (and the answers to it) touches on the fact that "color" as perceived is in principle any combination of wavelengths; and if you can have "any number of photons with any infinite number of wavelengths", you do indeed end up with $\infty^{\infty}$ combinations. But the question in your title is just about "possible wavelengths", and if we accept that a given photon has just one wavelength (one energy, within the bounds of uncertainty), then you are back to "countably infinitely many".


Understanding Stagnation point in pitot fluid

What is a stagnation point in fluid mechanics? At the open end of the Pitot tube the velocity of the fluid becomes zero. But that should result...