Monday, 17 August 2015

Do individual rays of light lose energy via the inverse square law?


We've all heard of the inverse square law, but apparently that refers to the flux or intensity or number of photons hitting an imaginary surface area. This is not exactly what I want to ask about.


I'm asking about the energy level of an individual photon, or individual ray of light. The light could be any frequency or wavelength.


If I'm not mistaken, the amplitude of the light ray is directly related to its energy.


Consider a ray of light travelling through a vacuum. Does its energy decrease over time according to inverse square law? Maybe inverse linear? Or not at all?


This question comes from a related comment. If it's true that the energy doesn't diminish at all, then that would mean a laser beam shot into space (and never hits anything) would never lose any energy and arrive at the other side of the Universe with the same energy it started with. That's kinda hard to believe.


EDIT: It appears "energy" and "amplitude" are not synonyms for light rays. Apparently the concept of waves is tricky for light, since after all, a vacuum has no medium to propagate through in the normal wave sense. So let the question be, "Does an individual photon lose energy solely from the act of traveling through a vacuum?"



Answer




SUMMARY:


This is a very good question. In a lossless medium, fundamentally the answer to your question is "no, an individual ray does not lose energy in propagating" because it represents a plane wave (in photon language, a momentum eigenstate), whose intensity does not vary as it propagates. Intensity information is encoded in the flux density of rays through the target surface in a raytracing simulation. You can't see intensity information in a lone ray, because this information is encoded in the relationship between a ray and its neighbors, i.e. by a notion of how much a tube of rays swells and shrinks laterally as it propagates.
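To make this concrete, here is a minimal Python sketch (my own illustration, not taken from any raytracing package) of the second statement: each ray carries unchanged energy, yet a fixed-size detector catches about four times fewer rays when moved to twice the distance from a point source, which is exactly the inverse square law emerging from flux density alone:

```python
import math
import random

def count_hits(n_rays, detector_area, distance, seed=1):
    """Fire n_rays from a point source in uniformly random directions and
    count how many pass through a small detector of fixed area placed at
    the given distance (modelled as a spherical cap on the +z axis whose
    solid angle is detector_area / distance**2, a small-patch approximation)."""
    rng = random.Random(seed)
    solid_angle = detector_area / distance**2
    cos_min = 1.0 - solid_angle / (2.0 * math.pi)  # cap boundary
    hits = 0
    for _ in range(n_rays):
        # for uniform directions on a sphere, cos(theta) is uniform in [-1, 1]
        if rng.uniform(-1.0, 1.0) >= cos_min:
            hits += 1
    return hits

# Same detector, twice the distance: about a quarter as many rays hit it,
# even though every individual ray carries unchanged energy.
near = count_hits(2_000_000, 0.01, 1.0)
far = count_hits(2_000_000, 0.01, 2.0)
print(near / far)   # ~4, up to Monte Carlo noise
```

No single ray "knows" about the inverse square law here; the $1/r^2$ falloff lives entirely in the counting.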


With these two statements, you should be able to see the difference between the laser case and the diverging wave case.


But this statement must be qualified in practice according to exactly how you interpret the notion of a "ray". In particular, let's look at the various conceptions of rays in a software implementation.




LOCALIZED RAYS


A localized ray is an approximate abstraction representing light when the Eikonal equation holds (the slowly varying envelope approximation), and we must make our abstraction yield the right answers in calculations and in answers to physical questions. The answer depends, therefore, on the application.


Mostly a ray is a unit normal to a phasefront, and tracing rays simply lets us visualize phase fronts; we see where they converge near focusses and so forth. No amplitude information is needed here.


Now we get to more sophisticated calculations, where we try to answer questions about the intensity and phase of the local light field from traced rays. How you encode amplitude data in rays depends on how you combine your rays to get this intensity and phase information. Note that we can only ask for intensity / phase information from individual localized rays in regions where the slowly varying envelope approximation holds. This approximation therefore rules out the naive use of rays to find phase and amplitude information about fields near focusses, for example, where the amplitude varies swiftly over a few wavelengths. The contribution to the field there is from many rays at once. There is a way around this difficulty in software, so read on to find out how this actually comes about through the right notion of addition of ray contributions.


TRUE RAYS



Most fundamentally, the conception of a ray that involves no approximation is as the definition of a plane wave: the ray does this by being a unit normal to a planar wavefront. So, suppose we assign a complex amplitude to our ray to represent intensity and phase: the magnitude of this quantity does not change with propagation; only the phase does. We can even assign two complex amplitudes to account for polarization. The entity propagates by multiplying the complex quantities by $\exp(i\,\vec{k}\cdot\vec{r})$. Here $\vec{k}$ is the wavevector, and, strictly speaking, it is the classic example of a one-form (covector, or covariant vector) rather than a vector: a linear map $\mathbb{R}^3\to\mathbb{R}$ that takes as input the displacement $\vec{r}$ and returns how many phasefronts a displacement in this direction and of this magnitude pierces. It is really helpful to keep this fundamental geometry in mind when thinking of what a ray really stands for: displacements standing for how we move about in space, and parallel stacks of phasefronts pierced by the former as we do so (see reference [1]).
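As a minimal sketch of this (the function name and numbers are my own, purely illustrative), propagating a true ray just multiplies its complex amplitude by a pure phase; the magnitude, and hence the energy, never changes no matter how far it travels:

```python
import cmath
import math

def propagate(amplitude, k, displacement):
    """Advance a plane wave's complex amplitude along a displacement.
    The dot product k . r counts (in radians) how many phasefronts the
    displacement pierces; only the phase turns, never the magnitude."""
    phase = sum(ki * ri for ki, ri in zip(k, displacement))
    return amplitude * cmath.exp(1j * phase)

wavelength = 0.5e-6                        # 500 nm, for illustration
k = (0.0, 0.0, 2 * math.pi / wavelength)   # plane wave travelling along +z
a0 = 1.0 + 0.0j

a1 = propagate(a0, k, (0.0, 0.0, 10.0))    # ten metres of vacuum
print(round(abs(a1), 12))                  # 1.0 -- magnitude unchanged
```

This is the sense in which the answer to the original question is "no": for a true ray, propagation through a lossless vacuum is a pure phase rotation.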


I'll call this entity a "true ray", and it behaves a little differently from rays in most raytracing software. In particular, since it stands for a plane wave, it can be slid anywhere on the planar phasefront and encode exactly the same plane wave. So suppose we have a bunch of these rays converging in a raytracing simulation to an imperfect focus, and we wish to know the field phases and amplitudes at the point $P$, somewhere near the focus:


Imperfect Focus


Since any ray can be slid anywhere orthogonally to itself along its tail, we slide all the rays as shown:


Slid Rays


then propagate them to the point $P$ and tally up all the field vector components implied by the propagated polarization complex amplitudes. Note that, in theory, this works for any point $P$: if the rays truly represented plane waves, and if you did this rigorously, calculating the plane wave decomposition of any source, this ray combination technique would be equivalent to solving the Helmholtz equation by Fourier analysis, so it is time-consuming. In practice, furthermore, rays in simulations are localized rays: they stand for fields that are well approximated by plane waves only in a small neighborhood. So in most simulations you can only safely slide rays in this way ten microns or so (tens of wavelengths, say). This is good enough if you propagate all your rays to a spherical surface centered near a focus: a sideways slide of ten microns of all the rays lets you compute the field vector amplitudes well enough to get a good picture of most point spread functions.
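A toy version of this tally (two-dimensional, scalar amplitudes, my own illustrative names) slides each ray's phase reference to the target point along its own direction and sums the complex contributions; two converging rays then reproduce the expected interference fringes near their crossing:

```python
import cmath
import math

def field_at(point, rays, wavelength):
    """Sum the complex plane-wave contributions of 'true rays' at a point.
    Each ray is (anchor, direction, amplitude); since sliding a true ray
    sideways changes nothing, only the along-ray distance from its phase
    reference (anchor) to the point matters."""
    k = 2 * math.pi / wavelength
    total = 0j
    for anchor, direction, amp in rays:
        # signed distance along the ray from its anchor to the
        # phasefront passing through 'point'
        path = sum(d * (p - a) for d, p, a in zip(direction, point, anchor))
        total += amp * cmath.exp(1j * k * path)
    return total

# Two equal-amplitude rays converging at +/- 5 degrees to the y axis.
wl = 0.5e-6
s, c = math.sin(math.radians(5)), math.cos(math.radians(5))
rays = [((0.0, 0.0), ( s, c), 1.0 + 0j),
        ((0.0, 0.0), (-s, c), 1.0 + 0j)]

on_axis = abs(field_at((0.0, 1e-3), rays, wl))            # in phase: ~2
off_axis = abs(field_at((wl / (4 * s), 1e-3), rays, wl))  # half a fringe: ~0
```

Note what this shows: each ray's contribution keeps unit magnitude for ever; the intensity structure comes only from how the contributions add.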


THE INTERMEDIATE CASE


By now it should be fairly clear what is going on: rays encode locally plane waves; they can propagate phase information, but intensity information is encoded by the flux density of rays. Near focusses, we need full Fourier analysis to extract the implied amplitude and phase distributions, as above. But away from focusses there is a good intermediate notion that lets you calculate intensities, relieves you of the need to propagate millions of rays to work out flux densities accurately, and also yields amplitude and phase distributions on spherical surfaces centered on focusses, so that you can make a good approximation to the Fourier analysis above.

This is an object which I call a "ray tubelet", and it comprises a triplet of localized rays. The triplet begins from a divergence point (point source), and each ray keeps track of its phase delay as it propagates, whilst the divergence between the triplet of rays can be used to extract the intensity information. Suppose we wish to calculate the light intensity at a point within the tubelet. This intensity varies inversely with the area of the triangle defined by the three intersections of the tubelet rays with the least squares best fit to a surface that is orthogonal to all three and passes through the point in question (to be truly orthogonal to all three is impossible unless they are parallel, which is why we use the least squares best fit). We define the tubelet's position, after applying the same propagation operations to all three members, as the mean of the ray head positions. The area in question is then half the magnitude of the cross product of any pair of difference vectors between the three head positions.
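The tubelet area calculation above can be sketched in a few lines (illustrative names and numbers of my own, not any particular package's API):

```python
import math

def tubelet_intensity(heads, power=1.0):
    """Intensity at a ray tubelet's position: the power carried inside the
    tubelet divided by the area of the triangle spanned by the three ray
    head positions (intensity varies as 1 / area)."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = heads
    u = (bx - ax, by - ay, bz - az)          # two edge vectors of the triangle
    v = (cx - ax, cy - ay, cz - az)
    cross = (u[1] * v[2] - u[2] * v[1],      # |u x v| is twice the area
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    area = 0.5 * math.sqrt(sum(w * w for w in cross))
    return power / area

# A tubelet diverging from a point source at the origin: at twice the
# distance every triangle edge doubles, the area quadruples, and the
# intensity falls by four -- the inverse square law again, recovered
# from just three rays instead of millions.
near = tubelet_intensity([(0.0, 0.0, 1.0), (0.01, 0.0, 1.0), (0.0, 0.01, 1.0)])
far = tubelet_intensity([(0.0, 0.0, 2.0), (0.02, 0.0, 2.0), (0.0, 0.02, 2.0)])
print(near / far)   # 4.0
```

This is the practical payoff of the tubelet: the swelling of the triangle does the bookkeeping that would otherwise require a dense bundle of rays.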


Another way to tackle this problem is to propagate wavefront curvature information as well as amplitude with each ray. In effect, you are decomposing a wavefront into a great number of Gaussian beams, propagating them through a system and then summing their contributions at the end of the simulation.





[1]: Two of the best descriptions of one-forms for physicists are in chapter 1 of Misner, Thorne and Wheeler, "Gravitation", and in Bernard Schutz, "A First Course in General Relativity".

