Monday, 24 November 2014

experimental physics - Are random errors necessarily Gaussian?


I have seen random errors defined as those which average to zero as the number of measurements goes to infinity, with the error equally likely to be positive or negative. This only requires a probability distribution that is symmetric about zero. However, typing this question into Google, I did not find a single source suggesting that random errors could be anything other than Gaussian. Why must random errors be Gaussian?



Answer




Are random errors necessarily Gaussian?



Errors are very often Gaussian, but not always. Here are some physical systems where the random fluctuations (or "errors", if you are in a context where the varying quantity constitutes an error) are not Gaussian; a short sampling sketch follows the list:





  1. The distribution of times between clicks in a photodetector exposed to light is an exponential distribution.$^{[a]}$




  2. The number of times a photodetector clicks in a fixed period of time is a Poisson distribution.




  3. The position offset of a light beam hitting a target some distance away, when the angle error is uniformly distributed, follows a Cauchy distribution.
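
Here is a minimal sampling sketch of the three cases above, assuming NumPy; the rate, mean count, and target distance are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# 1. Times between photodetector clicks: exponential (rate chosen arbitrarily).
dt = rng.exponential(scale=1.0, size=n)

# 2. Number of clicks in a fixed window: Poisson (mean count chosen arbitrarily).
counts = rng.poisson(lam=3.0, size=n)

# 3. Position offset on a target a distance L away, for a uniformly
#    distributed angle error: offset = L * tan(theta) is Cauchy distributed.
L = 1.0  # distance to the target, arbitrary units
theta = rng.uniform(-np.pi / 2, np.pi / 2, size=n)
offset = L * np.tan(theta)

# Sample skewness: close to 0 for a Gaussian sample of this size, clearly not here.
# (For the Cauchy case the sample moments do not even converge, so the printed
# number is erratic -- itself a sign of heavy-tailed, very non-Gaussian behaviour.)
for name, x in [("exponential", dt), ("Poisson", counts), ("Cauchy offset", offset)]:
    z = (x - x.mean()) / x.std()
    print(f"{name:13s} sample skewness = {np.mean(z**3):+.2f}")
```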






I have seen random errors defined as those which average to zero as the number of measurements goes to infinity, with the error equally likely to be positive or negative. This only requires a probability distribution that is symmetric about zero.



There are distributions that have equal weight on the positive and negative side, but are not symmetric. Example: $$ P(x) = \left\{ \begin{array}{ll} 1/2 & x=1 \\ 1/4 & x=-1 \\ 1/4 & x=-2 \, . \end{array}\right.$$
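
A quick check of this example (plain Python; the dictionary simply encodes the probabilities above):

```python
# Probabilities of the discrete example above.
P = {1: 0.50, -1: 0.25, -2: 0.25}

p_positive = sum(p for x, p in P.items() if x > 0)
p_negative = sum(p for x, p in P.items() if x < 0)
print(p_positive, p_negative)  # 0.5 0.5 -> positive and negative errors equally likely
print(P[1], P[-1])             # 0.5 0.25 -> P(x) != P(-x), so not symmetric about zero
```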



However, typing this question into Google, I did not find a single source suggesting that random errors could be anything other than Gaussian. Why must random errors be Gaussian?



The fact that it's not easy to find references to non-Gaussian random errors does not mean that all random errors are Gaussian :-)


As mentioned in the other answers, many distributions in Nature are Gaussian because of the central limit theorem. The central limit theorem says that, given a random variable $x$ distributed according to a function $X(x)$ with finite second moment, the random variable $y$ defined as the average of $N$ instances of $x$, i.e. $$y \equiv \frac{1}{N} \sum_{i=1}^N x_i \, ,$$ has a distribution $Y(y)$ that approaches a Gaussian as $N$ becomes large.
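
A minimal numerical illustration of this statement, assuming NumPy; the exponential is just a convenient, clearly non-Gaussian choice for the single-measurement distribution $X(x)$:

```python
import numpy as np

rng = np.random.default_rng(1)
repeats = 50_000  # how many times we form the average y

# Single-measurement errors x: exponential, shifted to zero mean.
# The distribution is strongly skewed but has finite variance,
# so the central limit theorem applies to its averages.
def single_errors(shape):
    return rng.exponential(scale=1.0, size=shape) - 1.0

for N in (1, 10, 100):
    y = single_errors((repeats, N)).mean(axis=1)   # y = (1/N) * sum of N errors
    z = (y - y.mean()) / y.std()
    skew = np.mean(z**3)                   # 0 for a Gaussian
    excess_kurtosis = np.mean(z**4) - 3.0  # 0 for a Gaussian
    print(f"N = {N:4d}: skewness = {skew:+.3f}, excess kurtosis = {excess_kurtosis:+.3f}")
```

As $N$ grows, both statistics shrink toward the Gaussian values of zero, which is the central limit theorem at work.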



The thing is, many physical processes are sums of smaller processes. For example, the fluctuating voltage across a resistor is the sum of the voltage contributions from many individual electrons. Therefore, when you measure a voltage, you get the underlying "static" value plus some random error produced by the noisy electrons, which, because of the central limit theorem, is Gaussian distributed. In other words, Gaussian distributions are very common because so many of the random things in Nature come from a sum of many small contributions.


However,




  1. There are plenty of cases where the constituents of an underlying error mechanism have a distribution that does not have a finite second moment; the Cauchy distribution is the most common example.




  2. There are also plenty of cases where an error is simply not the sum of many small underlying contributions.





Either of these cases leads to non-Gaussian errors; the sketch below illustrates the first.
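
A sketch of the first case, assuming NumPy: the average of $N$ independent standard Cauchy samples is itself standard Cauchy, so averaging never produces a Gaussian, no matter how large $N$ gets.

```python
import numpy as np

rng = np.random.default_rng(2)
repeats = 50_000

for N in (1, 10, 100):
    # Average N independent standard Cauchy "errors", many times over.
    y = rng.standard_cauchy(size=(repeats, N)).mean(axis=1)
    # The Cauchy variance is infinite, so quote the interquartile range instead.
    iqr = np.percentile(y, 75) - np.percentile(y, 25)
    tail = np.mean(np.abs(y) > 10.0)   # heavy tails persist for every N
    print(f"N = {N:4d}: IQR = {iqr:.2f}, fraction with |average| > 10 = {tail:.3f}")
```

Unlike the finite-variance case, the spread of the average does not shrink like $1/\sqrt{N}$, and the heavy tails never go away.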


$[a]$: See this other Stack Exchange post.

