Monday, 16 April 2018

experimental physics - Why do we divide the standard deviation by $\sqrt{n}$?



I've been studying experimental physics from the book "The Art of Experimental Physics", and in the chapter on error analysis there is something that has been bothering me. The author says:



Now that we have determined the "best value" for the measurement, that is, $\bar{x}$, we need to estimate the uncertainty or error in this value. We start by defining one way in which the spread of the data about the mean value can be characterized.


The standard deviation $s$ is defined as


$$s = \sqrt{\dfrac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2}$$


If the standard deviation is small then the spread in the measured values about the mean is small; hence, the precision in the measurement is high. Note that the standard deviation is always positive and that it has the same units as the measured values.


The error or uncertainty in the mean value, $\bar{x}$, is the standard deviation of the mean, $s_m$, which is defined to be



$$s_m = \dfrac{s}{n^{1/2}}$$


where $s$ is the standard deviation and $n$ is the total number of measurements.


The result to be reported is then


$$\bar{x}\pm s_m.$$
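To make the quoted recipe concrete, here is a minimal Python sketch that computes $\bar{x}$, $s$, and $s_m$ for a made-up set of repeated measurements (the numbers are purely illustrative and are not from the book):

```python
import math

# Hypothetical repeated measurements of the same quantity (illustrative values only)
x = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]
n = len(x)

# Best value: the mean
x_bar = sum(x) / n

# Standard deviation s with the (n - 1) divisor, as in the quoted definition
s = math.sqrt(sum((xi - x_bar) ** 2 for xi in x) / (n - 1))

# Standard deviation of the mean: s_m = s / sqrt(n)
s_m = s / math.sqrt(n)

# Result to be reported: x_bar +/- s_m
print(f"result: {x_bar:.3f} +/- {s_m:.3f}")
```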



Now, why, in order to get the error on the measured quantity, must we divide the standard deviation by $\sqrt{n}$ instead of just using the standard deviation itself?


Why are things done this way?
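As an illustrative numerical check of the $1/\sqrt{n}$ behaviour that the quoted formula asserts, one can simulate many samples of size $n$ and look at how much the sample mean itself scatters from sample to sample. The sketch below uses assumed, made-up parameters (`true_mean`, `true_sigma`):

```python
import random
import statistics

# Monte Carlo check (illustrative, not from the book): draw many samples of size n
# from the same "true" distribution and measure the scatter of the sample means.
random.seed(0)
true_mean, true_sigma = 10.0, 0.5

for n in (4, 16, 64):
    means = [
        statistics.mean(random.gauss(true_mean, true_sigma) for _ in range(n))
        for _ in range(10000)
    ]
    # Observed scatter of the means vs. the predicted true_sigma / sqrt(n)
    print(n, round(statistics.stdev(means), 3), round(true_sigma / n ** 0.5, 3))
```

For each $n$, the observed scatter of the sample means comes out close to $\sigma/\sqrt{n}$ rather than $\sigma$, which is the behaviour the quoted formula for $s_m$ describes.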



