I want to assign an error to the standard deviation computed with a Monte Carlo error propagation method.
Let me explain in more detail.
If we have a random variable $x$ with mean value $x_0$ and standard deviation $\Delta x$, and a function $f(x)$, then to first order the mean value of $f(x)$ is $f(x_0)$, while the standard deviation of $f(x)$ can be computed through the first-order formula:
$$\sigma_{f}=\sqrt{\left(\left.\frac{\partial f}{\partial x}\right|_{x=x_0}\right)^{2}\Delta x^{2}}$$
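For instance, taking $f(x)=x^{2}$ as a simple illustrative choice (not part of the original question):

$$f(x)=x^{2}\quad\Rightarrow\quad \sigma_f = 2\,|x_0|\,\Delta x .$$

For this $f$ the exact mean of $f(x)$ is $x_0^{2}+\Delta x^{2}$, not $f(x_0)=x_0^{2}$: the neglected term grows with $(\Delta x/x_0)^{2}$, which is exactly the regime in which the first-order formula stops being reliable.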
But this formula is not really useful when $\Delta x / x_0$ is large or when $f$ is not linear around $x_0$. Hence, we can estimate the standard deviation of $f(x)$ with a Monte Carlo simulation: we generate $N$ random numbers with mean value $x_0$ and standard deviation $\Delta x$, and we apply the function $f(x)$ to each of them. Then we compute the standard deviation of $f(x)$ this way:
$$\sigma_f=\sqrt{\frac{\sum_{i=1}^{N}\bigl(f(x_{i})-f(x_0)\bigr)^{2}}{N}}$$
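A minimal sketch of this procedure in Python, assuming for concreteness that $x$ is Gaussian and taking $x_0 = 10$, $\Delta x = 2$ and $f(x)=x^{2}$ as purely illustrative choices:

```python
import numpy as np

def mc_sigma_f(f, x0, dx, N, rng):
    """Monte Carlo estimate of the standard deviation of f(x),
    using the spread of f(x_i) around f(x0) as in the formula above."""
    x = rng.normal(x0, dx, size=N)           # N samples with mean x0 and std dx
    return np.sqrt(np.mean((f(x) - f(x0)) ** 2))

rng = np.random.default_rng(0)
f = lambda x: x**2                           # illustrative nonlinear function
x0, dx, N = 10.0, 2.0, 100_000

sigma_mc = mc_sigma_f(f, x0, dx, N, rng)
sigma_lin = abs(2 * x0) * dx                 # first-order formula for f(x) = x^2
print(sigma_mc, sigma_lin)                   # the two differ because f is nonlinear
```

For this nonlinear $f$ the Monte Carlo value comes out slightly larger than the first-order one, as expected.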
Now $\sigma_f$ varies from one Monte Carlo simulation to another, and it becomes more accurate as $N$ grows.
My question is:
How can I evaluate the error on $\sigma_f$?
I expect this error, which is the standard deviation of $\sigma_f$ itself, to depend on $N$, on the ratio $x_0/\Delta x$, and on the function $f$.
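To make the question concrete, the fluctuation I am asking about can be observed empirically by simply repeating the whole simulation many times and looking at the spread of the resulting $\sigma_f$ values (a sketch with the same illustrative choices as above); what I would like is a way to estimate this spread without such brute-force repetition:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x**2                      # same illustrative choices as above
x0, dx, N = 10.0, 2.0, 100_000

def mc_sigma_f():
    x = rng.normal(x0, dx, size=N)
    return np.sqrt(np.mean((f(x) - f(x0)) ** 2))

# Repeat the whole propagation many times and see how much sigma_f
# fluctuates from run to run; this spread is the "error on sigma_f".
sigmas = np.array([mc_sigma_f() for _ in range(200)])
print(sigmas.mean(), sigmas.std())
```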