Let's say I have an experimental uncertainty of ±0.03134087786 and I perform many uncertainty calculations using this value. Should I round the uncertainty to 1 significant figure at this stage or leave it unrounded until my final answer?
Answer
tl;dr: No. Rounding numbers introduces quantization error and should be avoided except in cases where it's known not to cause problems, e.g. in short calculations or quick estimations.
Rounding values introduces quantization error (e.g., round-off error). Controlling for the harmful effects of quantization error is a major topic in some fields, e.g. computational fluid dynamics (CFD), since it can cause problems like numerical instability. However, if you're just doing a quick estimation with a calculator or for a quick lab experiment, quantization error can be acceptable. But, to stress the point: it's merely acceptable in some cases; it's never something we actually want.
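As a concrete illustration (the numbers below are made up, not from any real experiment), here's a minimal Python sketch of how rounding a single intermediate value shifts a later result:

```python
# Minimal sketch: rounding an intermediate value and then continuing the
# calculation changes the final answer. All numbers are illustrative.

x = 2.0 / 3.0                    # intermediate value: 0.666666...
x_rounded = round(x, 1)          # quantized intermediate: 0.7

final_exact = x * 300            # 200.0
final_rounded = x_rounded * 300  # 210.0: the quantization error got amplified

print(final_exact, final_rounded)
print(abs(final_rounded - final_exact) / final_exact)  # 0.05, i.e. 5% relative error from one rounding
```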
This can be confusing because many intro-level classes teach the method of significant figures, which calls for rounding, as a basic way of tracking uncertainty. And in non-technical fields, there's often a rule that estimated values should be rounded, e.g. a best guess of "103 days" might be stated as "100 days". In both cases, the concern is that a reader might take the apparent precision of an estimate to imply a certainty that doesn't exist.
Such problems are purely communication issues; the math itself isn't served by the rounding. For example, if a best guess is truly "103 days", then presumably it'd be best to actually use that number rather than arbitrarily biasing it; sure, we might want to adjust an estimate up or down for other reasons, but making an intermediate value look pretty doesn't make any sense.
Getting digits back after rounding
Publications often round heavily for largely cosmetic reasons. Sometimes these rounded values reflect an approximate level of precision; in other cases, they're selected almost arbitrarily to look pretty.
While these cosmetic reasons might make sense in a publication, if you're doing sensitive work based on another author's reported values, it can make sense to email them to request the additional digits and/or a finer quantification of their precision.
For example, if another researcher measures a value as $1.235237$ and then publishes it as $1.2$ because their uncertainty is on the order of $0.1$, then presumably the best guess one can make is that the "real" value is distributed around $1.235237$; using $1.2$ on the basis of it looking pretty doesn't make any sense.
Uncertainties aren't special values
The above explanations apply not just to a base measurement, but also to the measurement's uncertainty. The math doesn't distinguish between them.
So, for grammatical reasons, it's common to write up an uncertainty like ${\pm}0.03134087786$ as ${\pm}0.03$; however, no one should be using ${\pm}0.03$ in any of their calculations unless they're just doing a quick estimate or otherwise aren't too concerned with accuracy.
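To make the difference concrete, here's a minimal Python sketch using the uncertainty from the question; the choice of combining 100 independent measurements is an arbitrary assumption for illustration:

```python
from math import sqrt

sigma_full = 0.03134087786    # uncertainty carried at full precision
sigma_rounded = 0.03          # the same uncertainty rounded to 1 significant figure

n = 100                       # illustrative: combining 100 independent measurements
combined_full = sqrt(n) * sigma_full      # uncertainty of a sum of n measurements: sqrt(n) * sigma
combined_rounded = sqrt(n) * sigma_rounded

print(combined_full)          # ~0.3134
print(combined_rounded)       # ~0.3
print(abs(combined_full - combined_rounded) / combined_full)  # ~0.043, i.e. about 4% error from the rounding alone
```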
In summary, no, intermediate values shouldn't be rounded. Rounding is best understood as a grammatical convention to make writing look pretty rather than being a mathematical tool.
Examples of places in which rounding is problematic
A general phenomenon is loss of significance:
Loss of significance is an undesirable effect in calculations using finite-precision arithmetic such as floating-point arithmetic. It occurs when an operation on two numbers increases relative error substantially more than it increases absolute error, for example in subtracting two nearly equal numbers (known as catastrophic cancellation). The effect is that the number of significant digits in the result is reduced unacceptably. Ways to avoid this effect are studied in numerical analysis.
–"Loss of significance", Wikipedia
The obvious workaround is then to increase precision when possible:
Workarounds
It is possible to do computations using an exact fractional representation of rational numbers and keep all significant digits, but this is often prohibitively slower than floating-point arithmetic. Furthermore, it usually only postpones the problem: What if the data are accurate to only ten digits? The same effect will occur.
One of the most important parts of numerical analysis is to avoid or minimize loss of significance in calculations. If the underlying problem is well-posed, there should be a stable algorithm for solving it.
–"Loss of significance", Wikipedia
A specific example is Gaussian elimination, which is prone to precision-related problems:
One possible problem is numerical instability, caused by the possibility of dividing by very small numbers. If, for example, the leading coefficient of one of the rows is very close to zero, then to row reduce the matrix one would need to divide by that number so the leading coefficient is 1. This means any error that existed for the number which was close to zero would be amplified. Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable, when using partial pivoting, even though there are examples of stable matrices for which it is unstable.
–"Gaussian elimination", Wikipedia [references omitted]
Besides simply increasing all of the values' precision, another workaround technique is pivoting:
Partial and complete pivoting
In partial pivoting, the algorithm selects the entry with largest absolute value from the column of the matrix that is currently being considered as the pivot element. Partial pivoting is generally sufficient to adequately reduce round-off error. However, for certain systems and algorithms, complete pivoting (or maximal pivoting) may be required for acceptable accuracy.
–"Pivot element", Wikipedia