Error analysis (mathematics)
In mathematics, error analysis is the study of the kind and quantity of error, or uncertainty, that may be present in the solution to a problem. This issue is particularly prominent in applied areas such as numerical analysis and statistics.
Error analysis in numerical modeling
In numerical simulation or modeling of real systems, error analysis is concerned with the changes in the output of the model as the parameters to the model vary about a mean.
For instance, consider a system modeled as a function of two variables $z = f(x, y)$. Error analysis deals with the propagation of the numerical errors in $x$ and $y$ (around mean values $\bar{x}$ and $\bar{y}$) to error in $z$ (around a mean $\bar{z}$).[1]
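As an illustrative sketch of this propagation (not taken from the cited source), the first-order formula $\sigma_z^2 \approx (\partial f/\partial x)^2 \sigma_x^2 + (\partial f/\partial y)^2 \sigma_y^2$ can be evaluated numerically; the function and input uncertainties below are hypothetical:

```python
import math

def propagate_error(f, x, y, sx, sy, h=1e-6):
    """First-order propagation of uncertainty for z = f(x, y):
    sigma_z^2 ~= (df/dx)^2 * sx^2 + (df/dy)^2 * sy^2,
    with the partial derivatives taken by central finite differences."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return math.sqrt((dfdx * sx) ** 2 + (dfdy * sy) ** 2)

# Hypothetical example: z = x * y with x = 2.0 +/- 0.1 and y = 3.0 +/- 0.2.
# The linear formula gives sqrt((y*sx)^2 + (x*sy)^2) = sqrt(0.09 + 0.16) = 0.5.
sz = propagate_error(lambda x, y: x * y, 2.0, 3.0, 0.1, 0.2)
```

This linearization is accurate only when the uncertainties are small relative to the curvature of $f$.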
In numerical analysis, error analysis comprises both forward error analysis and backward error analysis.
Forward error analysis
Forward error analysis involves the analysis of a function $z' = f'(a_0, a_1, \dots, a_n)$ which is an approximation (usually a finite polynomial) to a function $z = f(a_0, a_1, \dots, a_n)$ to determine the bounds on the error in the approximation; i.e., to find $\varepsilon$ such that $0 \le |z - z'| \le \varepsilon$. The evaluation of forward errors is desired in validated numerics.[2]
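A concrete sketch of a forward error analysis (the degree and interval here are arbitrary illustrative choices): approximate $e^x$ on $[0, 1]$ by its degree-$n$ Taylor polynomial and compare the worst observed error against the Lagrange remainder bound $e/(n+1)!$:

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of exp about 0: sum of x**k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Forward error: the worst error of the degree-6 polynomial on a grid of
# [0, 1], versus the Lagrange remainder bound e / (n + 1)!.
n = 6
observed = max(abs(math.exp(x / 100) - taylor_exp(x / 100, n))
               for x in range(101))
bound = math.e / math.factorial(n + 1)  # the analysis guarantees observed <= bound
```

Here `bound` plays the role of $\varepsilon$: a rigorously derived cap on $|z - z'|$ over the whole interval.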
Backward error analysis
Backward error analysis involves the analysis of the approximation function $z' = f'(a_0, a_1, \dots, a_n)$, to determine the bounds on the parameters $a_i^* = a_i + \varepsilon_i$ such that the result $z' = f(a_0^*, a_1^*, \dots, a_n^*)$.[3]
Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable.[4] The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as backward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem.[5]
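As a hedged illustration of this viewpoint, the Rigal–Gaches formula gives the normwise relative backward error of a candidate solution $x$ to $Ax = b$: the size of the smallest perturbations of $A$ and $b$ for which $x$ is an exact solution. The 2×2 system below is invented for the example:

```python
def backward_error(A, b, x):
    """Normwise relative backward error (Rigal-Gaches) of a candidate
    solution x for A @ x = b, using infinity norms. A small value means
    x exactly solves a nearby problem (A + dA) x = b + db with
    ||dA|| <= eta * ||A|| and ||db|| <= eta * ||b||."""
    n = len(b)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    norm_r = max(abs(v) for v in r)
    norm_A = max(sum(abs(a) for a in row) for row in A)
    norm_x = max(abs(v) for v in x)
    norm_b = max(abs(v) for v in b)
    return norm_r / (norm_A * norm_x + norm_b)

# Hypothetical 2x2 system with exact solution x = (0.8, 1.4).
A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
eta_good = backward_error(A, b, [0.8, 1.4])  # tiny: solves a "nearby" problem
eta_bad = backward_error(A, b, [0.7, 1.5])   # large: no nearby problem fits
```

A backward stable solver is one whose computed $x$ always yields an `eta` on the order of the unit roundoff, regardless of how ill-conditioned $A$ is.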
Applications
Global positioning system
The analysis of errors in positions computed using the Global Positioning System (GPS) is important both for understanding how GPS works and for knowing what magnitude of error to expect. GPS makes corrections for receiver clock errors and other effects, but residual errors remain that are not corrected. GPS was created by the United States Department of Defense (DOD) in the 1970s and has come to be widely used for navigation both by the U.S. military and the general public.
Molecular dynamics simulation
In molecular dynamics (MD) simulations, errors arise from inadequate sampling of the phase space and from infrequently occurring events; these lead to statistical error due to random fluctuation in the measurements.
For a series of $M$ measurements of a fluctuating property $A$, the mean value is:

$$\langle A \rangle = \frac{1}{M} \sum_{\mu=1}^{M} A_\mu$$
When these $M$ measurements are independent, the variance of the mean $\langle A \rangle$ is:

$$\sigma^2(\langle A \rangle) = \frac{\sigma^2(A)}{M}$$
but in most MD simulations the values of $A$ at different times are correlated, so the variance of the mean $\langle A \rangle$ will be underestimated, because the effective number of independent measurements is actually less than $M$. In such situations we rewrite the variance as:

$$\sigma^2(\langle A \rangle) = \frac{\sigma^2(A)}{M} \left[ 1 + 2 \sum_{\mu} \left( 1 - \frac{\mu}{M} \right) \phi_\mu \right]$$
where $\phi_\mu$ is the autocorrelation function defined by

$$\phi_\mu = \frac{\langle A_\mu A_0 \rangle - \langle A \rangle^2}{\langle A^2 \rangle - \langle A \rangle^2}$$
We can then use the autocorrelation function to estimate the error bar. Fortunately, a much simpler method based on block averaging is available.[6]
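A minimal sketch of block averaging, using a synthetic AR(1) series as a stand-in for a time-correlated MD observable (the process and its parameters are illustrative, not from the cited reference): the series is split into contiguous blocks long enough that the block means are nearly independent, and the error bar is computed from their scatter.

```python
import math
import random

def block_average_error(data, n_blocks=20):
    """Estimate the standard error of the mean of a (possibly correlated)
    series by block averaging: split the series into contiguous blocks,
    average each block, and treat the block means as nearly independent."""
    m = len(data) // n_blocks
    block_means = [sum(data[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
    grand = sum(block_means) / n_blocks
    var = sum((v - grand) ** 2 for v in block_means) / (n_blocks - 1)
    return math.sqrt(var / n_blocks)

# Synthetic correlated series: an AR(1) process A_t = 0.9 * A_{t-1} + noise.
random.seed(0)
a, series = 0.0, []
for _ in range(20000):
    a = 0.9 * a + random.gauss(0.0, 1.0)
    series.append(a)

mean = sum(series) / len(series)
var_a = sum((v - mean) ** 2 for v in series) / len(series)
naive = math.sqrt(var_a / len(series))  # sigma^2(A)/M: valid only if independent
blocked = block_average_error(series)   # accounts for the time correlation
```

For this correlated series the blocked estimate comes out several times larger than the naive one, reflecting the reduced effective number of independent measurements.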
Scientific data verification
Measurements generally have a small amount of error, and repeated measurements of the same item will generally result in slight differences in readings. These differences can be analyzed, and follow certain known mathematical and statistical properties. Should a set of data appear to be too faithful to the hypothesis, i.e., the amount of error that would normally be in such measurements does not appear, a conclusion can be drawn that the data may have been forged. Error analysis alone is typically not sufficient to prove that data have been falsified or fabricated, but it may provide the supporting evidence necessary to confirm suspicions of misconduct.
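One simple check along these lines (an illustrative sketch, not a full forensic test) is a chi-square statistic comparing the scatter of repeated readings against the claimed per-measurement uncertainty; the readings and the claimed sigma below are invented:

```python
def scatter_chi2(measurements, claimed_sigma):
    """Chi-square statistic for the scatter of repeated measurements about
    their mean, given the claimed per-measurement uncertainty. For honest
    data it should come out near n - 1; a value far below that signals
    suspiciously little scatter."""
    n = len(measurements)
    mean = sum(measurements) / n
    return sum((x - mean) ** 2 for x in measurements) / claimed_sigma ** 2

# Hypothetical readings that agree far more closely than the claimed
# uncertainty of 0.5 would normally allow (expect chi2 near n - 1 = 9):
readings = [10.01, 10.00, 10.02, 9.99, 10.01, 10.00, 10.01, 9.99, 10.00, 10.02]
chi2 = scatter_chi2(readings, 0.5)  # comes out far below 9
```

A statistic this far below its expected value would not by itself prove misconduct, but it would flag the data set for closer scrutiny.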
References
- James W. Haefner (1996). Modeling Biological Systems: Principles and Applications. Springer. pp. 186–189. ISBN 0412042010.
- Tucker, W. (2011). Validated numerics: a short introduction to rigorous computations. Princeton University Press.
- Francis J. Scheid (1988). Schaum's Outline of Theory and Problems of Numerical Analysis. McGraw-Hill Professional. p. 11. ISBN 0070552215.
- James H. Wilkinson (8 September 2003). "Error Analysis". In Anthony Ralston; Edwin D. Reilly; David Hemmendinger (eds.), Encyclopedia of Computer Science. Wiley. pp. 669–674. ISBN 978-0-470-86412-8. Retrieved 14 May 2013.
- Bo Einarsson (2005). Accuracy and reliability in scientific computing. SIAM. pp. 50–. ISBN 978-0-89871-815-7. Retrieved 14 May 2013.
- D. C. Rapaport, The Art of Molecular Dynamics Simulation, Cambridge University Press.