Accuracy and precision are commonly confused terms. ISO 5725 (1994) uses the two terms “trueness” and “precision” to describe the accuracy of a measurement method. Precision refers to the grouping, or closeness to one another, of repeated readings. In contrast, trueness indicates the closeness of the test results to a reference or true value; trueness is what is more commonly referred to as accuracy.
The reasons cited in ISO 5725 for variations in measurement readings include:
- the operator
- the equipment used
- the calibration of the equipment
- the environment (temperature, humidity, air pollution, etc.)
- the time elapsed between measurements.
A bullseye target is often used to illustrate the difference between accuracy and precision: accuracy corresponds to how close the shots are to the center, precision to how tightly they cluster together.
Repeatability and reproducibility are two aspects of precision. Repeatability describes the minimum variability of precision: the variation observed when conditions are held constant and the same operator uses the same instrument within a short period of time. In contrast, reproducibility describes the maximum variability of precision: the variation observed over longer periods of time, with different instruments and different operators.
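The distinction between trueness and precision can be sketched numerically. The following example uses hypothetical readings of a 100 g reference mass from two imaginary instruments: bias (mean minus true value) estimates trueness, and the sample standard deviation estimates precision.

```python
import statistics

# Hypothetical reference value: the readings below are invented for illustration.
TRUE_VALUE = 100.0

# Instrument A: readings cluster tightly but away from the true value
# (good precision, poor trueness).
instrument_a = [98.9, 99.0, 99.1, 98.9, 99.0]

# Instrument B: readings scatter widely but center on the true value
# (good trueness, poor precision).
instrument_b = [99.2, 101.1, 98.7, 100.9, 100.1]

def trueness_and_precision(readings):
    """Return (bias, spread): bias = mean - true value (trueness),
    spread = sample standard deviation (precision)."""
    bias = statistics.mean(readings) - TRUE_VALUE
    spread = statistics.stdev(readings)
    return bias, spread

for name, data in [("A", instrument_a), ("B", instrument_b)]:
    bias, spread = trueness_and_precision(data)
    print(f"Instrument {name}: bias = {bias:+.2f} g, spread = {spread:.2f} g")
```

Instrument A's small spread but large bias, and instrument B's near-zero bias but large spread, mirror the two failure modes shown on a bullseye target.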
Error is the difference between a measurement and the true value of the measurand (the quantity being measured). Error does not include mistakes in making the measurement. The total error is usually a combination of systematic error and random error.
Systematic error tends to shift all measurements in the same way, so the mean of several measurements is displaced from the true value in a predictable manner. When the true value is known, corrections can be made for systematic error.
In contrast, the random error portion of the total error varies in an unpredictable way, so it is not possible to correct for random error.
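A small simulation makes this asymmetry concrete. The bias, noise level, and true value below are invented for illustration: each simulated reading is the true value plus a fixed systematic offset plus Gaussian random noise. Subtracting the known offset removes the systematic error, but the random scatter remains.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is repeatable

TRUE_VALUE = 50.0
SYSTEMATIC_BIAS = 2.0  # hypothetical constant offset, e.g. a miscalibrated zero

# Each measurement = true value + fixed systematic bias + random noise.
readings = [TRUE_VALUE + SYSTEMATIC_BIAS + random.gauss(0, 0.5)
            for _ in range(1000)]

# The mean is displaced from the true value by roughly the systematic bias.
mean_error = statistics.mean(readings) - TRUE_VALUE

# Once the bias is known (e.g. from calibration against a reference),
# it can be subtracted out of every reading.
corrected = [r - SYSTEMATIC_BIAS for r in readings]
residual_bias = statistics.mean(corrected) - TRUE_VALUE

# The random component cannot be corrected this way: the scatter of the
# corrected readings is unchanged.
random_scatter = statistics.stdev(corrected)

print(f"mean error before correction: {mean_error:+.3f}")
print(f"residual bias after correction: {residual_bias:+.3f}")
print(f"remaining random scatter: {random_scatter:.3f}")
```

Averaging many readings shrinks the influence of random error on the mean, but no correction applied to an individual reading can remove its random component.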