Accuracy and Precision

Accuracy
In analytical chemistry, the term ‘accuracy’ is used in relation to a chemical measurement. The International Vocabulary of Basic and General Terms in Metrology (VIM) defines accuracy of measurement as… “closeness of the agreement between the result of a measurement and a true value.” The VIM reminds us that accuracy is a “qualitative concept” and that a true value is indeterminate by nature. In theory, a true value is that value that would be obtained by a perfect measurement. Since there is no perfect measurement in analytical chemistry, we can never know the true value.
Our inability to perform perfect measurements and thereby determine true values does not mean that we have to give up the concept of accuracy. However, we must add the reality of error to our understanding. For example, let's call a measurement we make x_i and use the symbol µ for the true value. We can then define the error in terms of the measured value and the true value according to the following equation:
error = x_i – µ   (14.1)
We often speak of accuracy in qualitative terms such as “good,” “expected,” “poor,” and so on. However, because we make quantitative measurements, we can also make quantitative estimates of the error of a given measurement, and hence of its accuracy. Equation 14.1 defines error as the difference between the measured result and the true value, but we cannot use it to calculate the exact error because we can never determine the true value. We can, however, estimate the error by introducing a ‘conventional true value,’ which is more appropriately called either the assigned value, the best estimate of a true value, the conventional value, or the reference value. The error can then be estimated by substituting the conventional true value for µ in equation 14.1.
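As a minimal sketch of this estimate, the snippet below applies equation 14.1 to a hypothetical measurement of a reference material; the values and units are invented for illustration.

```python
# Estimating error with equation 14.1, using a conventional true value.
measured = 49.2           # x_i: a single measured result (hypothetical, mg/L)
conventional_true = 50.0  # stands in for mu: the assigned/reference value

error = measured - conventional_true  # equation 14.1: error = x_i - mu
print(f"estimated error = {error:+.1f} mg/L")  # -> estimated error = -0.8 mg/L
```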
Errors in analytical chemistry are classified as systematic (determinate) and random (indeterminate). The VIM definitions of error, systematic error, and random error follow:
Error – the result of a measurement minus a true value of the measurand.
Systematic Error – the mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions, minus a true value of the measurand.
Random Error – the result of a measurement minus the mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions.
A systematic error is caused by a defect in the analytical method, by an improperly functioning instrument, or by the analyst. A procedure that suffers from a systematic error will always give a mean value that is different from the true value. The term ‘bias’ is sometimes used when defining and describing a systematic error: the measured value is described as being biased high or low when a systematic error is present and the calculated uncertainty of the measured value is small enough that a definite difference is seen when the measured value is compared to the conventional true value.
Some analysts prefer the term ‘determinate’ instead of systematic because it is more descriptive in stating that this type of error can be determined. A systematic error can be estimated, though it cannot be known exactly because the true value cannot be known; once identified, however, it can be corrected or avoided, i.e., it is determinate. Sources of systematic error include spectral interferences, chemical standards, volumetric ware, and analytical balances, where improper calibration or use will result in a systematic error. For example, a dirty glass pipette will always deliver less than the intended volume of liquid, and a chemical standard whose assigned value differs from the true value will always bias the measurements either high or low. The possibilities seem to be endless.
Precision
The term precision is used to describe the agreement of a set of results among themselves. Precision is usually expressed in terms of the deviation of a set of results from the arithmetic mean of the set (the mean and standard deviation are discussed later in this section). The student of analytical chemistry is taught – correctly – that good precision does not mean good accuracy. However, it sounds reasonable to assume otherwise.
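As a small sketch of how precision is usually quantified, the snippet below computes the mean and sample standard deviation of a hypothetical set of replicate results (the numbers are invented):

```python
import statistics

# Hypothetical replicate results (mg/L) for the same sample.
results = [49.8, 50.1, 49.9, 50.2, 50.0]

mean = statistics.mean(results)  # arithmetic mean of the set
s = statistics.stdev(results)    # sample standard deviation (n - 1 in the denominator)
print(f"mean = {mean:.2f}, s = {s:.2f}")  # -> mean = 50.00, s = 0.16
```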
Why doesn’t good precision mean we have good accuracy? We know from our discussion of error that there are systematic and random errors, and that the total error is the sum of the systematic error and the random error. Since truly random error is just as likely to be negative as positive, we can reason that a measurement that has only random error is accurate to within the precision of the measurement, and the more precise the measurement, the better idea we have of the true value, i.e., there is no bias in the data. In the case of random error only, good precision indicates good accuracy.
Now let's add the possibility of systematic error. We know that systematic error will produce a bias in the data away from the true value. This bias will be negative or positive depending upon its type, and several systematic errors may be at work at once. Many systematic errors can be repeated to a high degree of precision. It therefore follows that systematic errors prevent us from concluding that good precision means good accuracy. When we go about the task of determining the accuracy of a method, we are focusing upon the identification and elimination of systematic errors. Don't be misled by the statement that ‘good precision is an indication of good accuracy.’ Too many systematic errors can be repeated to a high degree of precision for this statement to be true.
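The point can be illustrated with a toy simulation, sketched below under invented conditions: both data sets have the same precision (the same random noise), but one also carries a fixed systematic error, so its mean is displaced from the true value no matter how many measurements are averaged.

```python
import random
import statistics

random.seed(1)
true_value = 100.0  # hypothetical true value of the measurand

# Random error only: zero-centered noise, so the mean converges on the true value.
random_only = [true_value + random.gauss(0, 0.5) for _ in range(1000)]

# Random error plus a fixed systematic error (a bias of +2.0, chosen arbitrarily):
# just as precise, but the mean is displaced from the true value.
biased = [true_value + 2.0 + random.gauss(0, 0.5) for _ in range(1000)]

print(statistics.mean(random_only))  # close to 100.0
print(statistics.mean(biased))       # close to 102.0: good precision, poor accuracy
```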
Significant Figures
There are two types of significant figures: measured and exact.
Measured Observations
As scientists, we get many of the numbers we report and use in our calculations from measured observations. In this instance, whether a digit is significant is determined by the accuracy and precision of the measuring device. In a number derived from a measurement, the last digit to the right expresses the uncertainty. For example, if you are sure that your low-resolution quadrupole mass spectrometer can deliver accurate measurements to a tenth of a mass unit, then you would be justified in reporting masses to a tenth of a mass unit. A measured mass of 110.1 u would thus contain four significant figures, with the last digit expressing the uncertainty, which would be plus or minus 0.05 u. Even if the instrument could spit out 10 digits past the decimal point, one should report only the significant digits.

Errors can arise in calculations if insignificant figures are used. If a number resulting from a measurement is used in a calculation that involves multiplication or division, all significant figures should be carried through the calculation, and the result should then be rounded at the end to reflect the term with the fewest significant figures. For example, 10.4 × 5.0 should be reported as 52 and not 52.0. If the calculation involves addition or subtraction, a different rule applies: one should preserve the common decimal places of the numbers involved. For example, if two numbers obtained from measurements, 10.1 and 1000.234, are added, the reported result should be 1010.3. Notice that 10.1 has 3 significant figures and 1000.234 has 7 significant figures, while the result of the addition has 5 significant figures.
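A brief sketch of the two rounding rules follows; the helper sig_round is illustrative, not a standard-library function.

```python
from math import floor, log10

def sig_round(x, sig_figs):
    """Round x to the given number of significant figures (illustrative helper)."""
    if x == 0:
        return 0.0
    return round(x, sig_figs - 1 - floor(log10(abs(x))))

# Multiplication/division: round to the fewest significant figures (5.0 has two).
print(sig_round(10.4 * 5.0, 2))   # -> 52.0, reported as 52

# Addition/subtraction: round to the fewest decimal places (10.1 has one).
print(round(10.1 + 1000.234, 1))  # -> 1010.3
```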
General rules for determining the number of significant figures in a number:
A) All non-zero numbers are significant.  
B) All zeros between significant numbers are significant, for example the number 1002  has 4 significant figures.
C) A zero after the decimal point is significant when bounded by significant figures to the left, for example the number 1002.0  has 5 significant figures.
D) Zeros to the left of a significant figure and not bounded to the left by another significant figure are not significant. For example the number 0.01 only has one significant figure.
E) Numbers ending with zero(s) written without a decimal point possess an inherent ambiguity. To remove the ambiguity, write the number in scientific notation. For example, the number 1600000 is ambiguous as to the number of significant figures it contains; the same number written 1.600 × 10^6 obviously has four significant figures.
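The rules above can be sketched in code. The counter below is a rough illustration (not a standard routine): it takes the number as a string, because the trailing-zero information in rules C and E is lost once a value is stored as a float, and for rule E it counts the unambiguous minimum.

```python
def count_sig_figs(s: str) -> int:
    """Count significant figures in a number given as a string (illustrative)."""
    s = s.lstrip("+-").lower()
    if "e" in s:                     # scientific notation: only the mantissa counts
        s = s.split("e")[0]
    digits = s.replace(".", "").lstrip("0")  # rule D: leading zeros are not significant
    if "." not in s:
        digits = digits.rstrip("0")  # rule E: trailing zeros without a decimal point
                                     # are ambiguous; count the unambiguous minimum
    return len(digits)

for example in ["1002", "1002.0", "0.01", "1600000", "1.600e6"]:
    print(example, count_sig_figs(example))
# -> 4, 5, 1, 2 (the ambiguous case), and 4, matching rules B-E above
```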
Several Notes:
1) It is important to know the accuracy and precision of the measuring device one is using and to report only those digits that have significance. To reiterate, your electrospray mass spectrometer may be able to spit out 10 digits past the decimal place, but you should use only the digits that have significance when reporting or calculating.
2) It is generally accepted that the uncertainty is plus or minus 0.5 unit at the level of the uncertainty; for example, the “true value” for the number 0.003 can be described as being bounded by the numbers 0.0025 and 0.0035. It is important to note that in some instances scientists will want to express an uncertainty that exceeds 1 at the level of the uncertainty, and this should be noted explicitly in the following fashion: 0.003 ± 0.002.
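A one-line check of those implied bounds, using exact decimal arithmetic to avoid floating-point noise (the values are taken from the example above):

```python
from decimal import Decimal

value = Decimal("0.003")
half_unit = Decimal("0.0005")  # 0.5 unit at the level of the last reported digit
print(value - half_unit, value + half_unit)  # -> 0.0025 0.0035
```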
Exact Numbers
Exact numbers are those that are counted without ambiguity; for example, the number of mass spectrometers in the lab is exactly three, or the number of cars in the parking lot is exactly four. These numbers carry no ambiguity and can be considered to have an infinite number of significant figures. When such numbers are used in a calculation, the restriction on the figures reported is borne by any measured numbers involved.