To Be Uncertain or In Error?
- Evolving concepts of analytical quality
- Precision and accuracy
- Total analytical error
- Operating specifications
- ISO concepts of trueness and uncertainty
- Difficulties in implementing the ISO recommendation
  - Need for statistical skills
  - Trend towards lower-skilled personnel
  - No practical benefit over error concepts & terminology
  - Quality requirements already defined as allowable errors
  - Changes will take a long time
- And the answer is….
Efforts to provide worldwide standards have been led by the International Organization for Standardization, commonly known as ISO. The ISO standards were initially aimed at industry but are now being adapted to healthcare laboratories. One of the areas currently under discussion is the standardization of concepts, definitions, and terminology related to analytical quality management, as discussed by Dr. Xavier Fuentes-Arderiu in an essay on "Trueness and Uncertainty" and an associated "Glossary of ISO Metrological and Related Terms and Definitions Relevant to Clinical Laboratory Sciences."
I applaud efforts to standardize and clarify concepts, definitions, and terminology, but I believe there are serious issues that need to be resolved before the proposed ISO definitions and terminology are adopted.
Precision and accuracy were the terms being used to describe the performance of analytical measurements in the 1960s, when healthcare laboratories began to be automated and to grow into high-volume production operations. In contrast to many industrial applications, where replicate test measurements are made routinely, single test measurements are the norm in healthcare testing. This distinction is important because the effects of imprecision can be minimized by performing multiple measurements; hence, industrial applications could easily control the amount of random analytical error by increasing the number of replicate measurements. Therefore, the most important measure of industrial quality was the inaccuracy or systematic error.
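The replication point can be illustrated with a small simulation (the numbers are hypothetical, chosen only for illustration): the standard deviation of a mean of n replicates shrinks by a factor of √n, which is why industry could average away random error while a single-measurement laboratory cannot.

```python
import random
import statistics

random.seed(1)
true_value = 100.0
sd = 4.0  # assumed analytical imprecision (1 SD) of a single measurement

def measure():
    # One measurement = true value plus random analytical error
    return random.gauss(true_value, sd)

# Single measurements carry the full imprecision (SD near 4)
singles = [measure() for _ in range(10000)]

# Means of n = 16 replicates have SD near 4 / sqrt(16) = 1
means_16 = [statistics.mean(measure() for _ in range(16)) for _ in range(10000)]

print(round(statistics.stdev(singles), 1))   # close to 4
print(round(statistics.stdev(means_16), 1))  # close to 1
```

The systematic error, by contrast, is unaffected by averaging, which is why inaccuracy became the dominant industrial quality measure.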
Total analytical error was recommended in the mid-70s as a better way to evaluate the performance of clinical measurements, where only a single measurement is generally made in determining a test result. Any individual test result may be in error due to both the imprecision and inaccuracy of the method; therefore, the combination of the two errors, or the total analytical error, determines the quality of the test result. In developing this new concept, the emphasis was on "errors", their random and systematic components, and the net or total effect of those components. It took over a decade for this concept to become established in healthcare laboratories. Its relevance is especially clear in the US today because the CLIA laboratory regulations specify criteria for acceptable performance in the form of allowable total errors that are used in grading analytical performance in proficiency testing surveys.
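As a sketch of the combination (the multiplier and example numbers are illustrative, not taken from the essay): total analytical error is commonly estimated as the systematic error plus a multiple of the random error, e.g. TE = |bias| + 1.65·SD for a one-sided 95% limit, and the estimate is then compared against the allowable total error.

```python
def total_error(bias, sd, z=1.65):
    """Estimate total analytical error from bias (systematic error)
    and SD (random error); z = 1.65 gives a one-sided 95% limit."""
    return abs(bias) + z * sd

# Illustrative: 2.0 units of bias and an SD of 1.0 give TE = 3.65,
# which would then be judged against an allowable total error.
print(total_error(2.0, 1.0))  # → 3.65
```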
Operating specifications were developed in the early 90s as an outgrowth of total quality management and the interest in making quality control a quantitative technique for managing routine production. The operating specifications for a method define the imprecision and inaccuracy that are allowable and the QC that is necessary (control rules, number of control measurements) to assure that a stated quality requirement will be achieved in routine production. The important point here is that operating specifications consider both the stable performance of the method (imprecision and inaccuracy) and the capability of the QC procedure to detect changes (unstable performance).
Trueness is used by ISO to describe the "closeness of agreement between the mean obtained from a large series of results of measurement and a true value." The emphasis on the "mean obtained from a large series of results of measurement" limits this concept to the systematic error or inaccuracy (bias) of a method.
Uncertainty is used by ISO to describe a "parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand." This term could be quantitatively described by calculating a standard deviation or some multiple of it (a confidence interval).
In short, the new ISO terminology recommends trueness instead of accuracy, inaccuracy, and systematic error, and uncertainty instead of random error, imprecision, and precision. Total error wouldn't exist! Furthermore, estimates of uncertainty would combine different sources and estimates of random error through propagation-of-errors calculations. These calculations would require a competent statistician, a clinical chemist with some mathematical or statistical aptitude, or special computer programs that could be used by laboratory analysts.
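The propagation-of-errors arithmetic itself is simple for independent sources (the component values and labels below are hypothetical): the combined standard uncertainty is the square root of the sum of the individual variances, and an expanded uncertainty multiplies that by a coverage factor k (k = 2 for roughly 95% coverage). The difficulty in practice lies in identifying and estimating all the components, not in the final formula.

```python
import math

def combined_uncertainty(*u_sources):
    """Combined standard uncertainty of independent sources:
    square root of the sum of the individual variances."""
    return math.sqrt(sum(u * u for u in u_sources))

# Hypothetical components: within-run, between-run, calibration
u_c = combined_uncertainty(1.5, 1.0, 0.5)
U = 2 * u_c  # expanded uncertainty with coverage factor k = 2

print(round(u_c, 2), round(U, 2))  # u_c ≈ 1.87, U ≈ 3.74
```

A result reported as 100 with U ≈ 3.74 is the style of statement the ISO approach aims for: a value together with the dispersion that could reasonably be attributed to it.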
The objective is to describe the uncertainty of a measurement or test result. For example, a test result of 100 means a value in the range from say 96 to 104. The test result is good to within 4 units. Of course, that also means it may be off by up to 4 units, but the result isn't in error - just uncertain!