Tools, Technologies and Training for Healthcare Laboratories

To Be Uncertain or In Error?

Do you know the new ISO terms? Dr. Westgard weighs in on the new terminology and suggests there are practical applications for the new uncertainty concepts as well as the old error concepts. He also asks, should we be spending time improving terminology or improving lab performance? (Preview)

July 1999

Efforts to provide worldwide standards have been led by the International Organization for Standardization, or ISO as it is commonly known. The ISO standards were initially aimed at industry but are now being adapted to healthcare laboratories. One of the areas currently under discussion is the standardization of concepts, definitions, and terminology related to analytical quality management, as discussed by Dr. Xavier Fuentes-Arderiu in an essay on "Trueness and Uncertainty" and an associated "Glossary of ISO Metrological and Related Terms and Definitions Relevant to Clinical Laboratory Sciences."

I applaud efforts to standardize and clarify concepts, definitions, and terminology, but I believe there are serious issues that need to be resolved before the proposed ISO definitions and terminology are adopted.

Evolving concepts of analytical quality

Precision and accuracy were the terms used to describe the performance of analytical measurements in the 1960s, when healthcare laboratories began to be automated and to grow into high-volume production operations. In contrast to many industrial applications, where replicate test measurements are made routinely, single test measurements are the norm in healthcare testing. This distinction is important because the effects of imprecision can be minimized by performing multiple measurements; industrial applications could easily control the amount of random analytical error by increasing the number of replicate measurements. Therefore, the most important measure of industrial quality was inaccuracy, or systematic error.
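
To make the point about replication concrete, here is a minimal simulation sketch (not from the original article; the true value and single-measurement SD are invented for illustration) showing how the scatter of a reported result shrinks when it is the mean of several replicates:

```python
import random
import statistics

random.seed(1)
true_value = 100.0
sd_single = 4.0  # assumed analytical SD of a single measurement (illustrative)

def reported_result(n_replicates):
    """Mean of n replicate measurements of the same sample."""
    return statistics.mean(random.gauss(true_value, sd_single) for _ in range(n_replicates))

for n in (1, 4, 16):
    results = [reported_result(n) for _ in range(10000)]
    print(f"n={n:2d}  observed SD of reported result: {statistics.stdev(results):.2f}  "
          f"(theory: {sd_single / n ** 0.5:.2f})")
```

With 16 replicates the random error of the reported mean is only about a quarter of the single-measurement SD, which is why averaging makes bias, not imprecision, the dominant concern in such settings.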

Total analytic error was recommended in the mid-1970s [1] as a better way to evaluate the performance of clinical measurements, where only a single measurement is generally made in determining a test result. Any individual test result may be in error due to both the imprecision and the inaccuracy of the method; therefore the combination of the two, the total analytical error, determines the quality of the test result. In developing this new concept, the emphasis was on "errors", their random and systematic components, and the net or total effect of those components. It took over a decade for this concept to become established in healthcare laboratories. Its relevance is especially clear in the US today because the CLIA laboratory regulations specify criteria for acceptable performance in the form of allowable total errors, which are used to grade analytical performance in proficiency testing surveys [2].
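
As a hedged illustration of how the two error components can be combined into a single worst-case figure, the sketch below uses the common linear combination of bias and a multiple of the SD; the bias, SD, z value, and allowable total error are assumed numbers, not figures from the article:

```python
# Hedged sketch of the total-error idea; bias, sd, and TEa are assumed values.
bias = 2.0   # systematic error, in the analyte's units
sd = 1.5     # analytical standard deviation (imprecision)
z = 1.65     # covers roughly 95% of the random error on one side

total_error = abs(bias) + z * sd
print(f"Estimated total analytical error: {total_error:.2f} units")

TEa = 6.0    # an assumed allowable total error (quality requirement)
print("Meets the quality requirement" if total_error <= TEa
      else "Exceeds the quality requirement")
```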

Operating specifications were developed in the early 1990s as an outgrowth of total quality management and the interest in making quality control a quantitative technique for managing routine production [3]. The operating specifications for a method define the imprecision and inaccuracy that are allowable and the QC that is necessary (control rules, number of control measurements) to assure that a stated quality requirement will be achieved in routine production. The important point here is that operating specifications consider both the stable performance of the method (imprecision and inaccuracy) and the capability of the QC procedure to detect changes (unstable performance).
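
A rough sketch of this reasoning is the critical-systematic-error calculation: given a quality requirement and the method's stable bias and SD, how large a shift must the QC procedure be able to detect before results exceed the requirement? The numbers below are assumed, for illustration only:

```python
# Illustrative numbers only: quality requirement and stable method performance.
TEa = 10.0   # allowable total error (quality requirement)
bias = 1.0   # stable inaccuracy of the method
sd = 2.0     # stable imprecision of the method

# Critical systematic error, in multiples of the SD: the size of shift that the
# control rules and number of control measurements must detect reliably.
delta_se_crit = (TEa - abs(bias)) / sd - 1.65
print(f"Critical systematic error to detect: {delta_se_crit:.2f} SD")
```

The smaller this critical shift, the more demanding the control rules and the more control measurements the operating specifications must call for.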

ISO concepts of trueness and uncertainty

Trueness is used by ISO to describe the "closeness of agreement between the mean obtained from a large series of results of measurement and a true value." The emphasis on the "mean obtained from a large series of results of measurement" limits this concept to the systematic error or inaccuracy (bias) of a method.

Uncertainty is used by ISO to describe a "parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand." This term could be quantified by calculating a standard deviation or some multiple of it (a confidence interval).

In short, the new ISO terminology recommends "trueness" in place of accuracy, inaccuracy, and systematic error, and "uncertainty" in place of random error, imprecision, and precision. Total error wouldn't exist! Furthermore, estimates of uncertainty would combine different sources and estimates of random error through propagation-of-errors calculations. These calculations would require a competent statistician, a clinical chemist with some mathematical or statistical aptitude, or special computer programs that could be used by laboratory analysts.
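
For readers who want to see what such a propagation-of-errors calculation looks like, here is a minimal sketch of an uncertainty budget; the component names and values are invented purely for illustration, and only the simplest case of independent components is shown:

```python
from math import sqrt

# Invented uncertainty components, purely for illustration.
components = {
    "within-run imprecision": 1.2,
    "between-run imprecision": 0.9,
    "calibrator value": 0.6,
}

# Independent standard uncertainties combine by root-sum-of-squares,
# then a coverage factor k expands the result into an interval.
u_combined = sqrt(sum(u ** 2 for u in components.values()))
k = 2  # coverage factor, roughly 95% confidence
print(f"Combined standard uncertainty: {u_combined:.2f}")
print(f"Expanded uncertainty (k={k}): +/-{k * u_combined:.2f}")
```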

The objective is to describe the uncertainty of a measurement or test result. For example, a test result of 100 means a value in the range from, say, 96 to 104. The test result is good to within 4 units. Of course, that also means it may be off by up to 4 units, but the result isn't in error - just uncertain!

 

We invite you to read the rest of this article

This complete article and many more essays can be found in the Nothing but the Truth about Quality manual, available in our online store. You can also download the Table of Contents and additional chapters here.