Tools, Technologies and Training for Healthcare Laboratories

Need for a System of Quality Standards

We've got TEa, performance criteria, clinical outcome criteria, proficiency testing criteria, total biologic goals, and, of course, "state of the art." It's no wonder people are confused about quality requirements. Find out how CLIA, NCEP, biological goals, clinical decision intervals, and other quality standards can (and should) be reconciled.

Abstract

The management of analytical quality depends on the careful evaluation of the imprecision and inaccuracy of laboratory methods and the application of statistical quality control procedures to detect medically important analytical errors that may occur during routine analysis. Different forms of quality standards have been recommended in the literature. All need to be translated into operating specifications for the imprecision, inaccuracy, control rules, and number of control measurements that are necessary to assure the analytical quality during routine production of test results. A system of quality standards is recommended to incorporate clinical outcome criteria, analytical outcome criteria, performance criteria, and operating specifications.

Introduction

Statistical quality control was introduced in clinical laboratories by Levey and Jennings in 1950 [1] and became standard practice in most laboratories in the 1960s. During that decade, efforts to improve analytical quality also focused on the establishment of method evaluation protocols, e.g., Barnett in 1965 [2] and Broughton et al in 1969 [3]. Also, the first recommendations for establishing standards of quality for laboratory tests were published by Tonks in 1963 [4], Barnett in 1968 [5], and Cotlove, Harris, and Williams in 1970 [6]. It is notable that these seminal publications introduced three different approaches for defining standards for quality:

  • Tonks looked at the distribution of test results for a healthy population,
  • Barnett assessed the medically important change in a test result, and
  • Cotlove et al made use of the distribution of test results for a healthy individual.

These different approaches have also led to different formats for stating the quality required for a test, such as:

  • the allowable total error (TE),
  • the medically allowable standard deviation (SD), and
  • the medically allowable bias.

Since that time, there has been extensive debate about the preferred approach for defining standards of quality. However, few laboratories today actually utilize any form of quality standards in the management of their testing services. Methods are more often validated against the claims of manufacturers, who tend to set their performance goals on the basis of the "state of the art" in order to be competitive in the marketplace. Statistical QC procedures are arbitrarily selected, without consideration of the quality needed for the test.

Objective management of analytical quality

The importance of establishing objective criteria for selecting analytical methods that have appropriate precision and accuracy was discussed some 25 years ago [7]. This year, 1999, one of the leading laboratory journals began advising authors that "results obtained for the performance characteristics should be compared objectively with well-documented quality specifications, i.e., published data on the state of the art, performance required by regulatory bodies such as CLIA '88, or recommendations documented by expert professional groups." [8]. This "information for authors" was followed by an editorial by Fraser and Petersen [9] that reviewed approaches and sources of quality specifications.

Also this year, the US National Committee for Clinical Laboratory Standards (NCCLS) updated its guidelines on statistical quality control [10]. The new guidelines include a section on "planning a statistical quality control procedure." The first step is to define the quality required for the test. Thus, there is a new emphasis on the use of standards of quality for managing analytical methods in laboratories today - both for evaluating method performance and establishing appropriate statistical QC procedures.

The theoretical basis for relating test quality to statistical control rules and numbers of control measurements was developed some 20 years ago [11], following Aronsson, deVerdier, and Groth's seminal work that introduced computer simulation as a tool for systems analysis [12]. Practical approaches for selecting appropriate control rules and numbers of control measurements have since been described in the literature [13-15] and can be readily implemented today with available QC planning tools, such as power function graphs [16], charts of operating specifications [17], and the QC Validator computer program [18].

However, analytical quality management in healthcare laboratories is not likely to improve until there are clear guidelines for defining the quality that is needed for test applications. The debate over the best type of quality specification needs to be resolved if progress is to be made.

Need for a system of quality standards

I believe the solution to the current debate is to formulate a system of quality standards. Rather than continuing to argue about the best way to define a quality standard, we need to recognize the relationships between the different types of quality standards and identify the proper application of each.

For example, there are natural distinctions among certain types of quality standards: clinical outcome criteria, analytical outcome criteria, performance criteria for imprecision and inaccuracy, and analytical operating specifications.

  • Clinical outcome criteria encompass the broadest set of variables or factors that affect the observed value of a test result.
  • Analytical outcome criteria encompass all the analytical factors, but do not consider the effects of pre- or post-analytical factors.
  • Performance criteria define the maximum allowable limits for individual characteristics, such as imprecision and inaccuracy.
  • Operating specifications identify the bench-level conditions (imprecision, inaccuracy, and QC) that are needed to assure that a defined quality criterion is satisfied in routine service.

There is a natural order from broad clinical criteria to total analytical criteria to specific performance criteria, all of which can be related to the operating specifications needed for a method.

A system of quality standards would recognize that:

  • Different quality standards require different formats, e.g., clinical outcome criteria can be defined in terms of medically important changes in test values; analytical outcome criteria can be stated in the form of allowable total errors; analytical performance criteria can be defined as the maximum allowable SD or CV and the maximum allowable bias; and analytical operating specifications are stated in terms of the imprecision (CV), inaccuracy (bias), and QC (control rules, number of control measurements) that are necessary in the daily operation of a method.
  • Different quality standards are needed for different applications, e.g., clinical outcome criteria are used in guidelines for interpretation of patient test results; analytical total error criteria are used to score proficiency testing results; analytical performance criteria are used in method evaluation studies; and criteria for imprecision, bias, and QC are used to manage the routine operation of a method.
  • Different sources of information are appropriate for defining different types of criteria, i.e., physician practice guidelines and standard clinical pathways may be useful for defining clinical outcome criteria; population biological variation may be useful for defining the allowable total error; individual biological variation may be useful for defining the maximum allowable imprecision; and the effect of systematic errors on diagnostic patient classifications may be useful for defining the maximum allowable bias.
  • Different sources of information may be available, or may be more reliable, at different times during the evolution of a testing process, e.g., for new diagnostic tests, it may be possible to define medically important changes in the test results from the initial clinical studies of a method's diagnostic sensitivity, specificity, and predictive value; for well-established tests, there is already available an extensive "data-bank" of estimates of biologic variability.
  • Different quality standards may take priority in different situations, e.g., government regulations may place a high priority on satisfying proficiency testing criteria in certain laboratory situations, whereas special patient needs may set more demanding clinical criteria in other settings.

For all these reasons, there is a need for a systems approach that incorporates the different types of quality standards, different sources of information or data for defining those standards, and different applications of those standards.

An example system for quality standards

The accompanying figure shows the relationships between certain kinds of recommendations and different types of quality criteria.

Starting at the top of the figure, medically important changes in test results can be defined by standard treatment guidelines (clinical pathways, clinical practice guidelines, etc.) to establish clinical outcome criteria (or decision intervals, Dint). Such clinical criteria can be converted to laboratory operating specifications for imprecision (smeas), inaccuracy (biasmeas), and QC (control rules, N) by a clinical quality-planning model [19] that takes into account pre-analytical factors, such as individual or within-subject biologic variation (swsub). Note that some earlier models did not account for within-subject biological variability; as a result, their recommendations for medically allowable standard deviations were erroneously large.
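
To make the effect of within-subject variation concrete, the sketch below uses the familiar significant-change (reference change value) logic rather than the full clinical quality-planning model of [19]: if a change of size Dint between two serial results is to be detectable, the analytical SD must fit within what remains of Dint after within-subject variation is accounted for. The z-value and the numbers are illustrative assumptions, not values from the cited models.

```python
import math

def max_allowable_analytical_sd(d_int, s_wsub, z=1.96):
    """Largest analytical SD (smeas) compatible with detecting a change of
    size d_int between two serial results, given within-subject biologic
    variation s_wsub (all quantities in the same units).

    Requires z * sqrt(2) * sqrt(smeas**2 + s_wsub**2) <= d_int.
    Returns None if biologic variation alone already exceeds the budget.
    """
    budget = d_int / (z * math.sqrt(2.0))
    if s_wsub >= budget:
        return None  # no analytical performance can satisfy d_int
    return math.sqrt(budget**2 - s_wsub**2)

# Illustrative numbers only: a decision interval of 20 units and a
# within-subject SD of 5 units.
print(max_allowable_analytical_sd(d_int=20.0, s_wsub=5.0))   # ~5.2
# Ignoring s_wsub would allow an SD of d_int/(z*sqrt(2)), i.e. ~7.2 --
# an erroneously large "medically allowable" SD.
```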

The left side of the figure shows how performance criteria for imprecision and inaccuracy can be defined as separate analytical goals for the maximum imprecision and bias that would be allowable for the stable performance of the method. Specifications for maximum imprecision can be derived on the basis of within-subject biological variation [20]. The maximum allowable bias can be derived from diagnostic classification models [21]. Laboratories can utilize these individual performance criteria by relating observed method performance to the maximum allowable value, calculating the critical-size error that needs to be detected to maintain satisfactory performance, and then selecting appropriate QC procedures by use of power function graphs [14].
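
A small sketch of how these separate performance criteria might be checked in practice is given below. The 0.5 x CVI limit for imprecision follows the recommendation in [20]; the bias limit is treated simply as a given number that would come from a diagnostic classification model such as [21]; and the numeric values are assumptions for illustration.

```python
def allowable_cv(cv_within_subject, fraction=0.5):
    """Desirable imprecision specification: the analytical CV should not
    exceed a set fraction (commonly 0.5) of within-subject biologic CV."""
    return fraction * cv_within_subject

def meets_performance_criteria(observed_cv, observed_bias,
                               cv_within_subject, max_bias):
    """Compare observed method performance against the separate maxima
    for imprecision and bias (stable-performance criteria)."""
    return (observed_cv <= allowable_cv(cv_within_subject)
            and abs(observed_bias) <= max_bias)

# Illustrative values only: within-subject CV of 6%, observed CV of 2.5%,
# observed bias of 1.5%, and a 3% bias limit taken as given.
print(allowable_cv(6.0))                               # 3.0 (%)
print(meets_performance_criteria(2.5, 1.5, 6.0, 3.0))  # True
```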

The right side of the figure shows how proficiency testing criteria define analytical outcome criteria in the form of allowable total errors (TEa), which can likewise be translated into operating specifications (smeas, biasmeas, control rules, N) via an analytical quality-planning model [22]. Note that the allowable total error can also be set on the basis of total biologic goals that are population based or individual based [14], therefore the extensive data-bank of individual biologic variation can also be utilized in this situation.
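
The arithmetic behind this translation can be sketched as follows. This is a simplified error-budget calculation in the spirit of [15], using the customary 1.65 and 1.96 multipliers; the numeric example (TEa, bias, CV) is illustrative rather than taken from any specific analyte in this article.

```python
def critical_systematic_error(tea, bias, cv, z=1.65):
    """Size of the systematic shift (in multiples of the analytical SD)
    that QC must detect before the allowable total error TEa is exceeded
    (z = 1.65 corresponds to roughly a 5% maximum defect rate)."""
    return (tea - abs(bias)) / cv - z

def critical_random_error(tea, bias, cv, z=1.96):
    """Factor by which imprecision can increase before TEa is exceeded."""
    return (tea - abs(bias)) / (z * cv)

# Illustrative case: TEa = 10%, bias = 2%, CV = 3% (all in percent).
print(critical_systematic_error(10.0, 2.0, 3.0))   # ~1.02 SD shift
print(critical_random_error(10.0, 2.0, 3.0))       # ~1.36-fold increase
```

The smaller these critical errors are, the more powerful the control rules and the larger the number of control measurements (N) that are needed, which is what power function graphs and OPSpecs charts display.
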
Conclusions

In the absence of any defined standards of quality, the laboratory is left to accept the manufacturer's assessment of the precision and accuracy that are needed and to accept consensus guidelines on the QC that is needed, as shown at the bottom of the second figure. "State of the art" analytical performance sets the specifications for imprecision and inaccuracy because manufacturers tend to set their product performance goals on the basis of the performance needed to be competitive in the marketplace. Arbitrary control exists instead of quality control because QC practices are set on the basis of professional practice, regulatory, or accreditation guidelines, rather than the quality needed for the test.

The objective management of the analytical quality of laboratory tests depends on having defined standards of quality. A system of quality standards is necessary to promote practical applications and to facilitate comparison [23-25] of the many different recommendations for quality goals, requirements, and specifications that exist today. Clinical outcome criteria, analytical outcome criteria, and analytical performance criteria are all part of a system for analytical quality management. With the systems approach outlined here, each of these criteria can be converted to operating specifications that define the operating characteristics required of the testing process at the bench level. The bottom line is the imprecision, inaccuracy, and QC that are necessary for the laboratory to manage and assure the quality of routine testing.

One common limitation of current approaches for defining quality specifications is that the known "insensitivity" of common laboratory QC procedures is not adequately considered; therefore, the specifications apply only to stable methods. Until QC specifications are included along with specifications for imprecision and inaccuracy, quality standards will have little practical value for managing and assuring the quality of the test results produced in routine laboratory service.
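
To illustrate numerically how insensitive a common QC procedure can be, the probability of rejection for a simple 1_3s control rule under a systematic shift can be computed directly from the Gaussian distribution. The rule, the shift size, and the values of N below are chosen for illustration and are not taken from the text.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_reject_1_3s(shift_in_sd, n=1):
    """Probability that at least one of n control measurements falls
    outside +/-3 SD limits when the method is shifted by `shift_in_sd`
    analytical SDs (the 1_3s rule, assuming Gaussian control results)."""
    p_within_limits = phi(3.0 - shift_in_sd) - phi(-3.0 - shift_in_sd)
    return 1.0 - p_within_limits ** n

# A 2-SD systematic shift is caught in only ~16% of runs with N=1 and
# ~29% with N=2 -- the "insensitivity" referred to above.
print(round(p_reject_1_3s(2.0, n=1), 2))   # ~0.16
print(round(p_reject_1_3s(2.0, n=2), 2))   # ~0.29
```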

References

  1. Levey S, Jennings ER. The use of control charts in the clinical laboratory. Am J Clin Pathol 1950;20:1059-66.
  2. Barnett RN. A scheme for the comparison of quantitative methods. Am J Clin Pathol 1965;43:562.
  3. Broughton PMG, Buttolph MA, Gowenlock AH, Skentelbery RG, Neill DW. Recommended scheme for the evaluation of instruments for automatic analysis in the clinical biochemistry laboratory. J Clin Pathol 1969;22:278.
  4. Tonks D. A study of the accuracy and precision of clinical chemistry determinations in 170 Canadian laboratories. Clin Chem 1963;9:217-233.
  5. Barnett RN. Medical significance of laboratory results. Am J Clin Pathol 1968;50:671-676.
  6. Cotlove E, Harris E, Williams G. Biological and analytical components of variation in long-term studies of serum constituents in normal subjects: III. Physiological and medical implications. Clin Chem 1970;16:1028-1032.
  7. Westgard JO, Carey RN, Wold S. Criteria for judging precision and accuracy in method development and evaluation. Clin Chem 1974;20:825-833.
  8. Information for authors. Clin Chem 1999;45:1-5.
  9. Fraser CG, Petersen PH. Analytical performance characteristics should be judged against objective quality specifications. Clin Chem 1999;45:321-323.
  10. C24-A3. Statistical Quality Control for Quantitative Measurements: Principles and Definitions; Approved Guidelines - Third Edition. Clinical Laboratory Standards Institute, Wayne, PA, USA, 2003.
  11. Westgard JO, Groth T, Aronsson T, Falk K, deVerdier C-H. Performance characteristics of rules for internal quality control: probabilities for false rejection and error detection. Clin Chem 1977;23:1857-1867.
  12. Aronsson T, deVerdier C-H, Groth T. Factors influencing the quality of analytical methods - a systems analysis, with use of computer simulation. Clin Chem 1974;20:738-745.
  13. Linnet K. Choosing quality-control systems to detect maximum clinically allowable analytical errors. Clin Chem 1989;35:284-288.
  14. Hyltoft Petersen P, Ricos C, Stockl D, Libeer JC, Baadenhuijsen H, Fraser C, Thienpont L. Proposed guidelines for the internal quality control of analytical results in the medical laboratory. Eur J Clin Chem Clin Biochem 1996;34:983-999.
  15. Westgard JO. Error budgets for quality management: Practical tools for planning and assuring the analytical quality of laboratory testing processes. Clin Lab Manag Review 1996;10:377-403.
  16. Westgard JO, Groth T. Power functions for statistical control rules. Clin Chem 1979;25:863-869.
  17. Westgard JO. Charts of operational process specifications (OPSpecs Charts) for assessing the precision, accuracy, and quality control needed to satisfy proficiency testing criteria. Clin Chem 1992;38:1226-1233.
  18. Westgard JO, Stein B, Westgard SA, Kennedy R. QC Validator 2.0: a computer program for automatic selection of statistical QC procedures for applications in healthcare laboratories. Comput Methods Programs Biomed 1997;53:175-186.
  19. Westgard JO, Hyltoft Petersen P, Wiebe DA. Laboratory process specifications for assuring quality in the U.S. National Cholesterol Education Program. Clin Chem 1991;37:656-661.
  20. Fraser CG, Hyltoft Petersen P, Ricos C, Haeckel R. Proposed quality specifications for the imprecision and inaccuracy of analytical systems for clinical chemistry. Eur J Clin Chem Clin Biochem 1992;30:311-317.
  21. Klee GG. Tolerance limits for short-term analytical bias and analytical imprecision derived from clinical assay specificity. Clin Chem 1993;39:1514-1518.
  22. Westgard JO, Wiebe DA. Cholesterol operational process specifications for assuring the quality required by CLIA proficiency testing. Clin Chem 1991;37:1938-1944.
  23. Westgard JO, Seehafer JJ, Barry PL. European specifications for imprecision and inaccuracy compared with operating specifications that assure the quality required by US CLIA proficiency-testing criteria. Clin Chem 1994;40:1228-1232.
  24. Westgard JO, Bawa N, Ross JW, Lawson NS. Laboratory precision performance: State of the art versus operating specifications that assure the analytical quality required by Clinical Laboratory Improvement Amendments proficiency testing. Arch Pathol Lab Med 1996;120:621-625.
  25. Westgard JO, Seehafer JJ, Barry PL. Allowable imprecision for laboratory tests based on clinical and analytical test outcome criteria. Clin Chem 1994;40:1909-1914.