Tools, Technologies and Training for Healthcare Laboratories

Future Directions in Quality Control

Total Laboratory Automation and POC devices are two trends in current diagnostic equipment. Dr. Westgard reviews the history of quality control, from manual methods in the 1950s, through today's fourth-generation laboratory instrumentation, to the quality control systems of future instruments. Learn how automation and computerization will affect future quality control practices - what will be done for us, and what we will still have to do for ourselves.

Introduction

There is a lot of interest in changing the way laboratory quality control is done today. There are a variety of driving forces, but the strongest ones arise from efforts in Total Laboratory Automation and Point-of-Care (POC) applications, which are at opposite ends of a wide spectrum of laboratory testing operations. QC in POC applications seems to be a hot topic because POC is a broader, more established market. POC manufacturers and users have found traditional statistical quality control (SQC) difficult to implement and costly to perform, and they argue that changes in QC regulations and practices are necessary for the unit devices used in POC testing.

There are many reasons to be concerned about current QC practices [1] and it is important to recognize the need for improvements and to identify a strategy for developing those improvements. I discussed this recently at an NCCLS subcommittee meeting and at a CDC CLIAC meeting, and an outline of that presentation appeared in the handouts of a recent QC forum sponsored by AACC. My intention was to indicate that improvements in quality control should be guided by an understanding of the expected evolution of SQC and its natural integration into a total quality control (TQC) system, rather than by doing away with traditional SQC and trusting manufacturers to produce error-free devices. This position is based on my beliefs that

  • (a) quality is an important responsibility of all healthcare professionals, whether they are laboratory scientists, nurses, therapists, etc.,
  • (b) SQC will always have an important role because it gives the user an independent mechanism for assuring the quality of their own work, even when the work depends on a testing device supplied by someone else, and
  • (c) there still exist fundamental flaws in our understanding of quality, existing product labelling for quality, and manufacturers' claims and documentation of quality.

State of QC practice

Tetrault and Steindel [2], in a survey of 500 laboratories, reported that current QC practices consist of analyzing 2 or 3 controls per run and interpreting the control data with the same control rules used ten years ago. They found it common for laboratories to use runs of 8 hours or so and to evaluate QC data via software residing in instruments or laboratory information systems; thus run length and data handling have changed, even though the practices of analyzing stable control materials and interpreting control data have stayed pretty much the same. The practice for run length has clearly been influenced by the CLIA regulations that require a minimum of two different control materials to be analyzed in an 8 hour period. Minimums that are set in QC regulations can be seen to establish maximums in QC practices!

Concerns with POC applications

Current QC practices would probably still be considered acceptable, except that even these regulatory minimums are difficult to achieve in POC applications. Some of the reasons are that POC testing personnel don't have time to do QC, don't understand QC, and haven't been trained to do laboratory tests as part of their professional education and responsibilities. Managers in POC settings are concerned about the cost of doing QC, the cost of the training needed for all the personnel involved, and the ongoing cost of proving operator proficiency and validating routine performance. As customers, they ask manufacturers to solve these problems.

The perfect solution, of course, would be to have POC devices that are easy to operate and require no training, are perfectly stable and require no validation, and never ever have problems that require QC. Manufacturers have responded by developing simple-to-use, stable devices, but haven't yet demonstrated that these devices are error free. Most often these devices depend on individual reagent strips, tablets, packs, or cartridges that are expensive to manufacture and costly on a per-test basis, which further fuels the desire to change or even eliminate QC. Regulatory agencies feel the current pressure to reduce government interference in business, and especially in healthcare, where the costs cycle back to impact the government's own finances; thus they too are sensitive to current POC problems.

Reasons for changing QC

Driven primarily by cost considerations, laboratories, manufacturers, and even regulators seem to have agreed that changes in QC are needed, though there is little common understanding of what to do and how to do it. Labelling regulations require manufacturers to describe QC procedures that can be used with their analytical systems, but manufacturers' QC instructions are often inadequate. CLIA regulations [3] define a broad concept of quality control for the whole testing process, including pre- and post-analytical steps or factors, and call for manufacturers to define valid QC procedures for use with their methods. In situations where manufacturers' QC instructions have not been cleared by the government, CLIA assigns the responsibility for appropriate analytical QC to the laboratory by requiring that "...the laboratory must evaluate instrument and reagent stability and operator variance in determining the number, type, and frequency of testing calibration or control materials and establish criteria for acceptability used to monitor test performance during a run of patient specimens." [3, 493.1218, page 7166].

In spite of these regulations, few laboratories have actually changed how they do QC, even though they have implemented new generations of instrument systems and POC devices during the last ten years. Tetrault and Steindel [2] document the lack of progress and recognize that "the best set of rules will vary from method to method and cannot be determined through simple algorithms or formulas." They recommend that laboratories "balance the true error-detection capabilities against the probabilities of falsely rejecting a good run." A practical approach for doing this is provided by the QC planning process and QC planning tools presented on this website. Thus, there exists a starting point for planning QC procedures that will be appropriate for different tests and different measurement systems. When addressing new generations of analytical systems, it will also be helpful to have an understanding of the evolution of measurement and control procedures and to recognize where new devices fit in this hierarchy of analytical systems.
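As a rough illustration of the balance Tetrault and Steindel describe, the sketch below estimates the false rejection and error detection probabilities of a simple 12s rule (one control exceeding 2s limits) under the simplifying assumptions of Gaussian control data and independent control measurements; the function names and example values are illustrative, and the published power functions account for more detail than this.

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_reject_single(limit_sd: float, shift_sd: float = 0.0) -> float:
    """Probability that one control observation falls outside +/- limit_sd
    when the method is running with a systematic shift of shift_sd (in SD units)."""
    return (1.0 - phi(limit_sd - shift_sd)) + phi(-limit_sd - shift_sd)

def p_rule_triggers(limit_sd: float, n: int, shift_sd: float = 0.0) -> float:
    """Probability that at least one of n independent controls exceeds the limit."""
    return 1.0 - (1.0 - p_reject_single(limit_sd, shift_sd)) ** n

for n in (1, 2, 4):
    pfr = p_rule_triggers(2.0, n)                  # false rejection: no shift present
    ped = p_rule_triggers(2.0, n, shift_sd=2.0)    # detection of a 2 SD systematic shift
    print(f"12s rule, N={n}: Pfr={pfr:.3f}, Ped(2 SD shift)={ped:.3f}")
```

Even this crude model shows the trade-off: raising N improves error detection but also multiplies false rejections when wide 2s limits are used, which is exactly why control rules and Ns need to be planned rather than copied from one method to another.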

Evolution of laboratory measurement and control procedures

Evolutionary change should be easier to understand than revolutionary change. Therefore, it is wise to review history, appreciate experience, and plan incremental changes or improvements, rather than starting over and doing something entirely different. Stepwise improvements in laboratory QC practices should be less risky than totally new and untried approaches. In understanding the evolution of QC, it is helpful to recognize some of the distinguishing characteristics of different generations of measurement procedures, consider the complementary nature of measurement and control procedures, and observe how control procedures have evolved in response to those characteristics.

Manual methods were the mainstay of the laboratory in the 1950s when Levey and Jennings introduced statistical QC [4]. Prior to this, quality was assessed primarily by inspection, i.e., test results were reviewed and correlated with what was known about the patient's condition. This inspection practice was naturally superseded by statistical QC as the craftsman model of production changed into managed production where parts were expected to be uniform and interchangeable. The first hurdle was to get laboratories to analyze control materials and interpret the control data, which took at least a decade to accomplish. The use of a simple and practical QC procedure, such as 2s control limits (or a 12s rule) with a low N (even 1 control measurement per run), was critical for establishing the practice of QC.
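For readers who have not worked with control charts, a minimal sketch of the Levey-Jennings idea follows; the control values, mean, and SD are hypothetical, and the function simply flags observations outside 2s control limits.

```python
def levey_jennings_flags(values, mean, sd, limit=2.0):
    """Flag control observations that fall outside mean +/- limit*sd,
    as on a simple Levey-Jennings chart with 2s control limits."""
    flags = []
    for run, x in enumerate(values, start=1):
        z = (x - mean) / sd
        if abs(z) > limit:
            flags.append((run, x, round(z, 2)))
    return flags

# Hypothetical glucose control: mean 100 mg/dL, SD 2 mg/dL, one value per run
controls = [99.5, 101.2, 104.8, 98.0, 95.3]
print(levey_jennings_flags(controls, mean=100.0, sd=2.0))  # runs 3 and 5 exceed 2s
```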

First generation automated systems were developed by Technicon in the form of the AutoAnalyzer, which brought continuous flow automation into routine use in clinical laboratories during the late 1950s and set the standard of operation in the 1960s. Because this batch analyzer showed some drift in the measurement response over time, it was recognized that at least 2 controls needed to be analyzed to bracket the batch of patient samples - a QC practice that is often still in place in many laboratories today.

Second generation automation can be represented by multichannel batch analyzers, such as the SMA 12 and 12/60, which became the production workhorses of the 1970s. Existing QC practices based on the 12s control rule and 2 control measurements became a problem because of the accumulating effect of false rejections when 12 tests were performed simultaneously; any single control problem required a repeat analysis that consumed the capacity of all 12 channels. QC practices, such as multirule procedures [5], were developed to minimize false rejections by applying rules that individually have very low false rejection rates, while at the same time maximizing error detection by applying multiple rules or criteria for rejection.
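The multirule logic can be sketched in a few lines; the version below is a simplified reading of the commonly cited rules (13s, 22s, R4s, 41s, 10x) applied to a single chronological series of control z-scores, and it glosses over the within-run versus across-run distinctions made in reference [5].

```python
def westgard_multirule(z):
    """Evaluate a chronological list of control z-scores against a simplified
    version of the classic multirule criteria and return the rules violated
    by the most recent observations."""
    violated = []
    if abs(z[-1]) > 3:                                        # 1_3s: one control beyond 3 SD
        violated.append("1_3s")
    if len(z) >= 2 and (min(z[-2:]) > 2 or max(z[-2:]) < -2):
        violated.append("2_2s")                               # two consecutive beyond the same 2 SD limit
    if len(z) >= 2 and max(z[-2:]) > 2 and min(z[-2:]) < -2:
        violated.append("R_4s")                               # range across controls exceeds 4 SD
    if len(z) >= 4 and (min(z[-4:]) > 1 or max(z[-4:]) < -1):
        violated.append("4_1s")                               # four consecutive beyond the same 1 SD limit
    if len(z) >= 10 and (min(z[-10:]) > 0 or max(z[-10:]) < 0):
        violated.append("10_x")                               # ten consecutive on one side of the mean
    return violated

print(westgard_multirule([0.5, -1.2, 2.3, 2.6]))  # -> ['2_2s']
```

Because each individual rule has a very low false rejection rate, combining them keeps false alarms down while still providing sensitivity to both random and systematic errors.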

Third generation automation is represented by stable random access analyzers, such as the DuPont aca that was introduced in the 1970s, plus a wide variety of other stable random access instruments that became available in the 70s and 80s. The long-term stability of these systems was their outstanding characteristic; it led to a reduction in the number of controls used and challenged the practice of bracketing patient samples with QC samples. It became acceptable practice to apply different QC practices to different instruments because of differences in stability, but the same control rules and Ns were generally used for all tests on a given instrument.

Fourth generation automation can be characterized as high-precision and high-stability analyzers, such as the BMC/Hitachi 700 series that were introduced in the mid 1980s. The high precision led to the design of QC procedures for individual tests based on the quality required for that test [6]. Quantitative QC planning processes were developed to take into account the analytical or clinical quality required, the observed method imprecision and inaccuracy, and the expected error detection and false rejection characteristics of the control rules and Ns used. QC planning tools have been developed to support this process and make it possible to quickly and easily select the appropriate control rules and Ns.
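One quantity that such a planning process typically works with is the size of the systematic error that must be detected before the quality requirement is exceeded. Here is a hedged sketch of that calculation; the 1.65 term assumes a Gaussian error model with a maximum 5% defect rate, and the parameter names and example values are illustrative.

```python
def critical_systematic_error(tea_pct, bias_pct, cv_pct):
    """Systematic shift (in multiples of the SD) that must be detected to keep
    the defect rate below roughly 5%: dSEcrit = (TEa - |bias|)/CV - 1.65."""
    return (tea_pct - abs(bias_pct)) / cv_pct - 1.65

# Hypothetical test: allowable total error 10%, observed bias 1%, CV 2%
print(critical_systematic_error(10.0, 1.0, 2.0))  # -> 2.85 SD
```

A large critical error like this can be detected reliably with simple rules and low Ns; a small one pushes the design toward multirule procedures and more control measurements.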

In summary, as analytical systems have developed and improved, industrial process control has been progressively adapted to the laboratory: first to reflect practical considerations for manual implementation of simple Levey-Jennings charts, then to apply a bracketing approach for early continuous flow automated systems, then to implement multirule procedures that minimize false rejections on simultaneous multichannel automated systems, then to individualize QC designs and expand run lengths for instruments with improved stability, and finally to individualize QC designs for the quality required by each test on instruments with improved analytical performance. Throughout this evolution, laboratory process control has been optimized for cost-effective operation based on the false rejection and error detection characteristics of the QC procedure, the stability and analytical performance (imprecision and inaccuracy) of the measurement procedure, and the quality required by the test itself.

QC for next generation automated systems

Complete automation of the testing process is the goal of the next generation total laboratory automation. This automation is targeted to include the steps of specimen processing, transport, and loading into instrument systems, as well as the automatic release and distribution of test results. These systems also consolidate tests that were traditionally performed by different sections of a laboratory, providing high-volume broad-menu work stations. There are even discussions of a "virtual laboratory," where computerization is extended to optimize the test orders themselves, distribute specimens to a network of laboratories based on the quality needed in relation to the performance capabilities and costs of the different laboratories, and validate the medical utility of the test results.

The distinguishing characteristics of these next generation systems are their more inclusive automation, more extensive computerization, and more expansive test menus. However, the driving force for implementing these new systems is still cost reduction, which will have to be achieved by reducing the number of laboratory workstations, reducing the number of analysts, reducing the need for specialized personnel, increasing the menu of tests that can be performed by individual analysts, and lowering the skill level and wages of the remaining personnel.

These changes will also require more automation and computerization of laboratory QC systems, which will need to encompass on-line monitoring and tracking of specimens through the steps in the automated laboratory, automatic flagging of samples having analytical problems, automatic re-analysis when needed, and automatic validation and release of test results. There will be increased use of patient data to check internal consistency, prioritize reporting, initiate additional testing, monitor the stability of pre-analytical steps, and provide measures of run length. Even the design of the QC procedures themselves will need to be automated to maximize the quality and productivity of the testing processes, to adapt to differences in performance for the same test on different analytical instruments, and to respond to changes in performance of a given method over time. Thus considerable effort and computer resources will need to be deployed to provide the automatic QC that will be necessary in the totally automated laboratory.
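As one hedged example of the kind of patient-data monitoring mentioned above, the sketch below tracks a rolling mean of consecutive patient results and flags sustained drift; it is a simplified stand-in rather than any specific published algorithm, and the analyte, target, window, and limit are all hypothetical.

```python
import random
from collections import deque

def patient_mean_drift(results, target, window=20, limit_pct=3.0):
    """Flag positions where the mean of the last `window` patient results
    drifts more than limit_pct from the expected population target."""
    buf = deque(maxlen=window)
    alerts = []
    for i, x in enumerate(results, start=1):
        buf.append(x)
        if len(buf) == window:
            mean = sum(buf) / window
            if abs(mean - target) / target * 100.0 > limit_pct:
                alerts.append((i, round(mean, 2)))
    return alerts

# Hypothetical sodium results (mmol/L): 40 stable results, then a 6 mmol/L shift
random.seed(1)
stable = [random.gauss(140, 3) for _ in range(40)]
shifted = [random.gauss(146, 3) for _ in range(40)]
print(patient_mean_drift(stable + shifted, target=140.0))
```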

QC for POC applications

The distinguishing characteristics of POC devices tend to be stability and simplicity, the latter being achieved to some extent by automation of pre-analytical, analytical, and post-analytical steps. Such characteristics are also found in 3rd to 5th generation systems. Unfortunately, POC devices are less likely to have the improved analytical performance found in 4th and 5th generation systems and are likely to be subject to more operator variance, thus they would seem to require individualized QC designs that take into account the quality required, the performance observed, and the stability expected. Monitoring of pre- and post-analytical steps is critical because of the limited laboratory training and analytical experience expected of the operators. This requires a broad or total QC strategy that integrates different components, including SQC, into a Total Quality Control (TQC) system.

This TQC system will likely vary from one POC application to the next, owing to the different performance characteristics of the devices, the different quality requirements in the applications, and the particular costs in each setting. Just as "one size fits all" SQC is not realistic in clinical laboratories, it should not be expected that the same TQC system will apply for all POC applications. A careful design approach will be necessary to consider the special factors in each application and plan the TQC system. Obtaining the right combination of different QC components in the TQC system will require a quality-cost optimization (or cost-benefit analysis) to balance the prevention and appraisal costs against the potential internal and external failure costs (i.e., the costs of treatment and outcomes if test results were incorrect). Thus, a simple do-little QC approach would not appear to be a realistic expectation in POC applications, contrary to the wishes of POC users, manufacturers, and regulators and accreditors.
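The cost-benefit balance described above can be made concrete with an intentionally simple expected-cost model; all of the numbers and cost categories below are hypothetical and serve only to show how appraisal cost trades off against the cost of errors that escape detection.

```python
def expected_qc_cost(appraisal_cost, p_error, p_detect, failure_cost):
    """Expected cost per reporting period: what is spent on QC checks plus the
    expected cost of erroneous results that slip through undetected."""
    return appraisal_cost + p_error * (1.0 - p_detect) * failure_cost

# Two hypothetical strategies for a POC device (costs in arbitrary units)
minimal_qc = expected_qc_cost(appraisal_cost=2.0, p_error=0.02, p_detect=0.3, failure_cost=1000.0)
fuller_qc = expected_qc_cost(appraisal_cost=8.0, p_error=0.02, p_detect=0.9, failure_cost=1000.0)
print(minimal_qc, fuller_qc)  # 16.0 vs 10.0: the cheaper-looking QC is costlier overall
```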

TQC systems may actually complicate the lives of manufacturers, users, and regulators and accreditors by requiring more detailed QC checks and more documentation than ever before. Each variable, factor, and step in the process may potentially need a separate QC check to be sure that each is okay, rather than monitoring a combination of variables, factors, and steps through a single statistical QC check. The control of many of these variables may depend on assessing the competency of the operator, which is difficult to monitor in any quantitative way except by statistical QC. If these unit devices are actually uniform, as claimed by manufacturers, then the easiest, most objective, and most quantitative check of operator competency will be to analyze controls and apply statistical QC. To argue that statistical QC is not applicable is also to admit that production is not uniform, in which case operator competency is not the main issue - the device shouldn't be on the market.

Thus, it will still be valuable to apply traditional SQC to sample the supply of unit devices and maintain an independent check, though the interval for analyzing controls may be optimized to coincide with specific variables or factors that need to be examined (such as a new lot number, local storage of a sub-lot, or a new operator). SQC should remain an important and practical tool for validating the quality of POC testing. Everyone will probably be unhappy with this conclusion and will continue to ask for changes in QC!

What else can be done?

If an analytical system were perfectly stable and never had a problem, there would be no reason to do QC; all QC would be a waste of time and effort! While many of today's analytical systems certainly perform better than systems in the past, how close to perfection are they? Unfortunately, information about "close to perfection" isn't available. We need to know how often test results are defective or have errors large enough to invalidate the medical usefulness of the test. This information won't be available unless there are some fundamental changes in current practices, beginning with the need to understand quality in terms of defects, to label products and make claims for quality in terms of defect rates, and to document the defect rates that can be expected in field applications.

There is little discussion and understanding of defects and defect rates in laboratory testing, even though the defect is the universal identifier of poor quality products or services and the defect rate (that describes the proportion of products or services that fail to meet the requirement for quality) is the universal measure of quality [see Bob Burnett's discussion of Defect rates, quality, and productivity]. Defects cannot be identified without defining the quality requirement - a key parameter that is currently unavailable in product labelling and often undefined even in professional laboratory practice and in healthcare in general! It is inadequate to make claims and label product performance in terms of characteristics such as imprecision and inaccuracy that only characterize stable performance. Defects and defect rates are concerned with how often things go wrong - the unstable performance of the product which is not adequately studied, characterized, labeled, and documented. Thus there are some basic issues that need to be addressed if both manufacturers and users are to objectively deal with the quality of testing devices and the value and usefulness of different kinds of QC practices.
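To show what labelling in terms of defect rates could look like, here is a hedged sketch that converts stable imprecision and bias into an expected defect rate relative to a stated quality requirement; it assumes Gaussian errors, expresses the allowable total error (TEa) and bias in multiples of the method SD, and uses illustrative function names and values.

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def defect_rate(tea_sd: float, bias_sd: float = 0.0) -> float:
    """Fraction of results whose error exceeds the quality requirement, with
    TEa and bias both expressed in multiples of the method SD."""
    return (1.0 - phi(tea_sd - bias_sd)) + phi(-tea_sd - bias_sd)

# Stable operation with a 4 SD margin versus an undetected 2 SD shift
print(f"{defect_rate(4.0) * 1e6:.0f} defects per million")               # about 63
print(f"{defect_rate(4.0, bias_sd=2.0) * 1e6:.0f} defects per million")  # about 22,750
```

Figures of this kind, rather than imprecision and inaccuracy alone, are what would let users judge how often a device can be expected to go wrong.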

References

  1. Westgard JO. QA: Are laboratories assuring, assessing, or assuming the quality of clinical testing today? Proceedings of the CDC 1995 Institute on Critical Issues in Health Laboratory Practice: Frontiers in Laboratory Practice Research. CDC, Atlanta, GA, 1996, pp 179-189.
  2. Tetrault GA, Steindel SJ. QProbe 94-08. Daily quality control exceptions practices. Chicago, IL: College of American Pathologists, 1994.
  3. U.S. Department of Health and Human Services. Medicare, Medicaid, and CLIA Programs: Regulations implementing the Clinical Laboratory Improvement Amendments of 1988 (CLIA). Final Rule. Fed Regist 1992(Feb 28);57:7002-7186.
  4. Levey S, Jennings ER. The use of control charts in the clinical laboratory. Am J Clin Pathol 1950;20:1059-66.
  5. Westgard JO, Barry PL, Hunt MR, Groth T. A multi-rule Shewhart chart for quality control in clinical chemistry. Clin Chem 1981;27:493-501.
  6. Koch DD, Oryall JJ, Quam EF, Felbruegge DH, Dowd DE, Barry PL, Westgard JO. Selection of medically useful QC procedures for individual tests on a multi-test analytical system. Clin Chem 1990;36:230-3.