Tools, Technologies and Training for Healthcare Laboratories

Multitest Chemistry Analyzer

How do you select QC for 18 different tests on a multitest instrument? Dr. Westgard illustrates how it's done, using critical-error graphs, and also explains how it can save money.

QC Procedures for Individual Tests on a Multitest Chemistry Analyzer

Note: at the time this was written, QC Validator was the QC design software available. Validator has since been replaced by EZ Rules 3, which has all the features and capabilities of the earlier version, as well as new enhancements.

A detailed application from the clinical chemistry literature [1,2] documents the selection of QC procedures for individual tests on a multitest chemistry analyzer. We thought it important to describe this application on this website because it represents a complicated situation (18 different tests) for which the QC solution was still relatively simple and easy to implement.

Remember the Hagar cartoon where Helga tells Hagar there's a bill collector at the door? Rather than getting upset, Hagar tells Helga to "Give him all the bills in the top drawer of the desk. I like simple solutions." Just because the analytical system and the selection process seem complicated doesn't mean the QC solution can't be relatively simple. In this application, we'll show you that it takes just a few different QC designs to cover the 18 tests on a multitest analyzer.

We'll again follow the steps of the QC planning process. OPSpecs charts had not been developed at the time of this study, so this multitest application makes use of power function and critical-error graphs. Please consult the QC Lesson on Power Function Graphs for more information about these QC planning tools.

1. Define the quality requirement.

In this application, analytical quality requirements were defined in terms of allowable total errors, as shown in the accompanying table. Note that this study pre-dated the publication of the CLIA proficiency testing (PT) criteria for acceptable performance; therefore, the quality requirements are not exactly the same as those defined by CLIA. The biggest difference is seen for calcium, where the CLIA allowable total error is 10%, compared to the 5% defined in this study.

2. Evaluate accuracy and precision

Imprecision was estimated from measurements on control products over 15 months of operation. Inaccuracy, or bias, was set to zero based on method evaluation studies that showed near-zero systematic differences between methods within the laboratory.
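
As a rough sketch of this step (using made-up control values and a simple coefficient-of-variation calculation, not the laboratory's actual data), cumulative imprecision for a test can be estimated from its accumulated control results:

```python
import statistics

def cv_percent(control_values):
    """Imprecision (smeas) as the coefficient of variation, in percent."""
    mean = statistics.mean(control_values)
    sd = statistics.stdev(control_values)   # sample SD (n - 1 denominator)
    return 100.0 * sd / mean

# Hypothetical glucose control results (mg/dL) accumulated over many runs
glucose_controls = [99.2, 100.8, 98.9, 101.1, 100.3, 99.6, 100.9, 99.4]
print(f"smeas = {cv_percent(glucose_controls):.2f}%")
```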

3. Calculate critical-sized errors

The calculated critical-errors varied widely from test to test, as shown in the table.

Test              TEa (%)   smeas (%)   ΔSEcrit
Sodium             3.08      0.52        4.27
Potassium         10.0       1.17        6.90
Chloride           4.0       1.04        2.20
Total CO2         10.0       2.50        2.35
Glucose            8.0       1.20        5.02
BUN               10.0       1.33        5.87
Creatinine        30.0       3.00        8.35
Calcium            5.0       1.68        1.33
Phosphorus        10.0       1.28        6.16
Uric acid         10.0       1.10        7.44
Cholesterol       10.0       1.35        5.76
Total protein     12.0       1.84        4.87
Albumin           10.0       2.13        3.04
Total bilirubin   20.0       2.20        7.44
GGT               10.0       1.17        6.90
ALP               10.0       1.17        6.90
AST               20.0       3.00        5.02
LD                20.0       3.00        5.02

Table showing analytical quality requirements (TEa, in %), analytical imprecision (smeas, in %), and calculated critical systematic errors (ΔSEcrit, in multiples of smeas) for individual tests on a multitest chemistry analyzer.
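
The tabulated critical errors follow from the standard relationship ΔSEcrit = (TEa - bias)/smeas - 1.65, with bias set to zero here. A minimal sketch (purely as a worked check of the arithmetic) that reproduces a few of the tabulated values:

```python
def critical_se(tea_pct, smeas_pct, bias_pct=0.0):
    """Critical systematic error, in multiples of smeas, that must be
    detected to keep total error within the allowable total error TEa."""
    return (tea_pct - abs(bias_pct)) / smeas_pct - 1.65

# TEa (%) and smeas (%) taken from the table above
for name, tea, smeas in [("Sodium", 3.08, 0.52), ("Chloride", 4.0, 1.04),
                         ("Calcium", 5.0, 1.68), ("Albumin", 10.0, 2.13)]:
    print(f"{name:10s} dSEcrit = {critical_se(tea, smeas):.2f}")
# Prints 4.27, 2.20, 1.33, and 3.04, matching the table
```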

4. Obtain power function graphs

Computer simulation was used to generate power function graphs. The control rules evaluated were the 12.5s, 13s, 13.5s, and 14s single rules (that is, a single control observation exceeding limits set at 2.5s, 3s, 3.5s, or 4s), all with N's of 2. The design strategy was to hold N constant for all tests and vary the control rules as necessary to obtain the desired performance.
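
A minimal sketch of the simulation idea, assuming independent Gaussian control observations and a single-rule procedure with N = 2 (an illustration of the approach, not the code used in the published study):

```python
import random

def simulated_p_reject(limit_sd, n_per_run, shift_sd, n_runs=100_000, seed=1):
    """Estimate the probability that at least one of n_per_run control
    observations falls outside +/- limit_sd (in SD units) when a systematic
    shift of shift_sd (also in SD units) is present."""
    rng = random.Random(seed)
    rejected = sum(
        any(abs(rng.gauss(shift_sd, 1.0)) > limit_sd for _ in range(n_per_run))
        for _ in range(n_runs)
    )
    return rejected / n_runs

# Points on the power curve for the 13.5s rule (one observation beyond 3.5s), N=2
for shift in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(f"SE = {shift:.1f} s -> P(reject) = {simulated_p_reject(3.5, 2, shift):.3f}")
```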

5. Assess the probabilities for rejection.

QC procedures were selected with the objective of achieving 90% detection of the critical systematic errors while maintaining false rejections below 5%, preferably near 1%. The potential performance of the candidate QC procedures was assessed using critical-error graphs. A summary critical-error graph for systematic error is shown below:

[Figure: summary critical-error graph for systematic error, showing the power curves for the candidate control rules with N=2 and the critical-SE points for the individual tests.]
The graph shows only 4 tests because the other 14 have critical-SE's greater than 4.0 and are therefore off the scale of the graph. Those 14 tests are easy to control because all of the candidate QC procedures will achieve at least 90% error detection for such large critical-SE's. The problem tests are albumin, chloride, total CO2, and calcium, where the critical errors fall on the rising portions of the power curves.

6. Select control rules and N

For sodium, potassium, glucose, BUN, creatinine, phosphorus, uric acid, cholesterol, total protein, total bilirubin, GGT, ALP, AST, and LD, error detection of 90% or greater can be achieved with 3.5s control limits and N's of 2. Choosing the 3.5s control limits, rather than the 3s limits, should reduce false rejections to essentially zero. For albumin, 90% error detection can be achieved with 2.5s control limits and N=2. False rejections will be 2-3%. Chloride and total CO2 are more difficult to control. Use of 2.5s control limits and N=2 provides approximately 55-65% detection of the critical-SE. Multi-rule QC procedures would be advantageous here to improve the error detection. Calcium is a problem! The critical-SE would be detected only about 30% of the time. For routine service, the quality of this test was assured by using special procedures, which included making duplicate measurements on every patient specimen to improve precision and provide better control of analytical variation.
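
Under the simplifying assumption of independent Gaussian control observations, the per-run rejection probabilities behind these selections can also be approximated analytically. The sketch below roughly reproduces the figures quoted above: better than 90% detection for sodium with 3.5s limits, about 90% for albumin and roughly 60% for chloride with 2.5s limits, poor detection for calcium, and 2-3% false rejection with 2.5s limits and N = 2.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_reject(limit_sd, n, shift_sd=0.0):
    """P(at least one of n control observations falls outside +/- limit_sd,
    in SD units) given a systematic shift; shift_sd = 0 gives false rejection."""
    p_obs = (1.0 - phi(limit_sd - shift_sd)) + phi(-limit_sd - shift_sd)
    return 1.0 - (1.0 - p_obs) ** n

print(f"Sodium,   3.5s, N=2, dSEcrit=4.27: Ped = {p_reject(3.5, 2, 4.27):.2f}")
print(f"Albumin,  2.5s, N=2, dSEcrit=3.04: Ped = {p_reject(2.5, 2, 3.04):.2f}")
print(f"Chloride, 2.5s, N=2, dSEcrit=2.20: Ped = {p_reject(2.5, 2, 2.20):.2f}")
print(f"Calcium,  2.5s, N=2, dSEcrit=1.33: Ped = {p_reject(2.5, 2, 1.33):.2f}")
print(f"False rejection, 2.5s limits, N=2: Pfr = {p_reject(2.5, 2):.3f}")
```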

Note that only 4 different QC designs are required for the 18 tests on this analyzer. Individualizing the QC design doesn't require an endless number of different designs; 3 to 5 different designs are usually sufficient, as demonstrated here.

7. Adopt a total QC strategy

Increased maintenance and more frequent calibration would be appropriate for the problematic tests. Improvement in method imprecision would be desirable for these tests. For the other tests, the excellent precision of the methods provides for very cost-effective operation.

8. Reassess for changes

We have periodically reviewed these QC procedures and made adjustments for changes in quality requirements and method imprecision. In addition, we gradually increased the length of the analytical run to provide more cost-effective operation. Since the initial study, we've also upgraded to later models of the analyzer, requiring a complete reassessment of method performance and QC design. These reassessments have been easier because of the development of the OPSpecs chart and the QC Validator program.

Cost savings from improved QC designs

We had initially used multirule QC procedures on all tests on this analyzer because that's what we had done in the past on an earlier generation analyzer. Old QC practices are often carried over to new analyzers, because we assume the new system will have similar performance characteristics. However, improvements in precision and stability may permit more cost-effective QC procedures to be implemented, as demonstrated in this application.

The initial QC changes provided a cost savings of about $1,450 per month, or $17,400 per year, which projects to a savings of $87,000 over the expected 5-year lifetime of the instrument, as described and documented in the literature [2]. Later changes in run length increased the savings to $3,000 per month, or $36,000 per year, which projects to $180,000 over a 5-year lifetime. Thus, by selecting appropriate QC procedures, the necessary quality could be guaranteed and, at the same time, the cost of operation could be reduced, verifying Deming's principle that improved quality leads to improved productivity and reduced costs [3].
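
The projections are straightforward arithmetic on the monthly savings; a quick check using the figures quoted above:

```python
for change, per_month in [("initial QC redesign", 1450), ("longer run lengths", 3000)]:
    per_year = per_month * 12
    print(f"{change}: ${per_year:,}/year, ${per_year * 5:,} over 5 years")
# Prints $17,400 and $87,000; $36,000 and $180,000
```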

References

  1. Koch DD, Oryall JJ, Quam EF, Feldbruegge DH, Dowd DE, Barry PL, Westgard JO. Selection of medically useful quality control procedures for individual tests done in a multitest analytical system. Clin Chem 1990;36:230-233.
  2. Westgard JO, Oryall JJ, Koch DD. Predicting effects of QC practices on the cost-effective operation of a multitest analytical system. Clin Chem 1990;36:1760-1764.
  3. Westgard JO, Barry PL. Cost-Effective Quality Control: Managing the Quality and Productivity of Analytical Processes. Washington DC, AACC Press, 1986, p 2.