Tools, Technologies and Training for Healthcare Laboratories

Perspectives on Analytical Quality Management, Part 5

A concluding note on the ongoing debate between the metrological measurement uncertainty approach to quality and the allowable total error / sigma-metric approach. While there is ultimately room for both, when one side attempts to abolish the other, there is no alternative but vigorous struggle.

Perspectives on Analytical Quality Management

Part 5. Rethinking Metrology in the Age of Evidence-Based Practices

James O. Westgard, PhD
November 2022

I caught the darkness,
It was drinking from your cup.
I said is this contagious?
You said just drink it up.
-Leonard Cohen

In a recent editorial in CCLM [1], the journal Editors challenged the future of Statistical Quality Control with their support for a proposal by Panteghini et al [2] to rethink Internal QC in the traceability era [3].

They cited Panteghini’s recommendation to use fixed clinical limits on control charts, agreeing that the new model also “highlighted the need to take into consideration not only a single checked value but also ‘temporal trends’” [1]. As we explained in Part 4 of this series, properly designed SQC strategies can employ multirule procedures that provide simple “counting rules” (such as 2:2s, 4:1s, 6:x) to objectively assess temporal trends, rather than relying on casual inspection of QC results. We have also demonstrated elsewhere that multirule procedures can be designed to include trend rules for moving averages [4], and that multi-stage SQC strategies can be designed to periodically confirm acceptable quality at chosen and defined reporting intervals [5] for bracketed operation of continuous production processes.
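To make these counting rules concrete, here is a minimal sketch in Python (our illustration with hypothetical data, not code from the cited publications) that evaluates a sequence of control z-scores, where z = (observed value - target mean)/SD, against the 1:3s, 2:2s, 4:1s, and 6:x rules:

    def run_length(z, limit):
        """Longest run of consecutive z-scores beyond the same +/-limit
        (limit=0 counts consecutive results on the same side of the mean)."""
        best = count = 0
        prev_sign = 0
        for v in z:
            sign = 1 if v > limit else (-1 if v < -limit else 0)
            if sign != 0 and sign == prev_sign:
                count += 1
            else:
                count = 1 if sign != 0 else 0
            prev_sign = sign
            best = max(best, count)
        return best

    def multirule_flags(z):
        """Objective pass/fail checks for common multirule criteria."""
        return {
            "1:3s": any(abs(v) > 3 for v in z),  # single control beyond 3 SD
            "2:2s": run_length(z, 2) >= 2,       # 2 consecutive beyond same 2 SD limit
            "4:1s": run_length(z, 1) >= 4,       # 4 consecutive beyond same 1 SD limit
            "6:x":  run_length(z, 0) >= 6,       # 6 consecutive on same side of mean
        }

    # Hypothetical run showing a developing systematic shift:
    print(multirule_flags([0.3, -0.8, 1.2, 1.4, 1.1, 1.6, 1.3, 2.4]))
    # -> 4:1s and 6:x flag the trend; 1:3s and 2:2s do not

The point is that “temporal trends” become objective accept/reject decisions rather than judgments made by eye.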

The Editors’ second point emphasized the need for manufacturers to provide evidence of traceability for their measurement procedures and to provide “control materials as a qualified part of the measuring system.” That has nothing to do with a laboratory’s responsibility to “design QC procedures to verify the attainment of the intended quality of results,” as required by ISO 15189 [6]. The dilemma seems to stem from the requirements of the latest European IVD regulations, whose implementation has been delayed because such control materials are not yet available.

Their third point is that “the proposal by Panteghini et al stresses the need to take into higher consideration not only imprecision but also trueness (bias) of laboratory results, and this is a major concept to be emphasized in the traceability era.” If we may be so bold, this is perhaps the height of double-talk. Metrologists oppose the TAE/Six Sigma quality management approach because Total Analytical Error actually does consider both the trueness (bias) and imprecision observed in laboratory methods, whereas metrologists advocate that such biases be eliminated, corrected, or ignored so that measurement uncertainty (precision) can be the sole measure of performance. Bias is a fact of life in medical laboratories today: eliminating bias is mainly a responsibility of manufacturers; correcting bias is often illegal according to accreditation guidelines and government regulations; ignoring bias, or misrepresenting bias as a variance component, is the realm of metrology. The “one size fits all” metrology QC design ignores the actual performance of methods in individual laboratories, whereas the TAE/Six Sigma model provides individualized SQC strategies appropriate for the precision and bias observed in each laboratory.
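The arithmetic behind this point is simple. In the TAE/Six Sigma model, the sigma-metric is Sigma = (TEa - |bias|)/CV, with all terms in percent at the medical decision level, so the bias and imprecision observed in each laboratory directly determine how stringent its SQC strategy must be. A minimal Python sketch with hypothetical numbers:

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Margin between allowable total error (TEa) and observed bias,
        expressed in multiples of the observed imprecision (CV)."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical analyte with TEa = 6.0%:
    print(sigma_metric(6.0, 0.5, 1.0))  # 5.5 sigma: simple, low-frequency SQC suffices
    print(sigma_metric(6.0, 2.0, 1.0))  # 4.0 sigma: same CV, more bias, needs stringent SQC

Two laboratories with identical imprecision but different biases thus end up with different sigma-metrics and different SQC designs, which is exactly the individualization that a “one size fits all” design discards.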

Falsely Suggested Alternatives

The Editors point out that there are some 300 or more tests for which there are no reference methods and no reference materials, which supposedly is a reason for rethinking IQC in the traceability era. Perhaps it is a better reason for understanding the progression from Statistical Process Control to Statistical Quality Control. In the evolution of measurement procedures, the first stage is statistical process control, where the objective is to distinguish special causes (likely systematic errors) from common causes (random errors) in order to reduce the variability of the process. In this situation, the goal is to achieve stable and predictable performance. During this time, controls are analyzed to characterize the variation of the process; outliers and trends are used to identify sources of errors, which are then eliminated through process re-design until stable performance is considered acceptable for the intended use. After that, it becomes possible to impose statistical quality control to verify the attainment of the intended quality of results. All 300 or more of these tests are good candidates for statistical process control, even if they are not yet candidates for statistical quality control.
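As a minimal sketch of that first stage (hypothetical data; a deliberately simplified illustration of SPC, not a complete procedure), a process is characterized from a baseline run of control results and subsequent results are screened for candidate special causes:

    import statistics

    def spc_screen(baseline, new_results, k=3.0):
        """Stage 1 (SPC): characterize process variation from a baseline run,
        then flag candidate special causes (points beyond mean +/- k*SD).
        Flagged points are investigated and their causes removed by process
        re-design; the cycle repeats until performance is stable."""
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline)
        flagged = [x for x in new_results if abs(x - mean) > k * sd]
        return mean, sd, flagged

    # Hypothetical baseline run, then routine results with one aberrant value:
    mean, sd, flagged = spc_screen(
        baseline=[4.1, 4.0, 4.2, 3.9, 4.1, 4.0, 4.2, 3.8, 4.1, 4.0],
        new_results=[4.0, 4.2, 5.1, 3.9])
    print(round(mean, 2), round(sd, 2), flagged)  # 4.04 0.13 [5.1]

Only after this cycle converges to stable, acceptable performance does it make sense to impose SQC rules designed to verify the intended quality of results.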

The Editors think that another solution is PBRTQC (Patient-Based Real-Time Quality Control). Anyone who has read the recent publications on PBRTQC should recognize that these procedures are still quite complicated, most likely beyond the capability for practical application in routine service laboratories. Furthermore, these approaches will never be appropriate for many current tests because of the limited numbers of patient samples available, wide population distributions (total biologic SD for the population relative to the analytical SD), and the lack of uniform patient populations within a day and between days of the week. Even when PBRTQC procedures are employed, laboratories need traditional SQC when they start up their processes and also for troubleshooting any problems revealed by PBRTQC.
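For readers unfamiliar with what a PBRTQC calculation involves, here is a minimal sketch of one common statistic, a moving average of patient results with truncation limits to exclude extreme values. Every parameter below (window size, truncation limits, the potassium-like value range) is an illustrative assumption; tuning such parameters to each analyte and patient population is precisely the laboratory-specific setup work that makes these procedures complicated:

    from collections import deque

    def pbrtqc_moving_average(patient_results, window=50, lo=3.5, hi=5.5):
        """Moving average of patient results with truncation, one common
        PBRTQC statistic. window, lo, and hi are illustrative only; real
        values must be tuned per analyte and per patient population."""
        buf = deque(maxlen=window)
        averages = []
        for x in patient_results:
            if lo <= x <= hi:          # truncation: exclude extreme values
                buf.append(x)
            if len(buf) == window:     # report only once the window is full
                averages.append(sum(buf) / window)
        return averages

A shift in the reported averages beyond empirically derived control limits would signal a possible systematic change; setting those limits typically requires simulation against each laboratory's own patient distribution.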

Van Rossum, one of the leading advocates for PBRTQC, also recognizes the difficulties and limitations in implementation [8]: “PBRTQC procedures often need to be set up in a laboratory-specific manner. This contrasts with statistical iQC for which the statistical set-up, based solely on the analytical variation component and without the biological and pathological variation component, enables a general implementation across laboratories using universal statistical approaches.” An important limitation is that PBRTQC applications should focus on testing processes whose performance is 4.0 or less on the Sigma Scale. A critical conclusion is that “Although iQC is associated with limitations for certain analytes, it still constitutes the cornerstone of analytical QA/QC for most, if not all tests. For many tests, iQC itself provides adequate QA/QC of the analytical phase.”

Thus, the CCLM Editors’ suggestion that PBRTQC is the solution to our QC problems is grasping at straws, with the ultimate straws being machine learning and artificial intelligence as “promising” approaches. We may not understand what needs to be done or how to do it, but thankfully there will be a higher computer power that will take over at some time in the future.

What’s the problem?

It seems there are few useful applications of metrology for managing quality in the medical laboratory. It is only because of ISO 15189 accreditation (“it was drinking from your cup”) that laboratories have attempted to characterize MU (“you said just drink it up”). How do laboratories use MU to validate the performance of new methods? They don’t, because it would take too long to obtain a reliable estimate of MU. How do laboratories use MU to establish QC procedures that verify the attainment of the intended quality of results? They don’t. As pointed out in this series of essays, the metrologists’ recommendation to use fixed clinical control limits is not scientifically sound and reflects a leap backward in statistical QC technique. How do laboratories communicate MU to help physicians interpret test results? They don’t. Apparently that is the ultimate goal, but it is not clear how it will be accomplished or whether it will be well received by medical personnel.

In these practical matters, laboratories have benefited very little from metrology. The critical application of traceability is important for achieving more uniform test results across methods and laboratories, but must be aimed at manufacturers to achieve success. The important need for commutable control materials is for use in EQA/PT programs that should be focused on assessing the biases of laboratory measurement procedures. The responsibility of laboratories is to maintain stable performance on a daily basis and identify error conditions that must be corrected to provide patient test results that conform to the quality required for intended use. Laboratories may also provide estimates of MU from routine QC data if metrologists can ever agree on how to do this [9,10]. If manufacturers provide traceable methods, if EQA/PT programs provide reliable measures of biases, and if laboratories maintain stable in-control performance, the quality of patient test results can be improved. That is the partnership that is critical for the laboratories’ efforts in improving patient care.
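For completeness, one frequently discussed “top-down” approach to estimating MU from routine QC data (still under debate [9,10], and sketched here only as an illustration with hypothetical numbers) combines the long-term within-laboratory imprecision from routine IQC with the standard uncertainty of the calibrator value assignment supplied by the manufacturer:

    import math

    def top_down_mu(long_term_cv_pct, u_cal_pct, k=2):
        """Sketch of a 'top-down' MU estimate: combine long-term within-lab
        imprecision (from routine IQC) with the standard uncertainty of
        calibrator value assignment, in quadrature, then expand with
        coverage factor k=2 (~95%). Which components to include and how
        to estimate them remains debated [9,10]."""
        u_combined = math.sqrt(long_term_cv_pct**2 + u_cal_pct**2)
        return k * u_combined

    # Hypothetical: 2.0% long-term CV, 0.8% calibrator uncertainty
    print(round(top_down_mu(2.0, 0.8), 2))  # 4.31 -> expanded U of about 4.3%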

Metrology itself has achieved little so far in improving analytical quality management in the medical laboratory. After more than 20 years of the traceability era, the TAE model still predominates in laboratories for managing analytical quality, validating method performance, comparing performance between laboratories in PT/EQA programs, and selecting appropriate SQC strategies.

As Leonard Cohen also said, “At first, nothing will happen, And a little later, nothing will happen again.” That certainly seems to describe the contribution of measurement uncertainty to the quality management activities in medical laboratories today. Maybe we need to rethink metrology’s usefulness in laboratory quality management?

What’s the point?

There is no scientific evidence to support the proposal to set fixed clinical limits on control charts based on the Analytical Performance Specification (APS) for MU. Statistical theory predicts the approach will not work. Instead, statistical theory supports the approach of designing appropriate risk-based SQC strategies following the CLSI C24-Ed4 roadmap [11], which can be readily implemented using available QC planning tools [12].

SQC is and will continue to be a valuable tool for monitoring and managing analytical quality - if it is properly designed and correctly implemented. Improvements will continue to be made, as illustrated by Parvin’s development of a patient risk model for determining the frequency of QC events [13]. That model provides the theoretical basis for developing risk-based SQC strategies for optimizing the temporal monitoring of continuous production processes and verifying the attainment of the intended quality of test results. As Van Rossum has recommended [8], “using more advanced IQC practices that incorporate multi-QC level Westgard control rules, based on a sigma-metric or risk-based approach would provide a more valuable and powerful starting point for many tests compared to the standard 2SD [practice].”

That is, in fact, the goal of a new training manual that we have published [14]. Our approach is to follow the CLSI “roadmap” for planning SQC risk-based strategies but enhance that approach by providing practical QC planning tools that can be implemented in your laboratory.

References

  1. Plebani M, Gillery P, Greaves RF, Lackner KJ, Lippi G, Melichar B, Payne DA, Schlattmann P. Rethinking internal quality control: the time is now. Clin Chem Lab Med 2022. https://doi.org/10.1515/cclm-2022-0587.
  2. Panteghini M. Reply to Westgard et al.: “Keep your eyes wide… as the present now will later be past.” Clin Chem Lab Med 2022. https://doi.org/10.1515/cclm-2022-0527.
  3.  Braga F, Pasqualetti S, Aloisio E, Panteghini M. The internal quality control in the traceability era. Clin Chem Lab Med 2021;59:291-300.
  4. Bayat H, Westgard SA, Westgard JO. Multirule procedures vs moving average algorithms for IQC: An appropriate comparison reveals how best to combine their strengths. Clin Biochem 2022. https://doi.org/10.1016/j.clinbiochem.2022.01.001.
  5. Westgard JO, Bayat H, Westgard SA. Planning risk-based SQC schedules for bracketed operation of continuous production analyzers. Clin Chem 2018;64:289-296.
  6. ISO 15189. Medical Laboratories – Requirements for quality and competence. Geneva: ISO, 2012.
  7. Westgard JO, Bayat H, Westgard SA. How to evaluate fixed clinical QC limits vs. risk-based SQC strategies. LTE: Clin Chem Lab Med 2022; https://doi.org/10.1515/cclm-2022-0539.
  8. Van Rossum HH. Technical quality assurance and quality control for medical laboratories: a review and proposal of a new concept to obtain integrated and validated QA/QC plans. Crit Rev Clin Lab Sci 2022. https://doi.org/10.1080/10408363.2022.2088685.
  9. Coskun A, Theodorsson E, Oosterhuis WP, Sandberg S, on behalf of the European Federation of Clinical Chemistry and Laboratory Medicine Task and Finish Group on Practical Approach to Measurement Uncertainty. Clin Chim Acta 2022;531:352-60. https://doi.org/10.1016/j.cca.2022.04.1003.
  10. Panteghini M. The simple reproducibility of a measurement result does not equal its overall measurement uncertainty. Clin Chem Lab Med 2022. https://doi.org/10.1515/cclm-2022-0618.
  11. CLSI C24-Ed4. Statistical Quality Control for Quantitative Measurement Procedures: Principles and Definitions. Wayne, PA: Clinical and Laboratory Standards Institute, 2016.
  12.  Bayat H, Westgard SA, Westgard JO. Planning risk-based SQC strategies: Practical tools to support the new CLSI C24-Ed4 guidance. J Appl Lab Med 2017;2:211-221.
  13. Parvin CA. Assessing the impact of the frequency of quality control testing on the quality of reported patient results. Clin Chem 2008;54:2049-2054.
  14. Westgard JO, Bayat H, Westgard S. Advanced QC Strategies: Risk-Based Design for Medical Laboratories. Madison, WI: Westgard QC, Inc., 2022.