Tools, Technologies and Training for Healthcare Laboratories

Quality in an Era of Untruth

James O. Westgard reflects on the unique circumstances of our time, and how Quality is more important than ever.


James O. Westgard, PhD
January 2019

Please note, this essay contains politics.

Beginning a new year, we typically take time to reflect on the past and identify hopes for the future. We usually focus on technical considerations and issues in quality management, but those boundaries seem too restrictive in the current environment where “untruths” and ethical issues are prominent in public discourse and political discussions. We are told today that there are “alternative facts” and that the “truth isn’t truth” anymore. We have a President who has told some 7600 untruths according to the Fact Checker database, averaging 15 erroneous claims per day in 2018. Several government officials are under investigation for ethics violations and lying to the FBI. A leading cancer researcher recently resigned because he failed to disclose financial ties to businesses and industry. A medical device company became the subject of the best seller “Bad Blood” that revealed fraud and unethical behavior in laboratory testing.

In this period of untruth, we fear that quality also suffers and may be further compromised in the future. Quality is in many ways related to truth. We have written in the past that the truth standard – “to tell the truth, the whole truth, and nothing but the truth” – can be applied to quality. The translation is as follows: a laboratory test must have a defined requirement for quality (the truth) that is satisfied by the precision and bias of the analytical method (the whole truth) and guaranteed to be achieved by the QC procedure (the nothing-but-the-truth aspect of quality).

Quality requirements have traditionally been defined in terms of an Allowable Total Error (ATE) that reflects the combined effect of precision and bias on a single measurement. This concept was introduced in the 1970s as part of the structure and system for validating the performance of analytical methods. Sources of these requirements are the acceptability criteria in proficiency testing and external quality assessment programs, recommendations based on biologic variability, and clinical treatment guidelines. The “Ricos database” has been the standard source for biologic goals in the form of allowable precision, allowable bias, and allowable total error. One significant change is that the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) has taken over the biologic variation database following the 2015 Milan conference and subsequent activities by its working groups. Under the new guidelines for performing studies of biologic variation, it is expected that the goals will be tighter and more demanding. And it is not clear whether the new database will include goals for ATE, which we think is a severe limitation. It has now been 4 years since the database was updated, and it is not clear when the official EFLM database will be available. That is one of the reasons why we will continue to support the 2014 database, free of commercial sponsorship, here on Westgard Web. And we will post comparisons with the new EFLM goals when they become available.
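To make the biologic-variation goals concrete, here is a minimal Python sketch of the commonly cited “desirable” goal formulas: allowable imprecision is 0.5 × CVI, allowable bias is 0.25 × √(CVI² + CVG²), and the allowable total error combines them at a 95% limit. The glucose figures in the example are illustrative assumptions, not values quoted from the database.

```python
import math

def biologic_goals(cv_i: float, cv_g: float) -> dict:
    """Desirable analytical goals derived from within-subject (CVI) and
    between-subject (CVG) biologic variation, all expressed in percent."""
    cv_allow = 0.5 * cv_i                              # allowable imprecision
    bias_allow = 0.25 * math.sqrt(cv_i**2 + cv_g**2)   # allowable bias
    tea = bias_allow + 1.65 * cv_allow                 # allowable total error (95%)
    return {"CVa": cv_allow, "Ba": bias_allow, "TEa": tea}

# Illustrative figures only (roughly glucose-like): CVI = 5.6%, CVG = 7.5%
goals = biologic_goals(5.6, 7.5)
```

Note that the resulting TEa is a derived goal; proficiency-testing criteria or clinical guidelines may impose a different (often wider) requirement for the same analyte.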

Precision and bias are two of the primary analytical characteristics that need to be validated when implementing testing processes in medical laboratories. For example, the US CLIA regulations require that laboratories demonstrate they can achieve the precision and bias claimed by the manufacturer. ISO 15189 requires that laboratories select examination procedures that are validated for their intended use. In ISO-speak, “examination procedures” refers to analytical methods and “intended use” refers to quality requirements. The ISO requirement is therefore better focused on the truth in terms of clinical need, as compared to the CLIA requirement, which is aimed at the manufacturer’s claimed performance.

Measurement uncertainty is an important part of the ISO requirements and is regarded there as the ultimate quality characteristic of a testing process. We have had an ongoing discussion about the pros and cons of measurement uncertainty (MU) vs total analytical error (TAE) for several years now. The current state is a standoff in which both concepts are recognized as useful for quality management, but with ISO proponents still favoring MU over TAE [1]. The main point of disagreement is whether bias needs to be measured: it is included in TAE but not in MU. It is interesting to see that AACC is sponsoring a global effort for “harmonization,” which is focused on achieving agreement of results between different laboratory methods. Essentially that means compensating for between-method biases by agreeing on a relative rather than an absolute standard. In contrast, ISO focuses on traceability and attempts to standardize methods to achieve the true value. CLIA has no provision for traceability, no requirement for MU, and defines acceptability criteria for PT in the form of an allowable total error. Thus, the battle between MU and TAE reflects different regulatory requirements in different countries, as well as different theories of measurement. We believe that the principles of Six Sigma Quality Management advance the Total Error theory, are consistent with industrial process capability indices [2], and provide practical tools for analytical quality management, particularly for the design of SQC procedures [3].
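The connection between Total Error and Six Sigma can be shown with the Sigma metric, which compares the quality requirement to the observed method performance on the same scale. A minimal sketch; the TEa, bias, and CV values in the example are hypothetical, not taken from any particular method.

```python
def sigma_metric(tea: float, bias: float, cv: float) -> float:
    """Sigma metric on the total-error scale: (TEa - |bias|) / CV,
    with all terms in percent at the same medical decision level."""
    return (tea - abs(bias)) / cv

# Hypothetical method: TEa = 10%, bias = 1%, CV = 1.5% -> (10 - 1) / 1.5 = 6.0
sigma = sigma_metric(10.0, 1.0, 1.5)
```

A method at 6 sigma tolerates simple, low-N control rules; as sigma falls toward 3, progressively more stringent QC is needed to detect medically important errors.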

SQC practices were the subject of an updated CLSI document in 2016 [4] and a survey of 21 academic medical centers in 2018 [5]. The CLSI guidance says that SQC procedures should be designed based on the quality required for intended use, the precision and bias observed for the method, the statistical rejection characteristics of the control rules and numbers of control measurements, and the risk of harm to the patient if an erroneous result is reported. The theory is complicated, but simple graphical tools make it possible to design such risk-based SQC procedures [6]. Existing practices, however, show no evidence of being properly designed. The 21 academic labs showed surprisingly common use of 2 SD control limits, but wide variation in the frequency of SQC, i.e., how often controls are analyzed. The survey makes no mention of the existing CLSI guidance for planning SQC procedures, yet suggests the need for a consensus approach for developing evidence-based SQC practices. It is difficult to assess whether this reflects a failure of CLSI to make its guidance practical or a lack of awareness that such guidance is available. Nonetheless, this situation indicates a sorry state of SQC practices in US laboratories today, and we are developing an approach for establishing evidence-based SQC practices [7].
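The sigma-based design logic can be caricatured in a few lines of code. This is a simplified sketch of the published “Westgard Sigma Rules” rules of thumb, not the full risk-based procedure in the CLSI guidance; the exact rule sets and numbers of control measurements should be taken from the guidance itself.

```python
def select_sqc(sigma: float) -> tuple:
    """Rule-of-thumb SQC selection from the Sigma metric: returns a
    (control rules, number of control measurements) pair. A sketch of
    the Westgard Sigma Rules heuristic, not the CLSI C24 procedure."""
    if sigma >= 6.0:
        return ("1:3s", 2)                         # single wide-limit rule
    if sigma >= 5.0:
        return ("1:3s/2:2s/R:4s", 2)               # small multirule
    if sigma >= 4.0:
        return ("1:3s/2:2s/R:4s/4:1s", 4)          # larger multirule, more N
    return ("1:3s/2:2s/R:4s/4:1s/8:x", 6)          # maximum QC, consider more runs
```

The point of the sketch is the direction of the logic: the better the method performance relative to the requirement, the simpler and less frequent the QC can be; indiscriminate 2 SD limits ignore this entirely.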

References

1. Westgard JO. Error methods are more practical, but uncertainty methods may still be preferred. Clin Chem 2018;64:636-638.
2. Westgard SA, Bayat H, Westgard JO. Mistaken assumptions drive new Six Sigma model off the road. Biochem Med (Zagreb) 2019; in press. https://doi.org/10.11613/BM.2019.010903
3. Westgard SA, Bayat H, Westgard JO. Selecting a risk-based SQC procedure for a HbA1c Total QC Plan. J Diabetes Sci Technol 2018;780-785.
4. CLSI C24-Ed4. Statistical Quality Control for Quantitative Measurement Procedures: Principles and Definitions. Clinical and Laboratory Standards Institute, 950 West Valley Road, Suite 2500, Wayne PA, 2016.
5. Rosenbaum MW, Flood JG, Melanson SEF, et al. Quality control practices for chemistry and immunochemistry in a cohort of 21 large academic medical centers. Am J Clin Pathol 2018;150:96-104.
6. Westgard JO, Bayat H, Westgard SA. Planning risk-based SQC schedules for bracketed operation of continuous production analyzers. Clin Chem 2018;64:289-296.
7. Westgard JO, Westgard SA. Establishing evidence-based statistical quality control practices. Am J Clin Pathol 2019 (in press). DOI: 10.1093/ajcp/aqy159