Tools, Technologies and Training for Healthcare Laboratories

Hitting a Goal, but Missing the Point?

A recent advisory report on the NHS, chaired by Dr. Don Berwick, has valuable lessons for all healthcare systems: sometimes we can hit a goal but miss the point. The report also warns that the opposite can be harmful: setting a goal we will never be able to hit. So we bring the issue to laboratory quality requirements: do we have goals that are too hard (or too easy) to hit?

Analytical Quality: Are we hitting our goals, but missing the point? (Or are we missing both our goals and the point?)

Sten Westgard
August 2013

2013 has been a year of turmoil and change for the National Health Service (NHS) in the United Kingdom. Scandalous care at the Mid Staffordshire Trust prompted multiple investigations and inquiries that revealed not only errors and system failures, but outright negligence in care. Coming at a time when the NHS is undergoing profound transformations in structure, delivery and funding, these revelations have left nearly every aspect of the service in flux.

Enter into this fray a new report from the National Advisory Group on the Safety of Patients in England. This report, titled A promise to learn - a commitment to act, provides both a sobering assessment of current NHS challenges and shortcomings and practical, impassioned recommendations for improvement. While some of the NHS's problems and their possible solutions are unique to its circumstances, the report is largely universal in its application. Healthcare systems around the world would do well to examine and ponder its conclusions.

The report is also notable in that it was chaired by Dr. Don Berwick, a former head of the Centers for Medicare and Medicaid Services (CMS) in the US, as well as the head of the Institute for Healthcare Improvement (IHI). Dr. Berwick's long commitment to patient safety and healthcare reform lends a greater weight to the report's conclusions. One can read this report as written not just for the NHS, but for the US as well.

A few choice quotations:

"Incorrect priorities do damage. The Mid Staffordshire tragedy and wider quality defects in the NHS seem traceable in part to a loss of focus by at least some leaders on both excellent patient care and continual improvement as primary aims of the NHS....In some organisations, in the place of the prime directive, "the needs of the patient come first", goals of (a) hitting targets and (b) reducing costs have taken centre stage. Although other goals are also important, where the central focus on patients falters, signals to staff, both at the front line and in regulatory and supervisory bodies, can become contaminated. Listening to and responding to patients' needs then become, at best, secondary aims. Bad news becomes unwelcome and, over time, it is too often silenced. Under such conditions organisations can hit the target but miss the point."....

"Use quantitative targets with caution. Goals in the form of such targets can have an important role en route to progress, but should never displace the primary goal of better care. When the pursuit of targets becomes, for whatever reason, the overriding priority, the people who work in that system may focus too narrowly. Financial goals require special caution: they reflect proper stewardship and prudence, but are only a means to support the mission of the NHS: healing."....

"Give the people of the NHS - top to bottom - career-long help to learn, master and apply modern methods for quality control, quality improvement and quality planning. The NHS has an obligation to assure their growth and support"....

"While 'Zero Harm' is a bold and worthy aspiration, the scientifically correct goal is 'continual reduction'. All in the NHS should understand that safety is a continually emerging property, and that the battle for safety is never 'won'; rather, it is always in progress."

In the NHS, overattention to hitting targets - such as the four-hour A&E (known in the US as ER) waiting time - pushed staff into managing numbers instead of patients. The Initial UK Government Response to the Mid Staffordshire Trust Public Inquiry states: "Targets and performance management in places overwhelmed quality and compassion....Pressure to perform and fear of failure led to a controlling and defensive approach from organisations." At Mid Staffs, nurses would falsify the waiting times entered in patient records in order to make the target.

How does this relate to the laboratory?

Labs face the same pressures as the rest of the hospital. Financial targets and Turn-Around-Time (TAT) targets are the predominant concerns of most laboratories. Assumptions about the performance of methods are more commonly made than assessments of the actual quality of testing being performed. Compliance mentality in the lab leads to "quality" that can be documented to an inspector, but not delivered in reality to a patient.

On a more specific level, even for laboratories that are attempting to assess and assure quality, the question of goals, targets and requirements is a true challenge. The main question for labs is, "What is the Right Goal for this Test?" While the 1999 Stockholm hierarchy identifies an approach, the ideal quality goals - ones developed based on the evidence of how the test results are actually used by clinicians - are rare indeed. Even for well-studied assays that have been used in medicine for decades, such as sodium, there is still wide disparity in the actual goal that should be used for quality.

Example: What's the Right Goal for Sodium?

Sodium: acceptance criteria / quality requirements

  • CLIA: ± 4 mmol/L
  • Desirable Biologic Goal: ± 0.9%
  • RCPA: ± 3 mmol/L at < 150 mmol/L; ± 2% at > 150 mmol/L
  • Rilibak: ± 5%
  • SEKK: ± 5%
  • Spanish Minimum Consensus: ± 5%

This table is adapted from a study by Friedecky, Kratochvila and Budina in a 2011 article in CCLM. You can see that the Desirable Biologic "Ricos goal" is quite small for sodium, while Rilibak, SEKK, and the Spanish Minimum Consensus set the same wider goal of 5%. The latter three sources set their goals by a process akin to "waiting for the arrow to land before painting the target around it": they set the goals so that 90% of labs can achieve them. Thus, no matter what the performance of the assay is, the vast majority of labs pass. The "Ricos goal," in contrast, is derived entirely from an evidence-based methodology, without regard to whether there are methods on the market that can hit such a small target.

Where can we find data on method performance?

Here is a tricky situation. While every lab has data on their own performance, it's very hard to find data about other instruments outside your own laboratory. If we want true apples-to-apples comparisons of performance, moreover, we need to find a study that evaluates different instruments within the same laboratory under similar conditions. Frankly, those studies are few and far between. Not many labs have the time or the space to conduct head-to-head studies of different methods and analyzers. I've seen a lot of studies, but few gold-standard comparisons.

Proficiency Testing (PT) surveys and External Quality Assessment (EQA) schemes can provide us with a broad view, but the data from those reports is pooled. Thus, we don't get individual laboratory standard deviations; we get instead the all-group SD or perhaps the method-specific group SD. As individual laboratory SDs are usually smaller than group SDs, the utility of PT and EQA reports is limited. If we want a better idea of what an individual laboratory will achieve with a particular method, it is better to work with data from studies of individual laboratories.
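The reason group SDs run larger than individual-lab SDs is that pooled data folds between-laboratory differences (calibration bias, for instance) into the spread. A toy illustration with made-up numbers, not data from any survey:

```python
import statistics

# Three hypothetical labs, each with the same within-lab spread
# but slightly different means (i.e. different biases), in mmol/L
lab_means = [138.0, 140.0, 142.0]
replicates = [-1.0, 0.0, 1.0]   # deviations each lab sees around its own mean

pooled_results = [m + d for m in lab_means for d in replicates]

within_lab_sd = statistics.stdev(replicates)   # what one lab experiences
group_sd = statistics.stdev(pooled_results)    # what the pooled EQA report shows

print(f"within-lab SD: {within_lab_sd:.2f}, group SD: {group_sd:.2f}")
```

The group SD (about 1.94) is nearly double the within-lab SD (1.00), even though no single laboratory is that imprecise.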

In the imperfect world we live in, we are more likely to find individual studies of instrument and method performance: a lab evaluates one instrument and its methods and compares it to a "local reference method" (usually the instrument it is going to replace). While we can get Sigma-metrics out of such studies, their comparability is not quite as solid. It's more of a "green-apple-to-red-apple" comparison.

Given that caveat, here are a few studies of major analyzer performance that we've been able to find in recent years:

  • Évaluation des performances analytiques du système Unicel DXC 600 (Beckman Coulter) et étude de la transférabilité des résultats avec l'Integra 800 (Roche Diagnostics), A. Servonnet, H. Thefenne, A. Boukhira, P. Vest, C. Renard. Ann Biol Clin 2007;65(5):555-62.
  • Validation of methods performance for routine biochemistry analytes at Cobas 6000 series module c501, Vesna Supak Smolcic, Lidija Bilic-Zulle, Elizabeta Fisic, Biochemia Medica 2011;21(2):182-190.
  • Analytical performance evaluation of the Cobas 6000 analyzer – special emphasis on trueness verification. Adriaan J. van Gammeren, Nelley van Gool, Monique JM de Groot, Christa M Cobbeart. Clin Chem Lab Med 2008;46(6):863-871.
  • Analytical Performance Specifications: Relating Laboratory Performance to Quality Required for Intended Clinical Use. [cobas 8000 example evaluated] Daniel A. Dalenberg, Patricia G. Schryver, George G Klee. Clin Lab Med 33 (2013) 55-73.
  • The importance of having a flexible scope ISO 15189 accreditation and quality specifications based on biological variation – the case of validation of the biochemistry analyzer Dimension Vista, Pilar Fernandez-Calle, Sandra Pelaz, Paloma Oliver, Maria Jose Alcaide, Ruben Gomez-Rioja, Antonio Buno, Jose Manuel Iturzaeta, Biochemia Medica 2013;23(1):83-9.
  • External Evaluation of the Dimension Vista 1500 Intelligent Lab System, Arnaud Bruneel, Monique Dehoux, Anne Barnier, Anne Bouten, Journal of Clinical Laboratory Analysis 2012;23:384-397.
  • Six Sigma metrics used to assess analytical quality of clinical chemistry assays in a reference laboratory, Koen Hens, Frank Heyvaert, Dave Armbruster, Sten Westgard, in submission to the Belgian Clinical Chemistry journal

These studies in turn can give us multiple data points (for instance, CV for high and low controls) for some instruments and we can attempt to evaluate these instruments' ability to achieve different quality requirements.
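The Sigma-metric behind these evaluations is conventionally computed as (TEa − |bias|) / CV, with all three terms in percent at the same concentration. A minimal sketch, using illustrative numbers that are not drawn from any of the cited studies:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric = (TEa - |bias|) / CV, all terms in percent at the same level."""
    if cv_pct <= 0:
        raise ValueError("CV must be positive")
    return (tea_pct - abs(bias_pct)) / cv_pct

# A hypothetical sodium method with 0.5% bias and 0.7% CV,
# judged against three of the goals discussed above:
goals = [("Ricos desirable", 0.9), ("CLIA at ~135 mmol/L", 3.0), ("Rilibak", 5.0)]
for name, tea in goals:
    print(f"{name}: sigma = {sigma_metric(tea, 0.5, 0.7):.1f}")
```

The same method scores roughly 0.6 Sigma against the Ricos goal, 3.6 against CLIA, and 6.4 against Rilibak, which is exactly why the choice of quality requirement matters so much.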

Can any instrument hit the Desirable Biologic TEa Goal for Sodium?

Here is a Method Decision Chart using the "Ricos Goal" of 0.9%. Please note, we're using the Normalized Method Decision chart to make our evaluation.

[If you're not familiar with Method Decision Charts and Six Sigma, follow the links here to brush up on the subject.]
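On a Normalized Method Decision chart, each method's observed bias and CV are re-expressed as a percentage of its own TEa goal, so methods judged against different requirements can share one chart. A sketch of that normalization, with an illustrative (not study-derived) sodium method:

```python
def normalized_point(bias_pct, cv_pct, tea_pct):
    """Express a method's bias and CV as percentages of its TEa goal."""
    return 100 * abs(bias_pct) / tea_pct, 100 * cv_pct / tea_pct

# Hypothetical sodium method: 0.5% bias, 1.0% CV, against the 0.9% Ricos goal
bias_coord, cv_coord = normalized_point(0.5, 1.0, 0.9)
print(f"bias = {bias_coord:.0f}% of TEa, CV = {cv_coord:.0f}% of TEa")
```

Here the CV coordinate alone exceeds 100% of the allowable error, so the point plots beyond the chart's boundary, which is what "off the chart" means below.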

[Figure: 2013-Sodium-Comparison-Ricos-NMEDx - Normalized Method Decision chart, Desirable Biologic ("Ricos") goal]

As you can see, all methods are literally "off the chart." No current method on the market has the ability to achieve high performance, given the tightness of the desirable biologic specification. In this case, however evidence-driven this goal is, it's not a practical goal for today's laboratories. It doesn't allow labs to differentiate between the performance of any of the manufacturers, nor does it give labs any ability to try and make up for poor performance through better QC. Labs trying to adhere to this goal would need to run patient samples multiple times and use an impractical number of controls.
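How impractical? Averaging n replicates shrinks the effective CV by a factor of √n, so we can back-solve for the replicates needed to reach even a modest 3 Sigma. A sketch under the assumption of a bias-free method with a typical 1.0% sodium CV (illustrative values, not from the cited studies):

```python
import math

def replicates_needed(tea_pct, bias_pct, cv_pct, target_sigma=3.0):
    """Smallest n such that (TEa - |bias|) / (cv / sqrt(n)) >= target_sigma."""
    margin = tea_pct - abs(bias_pct)
    if margin <= 0:
        return None   # no amount of averaging can overcome the bias
    return math.ceil((target_sigma * cv_pct / margin) ** 2)

# Bias-free method, 1.0% CV, against the 0.9% Ricos goal:
print(replicates_needed(0.9, 0.0, 1.0))   # 12 replicates just to reach 3 Sigma
```

Twelve measurements per patient sample is clearly not a workable routine, whereas against the 5% Rilibak goal a single measurement already suffices.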

Can any instrument hit the Rilibak interlaboratory comparison TEa goal for Sodium?

[Figure: 2013-Sodium-Comparison-Rilibak-NMEDx - Normalized Method Decision chart, Rilibak goal]

Here we see a dramatic change in fortunes. Indeed, there is considerable differentiation between the performance of methods. Some methods are in the Six Sigma zone, others are not. Some show a wide variability in performance, others show more consistency.

Can any instrument hit the CLIA TEa Goal for Sodium?

[Figure: 2013-Sodium-Comparison-CLIA-QR4units-NMEDx - Normalized Method Decision chart, CLIA goal]

In this graph, two of the Roche points are quite favorable, which can be explained by the fact that they were measured well below the reference range, at 30 and 40 mmol/L. At those levels, the fixed ± 4 mmol/L CLIA goal works out to roughly 13% and 10%, much larger than the "Ricos" or Rilibak goals. This is one reason the criticism is sometimes leveled at CLIA that its goals are too large. Around the normal reference range for sodium, however, the CLIA goal is closer to 2-3%, which is neither as large as Rilibak's nor as small as the "Ricos goal." At a level of 135 mmol/L, for example, the CLIA goal is about 3%; at a higher level of 200 mmol/L, it declines to 2%. In this sense, the CLIA goal might be most useful as a median goal. [Note that in some studies no level was given for the sodium measurement, and a 3.0% goal was therefore applied.]
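Because the CLIA sodium goal is a fixed ± 4 mmol/L, the equivalent percentage goal shrinks as the measured concentration rises. A quick check of the figures quoted for the reference-range levels:

```python
def clia_sodium_tea_pct(level_mmol_l, goal_mmol_l=4.0):
    """Convert CLIA's fixed +/- 4 mmol/L sodium goal to a percent TEa at a given level."""
    return 100 * goal_mmol_l / level_mmol_l

for level in (135.0, 200.0):
    print(f"{level:.0f} mmol/L -> TEa of {clia_sodium_tea_pct(level):.1f}%")
```

This prints about 3.0% at 135 mmol/L and 2.0% at 200 mmol/L, matching the values in the paragraph above.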

Conclusion

With quality requirements, no single source is both perfect and practical. Laboratories need to be careful when selecting quality goals: they must choose all the goals they are mandated to meet (in the US, labs must be able to achieve CLIA targets), while also making sure their goals are practical and in harmony with the actual clinical use of the test in their health system. While biologic variation goals are preferable in many cases, we concur with Friedecky et al:

"From the data it is clear that it would be undesirable to derive the size of the acceptance limits from the biological variability of electrolytes (with the exception of potassium), proteins, creatinine, and albumin."

From our perspective, a goal that no method can hit is not useful, except perhaps to instrument researchers and design engineers, for whom it becomes an aspirational target for designing the next generation of methods. Neither is it helpful to use a goal that every method can hit - that may only indicate we have lowered the bar too much, when actual clinical use could be tightened and improved to work on a finer scale.

There are some who find the issue of quality requirements entirely too frustrating. The lack of agreement or uniformity can lead some to despair. Basically, some labs, exasperated by the choice, prefer to make no choice at all and avoid the issue entirely. But operating a laboratory without choosing quality goals is like driving a car in the dark without headlights: avoiding the decision may seem easier in the short run, but ultimately it may lead to a dramatic, unpleasant crash. If you want to see your way down the road to quality, you need to choose goals and implement them.

Like so much in laboratory medicine, there is still a lack of standardization and harmonization, and a healthy debate about what needs to be achieved. In future articles, we hope that we can make it easier to see the differences in quality requirements, method performance and the practical choices that labs can make.
