
Every so often reality delivers a vicious refutation of the myth of continuous forward progress. While we often assume (and hope) that the arc of the universe bends towards justice, progress, unity, and peace, it is never safe to take for granted that civilizations will always choose to take a step forward. There is always a very real possibility that countries and societies will choose instead to take a step backward.

QuExit? TExit? or IQCPexit?


July 2016
James O. Westgard and Sten A. Westgard

In an age of advancing technology and global connectivity, why does it seem that people are more eager than ever to tear each other apart?

Despite all the economic benefits and net gains from the European Union, it is undeniable that the expansion of unfettered capitalism has produced a significant number of losers: people whose economic fortunes have declined rather than improved due to globalization, while at the same time a smaller “few” have amassed what appears to be an ever-increasing share of wealth, power, and privilege. When the excluded and alienated “many” are given a chance to sabotage those “few”, it shouldn’t surprise us that they act out of rage rather than well-reasoned, rational deliberation.

So it is that the “Brexit” occurred, with a thin but significant majority of the votes being cast in favor of the United Kingdom seceding from the European Union. In the wake of this political upheaval, the leadership of the major UK political parties resigned or ran away, leaving a new set of officials to work out the terrible details of fracturing the connection between the United Kingdom and Europe.

But we in laboratory medicine are not far from making irrational departures ourselves. Indeed, there appear to be mounting challenges to the existing order of quality management. And if laboratories act emotionally rather than rationally, we may find ourselves regressing in quality, not progressing. We’re in the era where a Quality Exit – QuExit – is being proposed. Some labs may not realize the consequences of such a significant change.

IQCPexit?

Despite carrying the full force of law, the implementation of IQCP has been met with more frustration and less enthusiasm than was advertised. Instead of empowering labs to choose “The Right QC” as CLSI’s EP23 promises, IQCP appears to be simply a very time-consuming paperwork exercise that allows laboratories to maintain the same QC they were doing back in the EQC era.

We’ve recently completed a survey of more than 200 labs that reported on their IQCP experiences. We won’t repeat all the findings here, but to give a succinct summary: it’s mostly been a "waste of time", an exercise in paperwork to justify current practices, with very little change occurring in QC practices. We still don’t know if the regulators are going to be happy with these new IQCPs, but we’re pretty certain that most laboratories have found little use for them. You can see the full survey breakdown here.

IQCP has been marketed as a step forward, but really it’s a new cover for an old problem. When “Equivalent QC” protocols proved untenable, a new justification had to be found to allow labs to perform QC just once a month. Rather than attempt any quantifiable justification, the amorphous but well-regarded concepts of Risk Management were summoned. Now, with the proper forms filled out, a lab can justify reducing its QC frequency without any real evidence that the risk is minimal.

The concepts of Risk Management, indeed many of the tools of Risk Assessment, when used properly, can actually provide a useful analysis of a method’s strengths and weaknesses. But the current IQCP is really a piece of regulatory theater – it makes everyone look like they’re trying hard, even when the real quality issues are hardly being addressed at all.
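
To make that concrete, here is a minimal sketch of one such tool: a severity-by-probability risk acceptability matrix in the spirit of CLSI EP23 and ISO 14971. The scales, scores, and acceptability cutoff below are our own illustrative assumptions, not values taken from either document.

    # Score each failure mode by the severity of the harm it could cause and the
    # probability of that harm occurring, then judge the product against an
    # acceptability threshold. The 1-5 scales and the cutoff of 8 are assumptions.

    SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4, "catastrophic": 5}
    PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "probable": 4, "frequent": 5}

    def risk_score(severity: str, probability: str) -> int:
        """Simple risk score: severity rating times probability-of-harm rating."""
        return SEVERITY[severity] * PROBABILITY[probability]

    def risk_acceptable(severity: str, probability: str, threshold: int = 8) -> bool:
        """Judge a failure mode against an assumed acceptability cutoff."""
        return risk_score(severity, probability) < threshold

    print(risk_acceptable("serious", "remote"))    # True  (score 6: acceptable under these assumptions)
    print(risk_acceptable("serious", "frequent"))  # False (score 15: demands mitigation, e.g. more QC)

Used honestly, an exercise like this forces a lab to name its failure modes and defend its QC choices; used as IQCP paperwork, the same table simply gets filled in to rubber-stamp whatever QC frequency the lab already wanted.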

TExit?

The campaign to eliminate Total Error, despite its nearly half a century of widespread utility, continues at the hands of a few aggrieved metrologists. While the Milan Consensus of 2014-2015 was hardly earth-shattering – its most notable change, the reorganization of the five-level hierarchy into a three-level hierarchy, is actually a minor step forward – buried in the committee work was an attempt to supplant total analytic error with measurement uncertainty. In the face of overwhelming evidence that labs calculate MU only when forced to by ISO, that they don’t report MU, and that clinicians don’t request MU, use MU, or change their medical decisions based on MU, the metrologists cling to their belief that there can be only one true error statistic and all others must be purged. This magical thinking allows them to assert that by eliminating TE, labs will become more enthusiastic about MU, and manufacturers will then be forced to improve their methods faster. There is no evidence that MU can be more effective at designing control procedures, evaluating methods, ranking manufacturers, or pointing out the failures of diagnostic manufacturers. Currently, Sigma-metrics and TE are doing all of those things quite handily. But the proponents of MU seem to prefer dwelling on theoretical issues rather than contributing to, or commenting on, practical matters.
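
For readers who want the arithmetic behind that claim, here is a minimal sketch of the two calculations in question, using the conventional formulas: observed total error TE = |bias| + 1.65*CV, and Sigma = (TEa - |bias|)/CV, with bias, CV, and the allowable total error TEa all expressed in percent. The example figures are invented purely for illustration.

    # Conventional Total Error and Sigma-metric calculations.
    # The bias, CV, and allowable total error (TEa) values below are hypothetical.

    def total_error(bias_pct: float, cv_pct: float, z: float = 1.65) -> float:
        """Observed total analytic error (%) at roughly 95% one-sided coverage."""
        return abs(bias_pct) + z * cv_pct

    def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
        """Sigma-metric: the quality 'budget' left after bias, in units of CV."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical assay: 2.0% bias, 2.0% CV, judged against a 10% allowable total error.
    print(round(total_error(2.0, 2.0), 2))         # 5.3 (% observed total error)
    print(round(sigma_metric(10.0, 2.0, 2.0), 2))  # 4.0 (Sigma)

The practical payoff is QC design: the higher the Sigma, the fewer control rules and control measurements a lab needs to detect medically important errors. That is exactly the kind of tool TE already provides today.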

What’s particularly striking about this argument is that the MU-vement to eliminate TE has no mirror opposite on the other side. No one is advocating that MU be eliminated or purged. We are not arguing for a MU-xit. Indeed, MU and TE provide complementary support to the quality movement. The most recent arguments, more nuanced and thoughtful than the “us vs. them” manifestos, note that not only is there a place for TE, there is even official support for it in the VIM and GUM. Metrologists are quick to quote the passages of the VIM and GUM that argue for a TE-less, purely MU-ed world. But even those hallowed tomes of measurement purity acknowledge the existence of bias and the practical need to handle it with terms such as TE.

These damaging actions – lashing out blindly to eliminate useful techniques or calculations – can have a major long-term impact. As the Brexit hangover has shown, the stark reality of separating from the European Union is going to be quite costly and painful. Similarly, if TE were totally eliminated tomorrow, there would be severe consequences for laboratories. First, all proficiency testing and external quality assurance programs would have to restructure how they send out samples, how they express performance specifications, and how they judge the results of their events. Second, labs would quickly discover that there is not just one way to calculate MU, but many. While the simplest MU comes from taking a multiplier of the intermediate imprecision (2*CV, essentially), there are many other models: top-down, bottom-up, models that include an uncertainty of bias, and models that seek to include uncertainties of the pre-analytical and post-analytical phases, aiming toward a holistic estimate of diagnostic uncertainty. The passion of the metrologist modellers is undeniable, but it often feels as though they would rather keep developing a more and more precise model than spend any time in the real world developing a practical tool.
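
To illustrate how quickly the models multiply, here is a minimal sketch contrasting the simplest estimate (expanding the intermediate imprecision alone, with a coverage factor of 2) with a common top-down model that folds in an uncertainty-of-bias term before expanding. The imprecision and bias-uncertainty figures are assumptions made up for this example.

    import math

    # Two of the many MU recipes: imprecision-only versus bias-inclusive.
    # Both expand the combined standard uncertainty with a coverage factor k = 2.

    def mu_simple(cv_pct: float, k: float = 2.0) -> float:
        """Expanded uncertainty (%) from intermediate imprecision alone."""
        return k * cv_pct

    def mu_with_bias(cv_pct: float, u_bias_pct: float, k: float = 2.0) -> float:
        """Expanded uncertainty (%) combining imprecision with an uncertainty-of-bias term."""
        return k * math.sqrt(cv_pct**2 + u_bias_pct**2)

    # Same hypothetical method: 1.5% intermediate CV, with an assumed 1.0% uncertainty of bias.
    print(mu_simple(1.5))                    # 3.0 (%)
    print(round(mu_with_bias(1.5, 1.0), 2))  # 3.61 (%)

Same method, two defensible MU values, and that is before anyone reaches for the pre-analytical and post-analytical terms.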

We are reminded of a very short story by Jorge Luis Borges, called “On Exactitude in Science,” about an Empire where the practice of Cartography reached its zenith, such that the map of a Province was as large as a City, and the map of the Empire was as large as a Province. Yet the pursuit of ever more accurate maps continued unabated, until the map-makers sought to make a map that was the exact replica of the territory it was meant to represent. That is, each inch of reality would be replicated faithfully by an inch of map. While this map was hailed as the ultimate achievement by the Guild of Cartographers, by all others it was deemed useless, and the following Generations tore it to shreds and abandoned it to the winds.

In our worst moments, we in the laboratory quality community resemble that story, pursuing the passion of models over the creation of practical tools for real laboratories. The point is not for one model to “win” or “lose” but for the laboratory, and indeed the care we provide to patients, to improve. To have a perfect model that can never be implemented is a failure. To have an “impure” model that is practical and can help improve patient care, that’s success.