
IQCP: Does the P stand for Placebo?

The momentum behind IQCP is accelerating. Companies are launching education initiatives and new software programs; webinars, workshops, and online tutorials abound. But does anyone really understand what an IQCP is yet? Does anyone know what the P stands for? Problem? Plan? Placebo?

IQCP: What does the P stand for? 

Sten Westgard, MS
June 2014

Placebos are well-known in healthcare. In Latin, Placebo means roughly “I shall please.” Google lists two interesting definitions:

  • A substance that has no therapeutic effect (often used as a control in testing new drugs)
  • A measure designed merely to calm or please someone

The Placebo Effect is well known in medicine: simply giving someone a sugar pill (without telling them it’s a sugar pill) can help alleviate their problems. In industry, there’s a similar phenomenon called the Hawthorne Effect. Simply by trying to improve things, sometimes we do.

Why are placebos on my mind? We’re nearing the annual conference of the AACC and ASCLS in Chicago, and it looks like this will be the year of the Individualized Quality Control Plan (IQCP). There will be workshops, new software programs and tools, and lots of “solutions” marketed to laboratories for the new “Risk QC” protocols. Most of these offerings are well-meaning and led by experts with good intentions. But the EQC-to-IQCP crisis is an entirely self-inflicted wound. When the government failed to enact appropriate QC regulations with the CLIA Final Rule, it slapped a Band-Aid over the problem with “Equivalent Quality Control” (EQC). Not surprisingly, EQC didn’t have a scientific foundation, didn’t work, and wasn’t accepted by the laboratory marketplace. Now IQCP has been proposed as its replacement, and labs have less than two years to switch from EQC to the new IQCP.

The mystery about IQCPs is that there are no technical details on how these risk assessments actually work. You can find a lot of detail about how an IQCP should be structured, and what components should be addressed, but there is almost nothing mentioned in the current regulatory memos about the most appropriate technique to assess the Risk of each failure mode, what level of Risk is acceptable, and how to equate the level of risk with a frequency of QC. Currently, there’s no practical formula to explain how Risk X translates to Frequency Y for QC. When pressed, CMS states that all of these details will be cleared up in 2016, when the regulations go into effect and the State Operations Manual (SOM) is updated. In the meantime, we’re stuck without details.
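To make the missing piece concrete, here is a minimal sketch of the kind of rule the memos never give us. Everything in it is an assumption of mine for illustration: the FMEA-style risk priority number (severity × occurrence × detectability), the thresholds, and the resulting QC frequencies are all invented, not anything CMS or CLSI has endorsed.

    # Hypothetical sketch only: no regulation or guideline defines this mapping.
    # It borrows the generic FMEA idea of scoring severity, occurrence, and
    # detectability on 1-5 scales, then picks a QC frequency from made-up thresholds.

    def risk_priority_number(severity, occurrence, detectability):
        """Score each factor from 1 (best case) to 5 (worst case)."""
        for score in (severity, occurrence, detectability):
            if not 1 <= score <= 5:
                raise ValueError("each factor must be scored 1-5")
        return severity * occurrence * detectability  # ranges from 1 to 125

    def qc_frequency(rpn):
        """Translate a risk priority number into a QC frequency (illustrative thresholds only)."""
        if rpn >= 60:
            return "run QC every shift"
        if rpn >= 25:
            return "run QC daily"
        return "run QC weekly"

    # Example failure mode: erroneous result from a clogged sample probe.
    rpn = risk_priority_number(severity=4, occurrence=3, detectability=3)
    print(rpn, qc_frequency(rpn))  # 36 -> run QC daily

The numbers are made up; the point is simply that a defensible IQCP ultimately needs an explicit rule of exactly this shape, and no current regulatory memo supplies one.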

And that’s obviously frustrating – we have less than two years to prepare, but we won’t truly know the regulations until the day they kick into effect. Until then, CMS will not actually comment on whether a software program correctly assesses risk, or whether your nascent attempts at risk management are adequate. While we’re in the transition period, no one is going to officially approve or disapprove of your Risk QC. In other words, you undertake Risk QC at your own Risk. You’ll only find out whether the software you bought, or the IQCP you built, is acceptable in 2016, when the inspector actually shows up to sift through your risk assessment.

This might be very troubling if it turns out that IQCP is going to be interpreted very strictly. But the evidence is mounting that inspections of IQCPs might not be very strict at all.

The summary report of the CLIAC meeting of March 5-6, 2014, notes:

A member asked how laboratories’ Individualized Quality Control Plans (IQCPs) will be evaluated by surveyors and whether there will be a process for formally addressing subjective disagreements. Ms. Yost [ed. Head of CMS] replied that surveyors will be looking for the five risk assessment components, whether the entire testing process has been addressed, documentation that supports the process, and the director’s signature. If all parts of the IQCP have been addressed, CMS plans to accept what the laboratory director approves. They will then look at outcomes that result once the IQCP has been implemented before citing a laboratory for IQCP-related deficiencies. If a laboratory is subsequently cited and it disagrees with the survey findings, the issues may be addressed in the wrap-up meeting, on the written response to any deficiencies cited, with the CLIA state agency director, or the CMS regional office.

My reading of this is that IQCPs are about checking off a list of topics in a document, not actually meeting any particular level of risk assessment, quality, or performance. Let’s stress that one sentence: “If all parts of the IQCP have been addressed, CMS plans to accept what the laboratory director approves.” The cynic in me translates that statement into a low bar indeed: as long as your IQCP has the right table of contents and the right signature, you’ll be in the clear. The inspector isn’t going to challenge the laboratory director’s judgment, at least until problems start showing up with patient results. Perhaps the “P” stands for Patient – the real out-of-control signal will come when patients start being hurt.

So if our IQCPs are really just formalities, exercises in creative document formatting, why are we doing this?

And then I saw a recent article about Safety Checklists:

Rydenfält C, Ek Å, Larsson PA. Safety checklist compliance and a false sense of safety: new directions for research. BMJ Qual Saf 2014;23:183-186.

While checklists have a great reputation and some admirable results in improving patient care, this study looked at institutions that were using the checklists and tried to ascertain just how compliant they were in fulfilling all the requirements of safety checklists. Unfortunately, the details are not pretty:

“The compliance rate reported in [checklist] studies could at best be considered as moderate. Rydenfält et al report a compliance of the timeout part of 54%....In the study by Cullati et al, the mean percentage of validated checklist items in the timeout was 50% and in the sign out 41%....[T]his raises the question: do safety checklists used with this level of compliance really make practice safer?”

The problem that Rydenfält et al note with safety checklists is that they are ultimately only symbolic barrier systems. That is, even when a checklist is available, it can only protect patients when the surgical team remembers to use it. A safety checklist is not foolproof, nor is it as strong as a physical barrier system – something that actually is highly effective at protecting patients.

Nevertheless, there is huge support for the use of safety checklists. The study authors conclude:

1. “The checklist is a weak type of safety barrier that is easily put out of function and is vulnerable to normalisation of deviance, especially those parts that are not perceived as important to all users.

2. The checklist provides gains in safety but those gains are threatened from demands for efficiency, resulting in safety gains being transformed into production gains. As a result, other barriers against patient harm could be perceived as being replaced by the checklist and thus ignored in order to improve production.”

You could easily substitute “IQCP” for “checklist” and those statements would still be true. An IQCP, done correctly, could make significant gains for safety (although those gains might come at the expense of running more controls, requiring more competency, etc.). But an IQCP is extremely vulnerable to poor execution. If inspectors won’t really stress-test IQCPs, there’s little incentive to make them robust.

Recall that much of the motivation behind EQC and IQCP is really about production gains – labs and manufacturers want to maximize the output of their tests and devices while minimizing the frequency of quality control. IQCPs provide a mechanism that looks good, feels good, and pleases us. It’s highly possible that many of our earnest Risk Assessment efforts will produce improvements in our operations. But when IQCP is done poorly, as it seems likely it will be, it’s really just a placebo. It’s all risk, no control.

In the 1999 sci-fi movie The Matrix, at one key moment the hero, Neo, is faced with a choice: he can take a blue pill or a red pill. The blue pill will allow him to stay in an artificial, delusional reality, while the red pill will reveal to him how the world truly works. His mentor, Morpheus, explains:

“You take the blue pill – the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill – you stay in Wonderland, and I show you how deep the rabbit hole goes. Remember, all I’m offering is the truth – nothing more.”

There’s a similar choice ahead: you can believe that qualitative, subjective, compliant-but-inadequate IQCPs solve all your problems and allow you to run QC just once a week or once a month without endangering patients, or you can use more data-driven risk assessment techniques to truly determine the performance of your tests – and compare that performance with the quality required by your patients.
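To give one concrete example of what “data-driven” could look like (the choice of technique here is my assumption, not something spelled out above), a sigma-metric calculation compares the performance you actually observe against the quality your patients require, expressed as an allowable total error:

    # Minimal sketch of one data-driven option: the sigma metric,
    # Sigma = (TEa - |bias|) / CV, with every term expressed in percent.
    # The assay numbers below are invented for illustration.

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Compare observed performance (bias, CV) with the allowable total error (TEa)."""
        if cv_pct <= 0:
            raise ValueError("CV must be a positive percentage")
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Hypothetical assay: allowable total error 10%, observed bias 2.0%, observed CV 2.0%
    sigma = sigma_metric(tea_pct=10.0, bias_pct=2.0, cv_pct=2.0)
    print(f"Sigma = {sigma:.1f}")  # Sigma = 4.0 -> marginal performance; QC should not be minimal

A high sigma (five or six) gives objective support for relaxing QC; a low sigma argues for more QC, not less. That is exactly the kind of evidence a placebo IQCP never asks for.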

Which P will you take?

[Important Note: While our frustration with the policy of IQCP is undoubtedly clear, please don't take this as a comment on any of the people involved in the process of developing or implementing these regulations and guidelines. As we all know, you can act with the best of intentions and put tremendous effort into a project, but within the bureaucracy of an institution, the outcome is not what you as an individual would have hoped. The consensus process for Risk QC is a perfect storm of sorts: manufacturers who don't want to invest more in quality devices, laboratories who feel enormous pressure to crank out quick numbers instead of quality results, and healthcare systems desperate to find cost savings somewhere, anywhere. Combine all of that, and no individual has the power to stop what has emerged for IQCPs.]