Tools, Technologies and Training for Healthcare Laboratories

CLIA Final Rule

IQCP - Does the P stand for Panic?

Pardon our political incorrectness: at the 2015 AACC/ASCLS conference, there were about as many different presentations on IQCP as there are GOP presidential candidates. With about the same amount of agreement. As US labs near the IQCP deadline, confusion is mounting, not coherence. If this isn't worrying you, you're not paying attention.


IQCP - Does the P stand for Panic?

James O. Westgard, PhD and Sten Westgard, MS
August, 2015

There were many discussions and presentations about Individualized Quality Control Plans (IQCP) at the national AACC/ASCLS convention in Atlanta last month. Notable activities included a well-attended symposium by The Joint Commission (TJC) and a CMS/CDC booth in the exhibition hall offering free copies of the guidance "Developing an IQCP: A step-by-step guide". Given that we are now in the last 6 months of the 2-year educational period for replacing EQC with IQCP, it is good to see the increased educational activity and the increased participation by laboratory personnel.


TJC appears to be following the CMS/CDC IQCP guide very closely, which means basically addressing three questions as its risk assessment methodology:

  • What are our possible sources of error?
  • Can our identified sources of error be reduced?
  • How can we reduce the identified sources of errors?

As we have discussed earlier, this approach is really only hazard identification (question 1), followed by a subjective decision about whether or not to do something about the identified hazard (question 2), followed by risk mitigation (question 3). No risk assessment is occurring, because there is no attempt to determine the seriousness of the identified sources of error, which depends on their frequency of occurrence, severity of harm, and detectability by controls. Instead, the second question is simply whether or not something might be done about an identified source of error, and an answer of "Yes" or "No" is being substituted for risk assessment and risk evaluation.

TJC seems to have adopted a very "flexible" attitude towards IQCP, recognizing that this is a new and subjective approach. Basically, the medical director of the laboratory is responsible for the quality of the IQCP, and any IQCP is likely to be acceptable to TJC inspectors as long as the medical director has signed the document. Any weaknesses in the IQCP can be corrected in updated versions, and there is little expectation that the laboratory will get the IQCP right the first time.

While this is probably a relief for laboratory managers, surely we can agree this would not be comforting to share with patients. Do we really want to tell them, "Well, we're trying Risk Management this year, but not that hard, and not that thoroughly. We figure if we make any mistakes this year, we'll correct them next year. Our inspectors have agreed to not be too tough on us this year, so thanks again for choosing us for your healthcare."

Is that what CLIA means when it requires laboratories to "detect immediate errors"?


The new CAP “all common checklist” is more demanding in its requirement that “the risk assessment must include a process to identify the sources of potential failures and errors for a testing process, and evaluate the frequency and impact of those failures and sources of errors.” That last part (evaluate the frequency and impact of those failures) at least implies that the frequency of occurrence and the severity of harm should be considered in the determination of risk. CAP references the CLSI EP23-A guidance for the use of this 2-factor risk assessment model, whereas we (and risk management practices outside of healthcare) recommend a 3-factor model that also considers the detection capability of the control procedures. We continue to express our disbelief that any risk assessment for the purpose of QC can somehow ignore the detection capabilities of the control procedures!
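The difference between the 2-factor and 3-factor models can be sketched numerically. The scores below are hypothetical illustrations (not taken from any CLSI or CAP document), using the common FMEA convention of 10-point scales where a higher detectability score means the error is harder to detect:

```python
# Hypothetical example: two error sources with identical frequency and
# severity scores, differing only in how well the QC procedures detect them.
# Scales are the usual 1-10 FMEA scores; for detectability, a HIGH score
# means the error is POORLY detected by existing controls.

def criticality(frequency, severity):
    """2-factor model (frequency x severity)."""
    return frequency * severity

def rpn(frequency, severity, detectability):
    """3-factor risk priority number (frequency x severity x detectability)."""
    return frequency * severity * detectability

# Reagent drift that daily QC catches reliably (detectability = 2) ...
caught = criticality(5, 6), rpn(5, 6, 2)   # (30, 60)
# ... versus the same drift on a device with no routine QC (detectability = 9)
missed = criticality(5, 6), rpn(5, 6, 9)   # (30, 270)

print(caught, missed)
```

The 2-factor model cannot tell these two situations apart (both score 30), while the 3-factor model ranks the poorly controlled one 4.5 times higher, which is exactly why dropping detectability matters.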

CAP further specifies that:

"The QC study performed to assess the performance and stability of the tests must support the QC frequency and elements defined in the laboratory quality control plan. The study must include data representing, at a minimum, the maximum interval between runs of external quality control. The laboratory may use historical data during the risk assessment for tests already in place."

This suggests that the stability of the testing process must be documented by control data, which is more specific than the guidance from TJC and CMS/CDC. However, inspectors are told to look for the sources of information used to perform the risk assessment, rather than for the proper use of laboratory data in risk assessment. Otherwise, CAP follows the CMS guidance to evaluate potential sources of errors in all phases of the Total Testing Process; the effects of reagents, environment, specimen, testing personnel, and test system; and the factors involved in different testing sites and clinical applications.

Any IQCP will do if it's "all right" with you!

IQCP has been promoted as "The Right QC" for you and your laboratory. However, it is difficult to see any objectivity or rigor in the emerging guidelines for IQCP from CMS/CDC, TJC, or even CAP! After 10 years of discussion about getting rid of EQC and replacing it with something "better" (risk-based QC), the regulatory and accreditation organizations have recognized that formal risk assessment methodology is "too difficult" for laboratories, given the current state of their knowledge and experience. We might hope that the initially qualitative, subjective, and arbitrary identification and evaluation of critical error sources will be improved and corrected in the future. However, experience shows that when CLIA sets a minimum requirement, practice is more likely to stagnate at that level than to become more rigorous in subsequent revisions. Remember when CMS postponed and postponed and postponed the "manufacturer's QC clearance" provision of CLIA? They eventually eliminated the QC clearance provision and, as a result, had to develop a new policy to cover POC devices. That policy was called EQC. EQC, as you may recall, was so poorly justified scientifically, and so poorly accepted by manufacturers, that CMS decided the regulations had to be eliminated. And that is how IQCP came about: a sequence of poor regulations followed by tepid follow-through. Do we really expect anything different from IQCP?

In this age of quality compliance, regulatory minimums become de facto standards of practice, regardless of their actual effectiveness.

Simpler Risk Management is not Safer Risk Management

A recent study provides an even more sobering analysis of the risks of poor risk management:

McElroy LM, Khorzad R, Nannicelli AP, Brown AR, Ladner DP, Holl JL. Failure mode and effects analysis: a comparison of two common risk prioritisation methods. BMJ Quality and Safety 2015; online first, 1-8. doi: 10.1136/bmjqs-2015-004130

In this study, the authors compared a traditional industrial application of the failure mode and effects analysis (FMEA) risk management tool with a more simplified version. The traditional FMEA method uses "10-point scales to assign scores for each failure attribute (ie frequency, severity, safeguards), a risk priority number (RPN), calculated as the product of three scores (ie, frequency x severity x safeguard), is used to rank failures." The more simplified version uses "abridged scales for the three failure attributes" and, instead of a calculated RPN, generates a three-tiered verdict: high, medium, or low risk.
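To make the comparison concrete, here is a small sketch of how an abridged three-tier scale loses the resolution that the RPN provides. The hazard scores, the abridgement rule, and the tier cut-offs are all illustrative inventions, not the actual rules used by McElroy et al.:

```python
# Hypothetical scores on the traditional 10-point scales; the abridgement
# and the tier cut-offs below are invented for illustration only.

def rpn(frequency, severity, safeguard):
    """Traditional FMEA risk priority number: product of the three scores."""
    return frequency * severity * safeguard

def abridge(score):
    """Collapse a 10-point score onto a 3-point scale."""
    return 1 if score <= 3 else 2 if score <= 7 else 3

def verdict(frequency, severity, safeguard):
    """Three-tier verdict from the abridged scores (invented cut-offs)."""
    total = sum(map(abridge, (frequency, severity, safeguard)))
    return "high" if total >= 8 else "medium" if total >= 5 else "low"

# Two hazards the RPN clearly separates...
print(rpn(5, 9, 2), verdict(5, 9, 2))   # 90  medium
print(rpn(5, 5, 5), verdict(5, 5, 5))   # 125 medium  <- same tier, ranking lost
```

Both hazards land in the same "medium" tier even though their RPNs differ by more than a third, which mirrors the kind of disagreement the study observed between the two scoring approaches.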

The attraction of the simplified risk scale is easy to state: while the study required 45 hours to train staff in the traditional FMEA technique, the abridged version required only 11 hours. It should be noted that this is a marginal decrease in the total effort, since the study standardized the amount of time spent identifying the hazards: "Approximately 80 h[ours] were required to conduct the observations and to create the process map and template hazard table." The fact that the process requires such an intensive investment of hours to fully document the hazards and risks is a telling statement in itself. If each instrument is going to require more than 100 hours of staff time for a thorough hazard analysis, very few laboratories in the US are going to be able to fulfill this regulation. For many devices, it would actually take less time and effort to simply run controls every day.

The question the authors examined is, of course, whether a simplified risk assessment technique causes hazards to be missed, risks to be underestimated, and other dangers to be generated. It turns out, sadly, that the answer is yes. While the traditional scoring method and the simplified scoring method achieved high congruence when using the RPN, that congruence rate fell to only 50% when criticality index scores were compared. In 6 of 12 cases where the more thorough traditional scoring method rated a hazard as high risk, the simplified method rated that same hazard as only medium risk. That is, the two methods had about the same chance of agreeing as a coin flip, and when they disagreed, the simpler method was missing higher risks.

What is even more worrisome: in laboratory testing, the traditional industrial FMEA approach was abandoned early as too complex. Then the third factor of detectability (what this study referred to as "safeguards") was dropped in favor of a two-factor criticality. EP23 provided simplified examples of criticality matrices, which are very similar to the simplified methods examined in this FMEA study.

To make things worse, the CMS step-skip-step manual on "Developing an IQCP" doesn't even require simplified FMEA criticality scoring. Risk is reduced to a simple "Yes/No" without any real analysis. So the Risk Assessment being proposed now is beyond simplified; it's simplistic to the point of being short-sighted. Let's be clear: there is no short-cut to proper Risk Management. If you think you can spend only an hour and get an acceptable result, you're deceiving yourself and shifting risks onto clinicians and patients.

What's the risk of implementing an inadequate regulation on an unprepared marketplace? Do we really want to find out?