Tools, Technologies and Training for Healthcare Laboratories

CLIA Final Rule

Is this the end of EQC 1, 2, 3?

The Current State of Equivocal QC Practices

This article is an attempt to capture the state of opinion about EQC. As such, it is very dependent on the context of this particular moment (June 2005). Opinions and positions have been fluid. It is likely that in the coming months there will be further changes in attitude and adoption of EQC. We therefore do not claim that we have determined the definitive mood on this issue. All we can assert is that we have grabbed a snapshot.


June 2005


Something quite unusual happened at the March 18, 2005 workshop on “Quality Control for the Future” that was held in Baltimore under the auspices of CLSI (formerly NCCLS). Also billed as a workshop on “Equivalent QC”, there was no attempt to defend the three proposed EQC procedures. Rather, CMS together with AdvaMed launched a fourth option for alternative QC that is reminiscent of the 1992 CLIA provision for FDA clearance of a manufacturer’s QC instructions.

Two years have passed since the Final CLIA Rules were issued and it's been just a year since the three “Equivalent QC” (EQC) procedure options were introduced in the State Operations Manual (SOM) Interpretive Guidelines. During that time, there has been much discussion of the value and scientific validity of the EQC recommendations. CMS has been explaining and promoting EQC at meetings, audioconferences, and in the trade media – and trying to dispel the confusion and doubt created by these new EQC options.

Current opinions about EQC

Quite a lot has been said about EQC in the past several months (see below), almost none of it supportive. Even Judy Yost, MA, MT, Director of CMS Division of Laboratory Services, has said this:

“After we released the three EQC options in our CLIA surveyor guidelines, it wasn't long before we started hearing concerns. We had calls and e-mails from laboratories, manufacturers, professional groups, and accrediting organizations....There is innovative and new technology that we did not anticipate, and we have created some policies that are perhaps inconsistent with what the accrediting organizations have.” [4]

“Although the idea of EQC has merit, numerous questions regarding the statistical basis for EQC have arisen. Additionally, some organizations and individuals in the lab community feel that reduced QC is inadequate, while others feel that no external QC is necessary if the test system incorporates internal quality checks. Newer technologies that were not anticipated by EQC complicate this mix.” [5]

Or, as she put it more succinctly at the Baltimore workshop, “We blew it” [2].

From the very moment the EQC options were introduced, and indeed even before they were announced to the general public, there has been opposition to EQC – even casual readers of Westgard Web know that we have gone so far as to rename them “Equivocal QC Practices”.

So the question is, with all of this vocal opposition to options 1 through 3, what is going to happen with them? Are they going to be eliminated? Altered? Replaced by this Option 4? And what should you be doing about them?

What the Professional Organizations are saying:

Most of the laboratory press has noted that the professional organizations “have adopted a 'wait-and-see' attitude with respect to adopting EQC.” [3] But let's review who is waiting at this time:

AACC has no formal position on Equivocal QC. “The Association has served rather as a source of information for its members and the laboratory community on this and other regulatory components of CLIA. Last year, for example, the EQC issues were discussed during audioconferences in the spring and fall and covered in Clinical Laboratory News.”[6]

CLMA has no official position on Equivocal QC...yet. When asked, officials at CLMA have expressed interest in developing a position in the future.

CAP stated in 2004 that the current checklists gave the laboratory the responsibility and flexibility to work with Equivocal QC options. The checklists require that the laboratory perform a validation study (which would be equivalent to one of the “evaluation protocols” of Equivocal QC), but do not specify the details of the study. Stephen Sarewitz, MD, FACP, stated at the July 14, 2004 audioconference that “Each laboratory must develop its own validation protocol based on its setting, its experience and the particular device it is using.” [7]

At the 2005 CLSI meeting, CAP went further, however, and “recommended that federal officials err on the side of stricter quality control standards and put on hold CLIA 'equivalent quality control' (EQC) procedures.”

CAP also suggested, as put forward by Dr. Sarewitz, that the CMS “declare that reduced external QC would not be permitted for systems without internal controls and that only FDA-approved, unmodified tests of waived or moderate complexity, with internal controls, would be eligible for reduced QC....The college also recommended that, for systems with internal controls, CMS should require laboratories to perform appropriate validation studies before relying on internal controls as the only daily QC.”

“A minimum requirement for external QC should be set at two levels of control for every new lot number or shipment of reagents (or cards or cartridges); the need for more frequent external QC depends on the instrument and setting, the College said. The College recommended that QC for unit-use devices should follow the recommendations of EP18-A (Quality Management for Unit-Use Testing), a CLSI standards document.”[8]

JCAHO provided an early, detailed, and conservative response to Equivocal QC. There has been some confusion in the field concerning their acceptance of the evaluation protocols. In 2004, they indicated that they were going to stick with a 60 day evaluation protocol. With the help of JCAHO officials, we have determined this is what actually happened:

JCAHO standards specifically address only electronic QC, and “current requirements have not been modified and are the same as previous years...The laboratory may determine the extent of the validation period, however, it must be performed for a statistically valid sample. A new footnote in the CAMLAB 2005 under QC.1.180 has been added that suggests 20 data points as a minimum.”

“In July 2004, JCAHO had published in Perspectives that we intended to modify these requirements by specifying a 60 data point validation and weekly external QC. Clearly, this was done to address concerns with regards to CLIA 2003 EQC options and represented the most stringent requirements. After field comment and further review, it was determined that the existing standard, as summarized above, sufficiently addressed EQC as it relates to electronic QC and the revised version was never implemented. It is possible that some people have misinterpreted this as a "change" in our requirements.”

However, if CMS insists, JCAHO may be forced to accept the Equivocal QC Option 1 of only 10 days, rather than their preferred 20 days.

The JCAHO official we spoke with said “We are concerned about EQC from both sides of the argument. We have both patient interests for quality laboratory services and customer interests for cost and resource efficiencies. We have struggled with understanding the applicability of EQC, how it relates to existing JCAHO standards, and how to survey it....I have not seen statistical evidence for support of EQC, although it is clearly derived from CLSI's guidelines for reduction from daily to weekly QC for susceptibility testing in microbiology. On a similar note, when CLSI reduced the validation from 30 days to 20-30 days, I only found a statement indicating that it still provided sufficient statistical validation. I did not locate any further evidence to support that, but I also did not question it. I suspect the rationale for changes in CLIA 2003 came from such evidence, but have no further information.”[9]

The JCAHO Professional and Technical Advisory Committee has also looked into the EQC recommendations. PTAC, which is composed of representatives from many professional laboratory organizations, including AACC, ASM, CLSI, and ASCLS, met in 2004; at that time, no organization other than CMS spoke in favor of EQC, and “the committee did not feel there was enough information available regarding EQC and, therefore, was not comfortable with integrating EQC options 1 & 2 into the JCAHO standards. No one has supported Option 3.”

ASCLS has appointed a task force to write a position paper on EQC and, more broadly, Quality Assurance. With the permission of Elissa Passiment, we were able to see the final draft of this paper. It contains perhaps the strongest official statement against EQC by a professional organization, and we quote at length below. But here are some highlights:

"ASCLS does not support the application of the EQC protocol in its entirety due to its lack of scientific or statistical basis upon which the guidelines were set.... ASCLS recommends that EQC alternative #2 be eliminated.... ASCLS recommends that EQC alternative #3 be eliminated....Ultimately the biggest risk of EQC is a reduction in positive patient outcomes." [10]

This position paper will be voted on at the 2005 House of Delegates at their Annual Meeting next month.

What's it all mean?

The tide of opinion seems to be turning against EQC options 1 – 3, and in the absence of any scientific data to back these approaches, we may see them eliminated, altered, or replaced by new options. The only thing that may remain of the original “Equivalent QC” options is their name, “Equivalent QC.”

The year 2005 was supposed to be the end of the introductory educational period for EQC. However, it now appears that the educational period will be extended – so whether or not you use options 1, 2, or 3, you probably won't get cited, but you may receive an “educational letter” about it. The format for such a letter was posted on the CMS website (the link is now broken). Note the following statements:

“…since the publication of the 2003 final regulations and accompanying guidelines, CMS has identified innovations in technology and received input from technical experts that may lead to further modifications of QC policies in our interpretative guidelines. CMS is also undertaking a number of processes to acquire additional information, data and scientific input relative to such QC and technological advances in order that our policies will reflect these innovations.

“Therefore, so long as laboratory directors, at a minimum, review manufacturers’ QC instructions, find those instructions to reasonably monitor the accuracy of the analytic process and the laboratory then follows those manufacturers’ instructions, we plan to continue the educational process noted above until any merited changes are incorporated into our guidelines, for the QC requirements contained in the 2003 modifications of the CLIA regulations.”

Therefore, for the immediate future, regulatory compliance most clearly depends on adhering to the manufacturer’s QC instructions or the CLIA minimum QC (2 levels per day). If you decide to do something different, you may be cited, but the follow-up will be educational until the EQC options are finalized.

“Option 4” looms on the horizon, and the committee creating this new option is on a tight and disciplined schedule to get a draft document out in 2006. That document must then go through the CLSI consensus process, which will likely take another year. Therefore, it is likely to be 2 years until the option for “alternative QC” becomes a reality.

What to do?

In this interim time period, options 1, 2, and 3 are still on the table, but most likely will not be strictly enforced. We have the unknown Option 4 that may change everything, and then we have everything else that was developed in the eleven years between CLIA's beginning in 1992 and the Final EQC options of 2003. It's a bit like the Wild West right now – nearly anything goes, depending on the attitude of your inspector. In times like these, we need to adhere to our professional standards and do what’s right. Compliance with transient regulations is a recipe for frustration. A commitment to pursue excellence will yield better results.

The current EQC has succeeded in one unusual way: it has generated nearly unanimous agreement that the current options are not satisfactory and that something else must be developed. Now we wait and see if Option 4 can satisfy manufacturers, regulators, the laboratory professional organizations, and, most importantly, the analysts who must take responsibility for the quality of the tests performed.


What people are saying about EQC:

In case you haven't been able to read recent coverage or couldn't attend the Baltimore meeting, here's a recap of what's been said this year about the EQC options:

“Nobody is particularly overjoyed with the approach that CMS has taken with quality control.” Carolyn Jones, JD, MPH, AdvaMed Associate VP, Technical and Regulatory Affairs. [1]

"When the CMS (the Centers for Medicare & Medicaid Services) issued their new quality control guidelines in 2003, industry was really concerned about the way they had incorporated equivalent quality control (EQC) with the three options they had offered.” Luann Ochs, MS, Chair of AdvaMed's CLIA working group, Chair of CLSI Area Committee on Evaluation Protocols [1]

“EQC allows you to do something less rigorous than what we are doing today. QC is my responsibility as a lab director, and I know that if my staff doesn't want to deal with not only the upfront work [ed. the evaluation protocols] but the hassle of handling failed runs [ed. recalling a week or a month of test results in out-of-control situations], then we'll probably stay with the current QC protocols for now. In that regard, we can't lose, because we're doing much more than we need to do.” Valerie Ng, Ph.D., M.D., Director of the University of California San Francisco's Clinical Laboratory at San Francisco General Hospital.[2]

“As [EQC] is currently proposed, it is a very poor approach and it's doubtful whether it would achieve the ends that CMS had in mind.” Ronald H. Laessig, PhD, Director of the Wisconsin State Laboratory of Hygiene and Professor of Population Health Sciences at the University of Wisconsin-Madison. [2]

“Very simply, if the instrument's internal QC does not encounter a failure during the evaluation process, there is no assurance (except the manufacturer's promise) that the internal monitoring system, in fact, is able to detect a quality-system failure.” Sharon Ehrmeyer, PhD, Professor at University of Wisconsin-Madison [3]

"At the moment, no one is pleased with the EQC options....If you look at the surveyor guidelines, I believe the approaches detailed there are scientifically difficult to explain and may be risky to implement, depending on the instrumentation." Gerald Hoeltge, MD, Treasurer of CLSI and CAP liaison to CLSI. [4]

"Those of us on the laboratory side were quite baffled about how to approach EQC. If you look at the three different options and how many times you do controls, and what you must do if QC fails, none of that seemed to make sense to any of us." Thomas L. Williams, MD, Director of the Department of Pathology at Nebraska Methodist Hospital, Omaha, member of Clinical Laboratory Improvement Advisory Committee. [4]

ASCLS Final Draft Position on EQC

"ASCLS does not support the application of the EQC protocol in its entirety due to its lack of scientific or statistical basis upon which the guidelines were set.

"Quality control, by CLIA’s own definition, is to detect immediate analytical error. By reducing the overall frequency of external QC testing the probability of detecting errors as they occur is reduced dramatically. By increasing the external QC testing interval the user increases the likelihood that an error will remain undetected.

"There is no scientific evidence indicating that all test systems falling into each respective category could be stable over the interval set out in each of those categories (e.g. 60 days of stability evaluation for a test system without internal/procedural controls). If the test systems could be stable over the designated interval, it is undocumented whether or not that amount of data would be sufficient to proclaim the system’s stability.

"EQC offers no protocol to make the decision if the instrument’s internal quality control monitors the “entire”, a “portion” or none of the analytical process which is difficult when the test system’s ability to detect error is not documented by the manufacturer or in published case studies.

"Further, the guidelines for the evaluation process set out that “acceptable” results indicate that the process has been successful. Acceptability is subjective by nature, leaving it to, in this case, the discretion of the laboratory director.

"ASCLS supports EQC alternative #1 with the following changes: The word “acceptability” is defined as “results falling within the claimed reproducibility statistics of the test system’s manufacturer.”

"ASCLS recommends that EQC alternative #2 be eliminated. The portion of a test system being tested by an internal/procedural control varies from one system to another. A test for liquid sensing would be a procedural control for a portion of the system but would be totally unrelated to the remainder of the test system which might exhibit higher sophistication and complexity.

"ASCLS recommends that EQC alternative #3 be eliminated. As written, the alternative could be applied to test systems of high complexity which should be monitored on a more frequent basis depending upon the nature of the test system.

"Most manufacturers have determined the stability of their test systems which fall into the test systems described in alternatives 2 and 3. Laboratories should follow the manufacturer’s recommendations on external QC testing once the laboratory proves the reproducibility and precision of the test system falls within the manufacturer’s claims. The frequency of external QC testing should be increased if the manufacturer’s claims are not met. Further consideration should be given to increasing QC frequency if the number of patient specimens in a routine run is high.

"Ultimately the biggest risk of EQC is a reduction in positive patient outcomes. By increasing the interval between quality control tests, any undetected errors will affect the patient results and potentially the diagnosis and/or treatment. Depending upon the number of patients tested during the interval between QC runs, the cost of recovering, recollecting and retesting patient specimens could be prohibitive.

"With EQC, “the director must consider the laboratory’s clinical and legal responsibility for providing accurate and reliable patient test results vs. the cost implications of reducing the quality-control frequency.” The liability for a testing error is that of the laboratory director’s, not CMS who wrote the rules, nor with the instrument manufacturer."
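The exposure argument running through the ASCLS position can be made concrete with a rough back-of-the-envelope sketch. Assuming, purely for illustration, that a failure can arise at any time, that only external QC will detect it, and a hypothetical workload of 50 patient results per day, the expected number of results reported before the failure is caught grows in direct proportion to the QC interval:

```python
# Rough illustration of the risk arithmetic in the ASCLS position:
# if only external QC detects a failure, the expected number of patient
# results reported before detection scales with the QC interval.
# All numbers here are hypothetical, chosen only to show the scaling.

def expected_results_at_risk(qc_interval_days: float,
                             results_per_day: float) -> float:
    """Expected patient results reported between an undetected failure
    and the next external QC event, assuming the failure occurs at a
    uniformly random time within the interval (so, on average, it
    happens halfway between QC events)."""
    return (qc_interval_days / 2.0) * results_per_day

# Hypothetical workload: 50 patient results per day on one device,
# comparing daily QC with the weekly and monthly EQC intervals.
for interval in (1, 7, 30):
    print(f"QC every {interval:2d} days -> "
          f"~{expected_results_at_risk(interval, 50):.0f} results at risk")
```

Under these assumptions, moving from daily to monthly QC multiplies the expected number of results that would have to be recovered, recollected, and retested after a failure by a factor of 30 – which is the cost-of-recovery concern the position paper raises.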

Stories and Interviews referenced in this essay:

  1. "Contention, Community Mark 'QC for the Future' Workshop," CLSI enews, April 11, 2005.
  2. Julie McDowell, "Revisiting Equivalent Quality Control," Clinical Lab News, June 2005.
  3. Ronald H. Laessig, PhD, and Sharon S. Ehrmeyer, PhD, "CLIA 2003's new concept: equivalent quality control," MLO, January 2005.
  4. Sue Parham, "Looking for an equivalent quality control winner," CAP Today, May 2005.
  5. Judy Yost, MA, MT, Director of CMS Division of Laboratory Services, "EQC's future sparks continued dialogue," MLO, May 2005.
  6. Email from Pam Nash, AACC, 2/3/05.
  7. Anne Paxton, "Checklist clarifications: To each its own—guidelines for point-of-care QC," CAP Today, September 2004.
  8. "College Participates in Equivalent Quality Control Conference," Statline, March 23, 2005, Volume 21, Number 6.
  9. Emails from Megan Sawchuk, Associate Director of Standards Interpretation, JCAHO, 2/2/05 and 2/3/05.
  10. Draft position paper provided by Elissa Passiment, Executive Vice President, ASCLS, 4/19/05.

All URLs last accessed 6/15/05.

