Analysis of the Rationale for "Equivalent" QC
An analysis of the scientific rationale provided by CMS to justify "Equivalent" QC procedures in the CLIA Final Rule.
CLIA Final Rule: An Examination of the Scientific Rationale for Equivalent QC
I applaud CMS for responding to Patricia White’s request (1) for an explanation of the Equivalent QC (EQC) recommendations (2).
There clearly is a need for better QC methodology and technology, especially in Point-of-Care applications, which seem to be the focus of the EQC recommendations. My concern is whether there is a scientific basis for EQC, including a scientific basis for the evaluation process recommended by CMS.
CMS recognizes the need for laboratories to evaluate the suitability of applying the EQC option to their tests and methods (2):
“Prior to making any decisions about decreasing the frequency of external control testing, a critical factor repeatedly mentioned was the need to conduct an evaluation to verify the test system’s stability and the ability of internal controls/monitors to reliably detect errors and alert the operator to test system problems.”
CMS justifies its recommendations for the evaluation process by reference to CLSI standards (2), particularly M2-A, “Performance Standards for Antimicrobial Disk Susceptibility Tests.” According to the CMS letter, “performing and documenting these types of evaluations appears to be standard practice in many accredited laboratories.” Maybe this practice exists in microbiology, but it hasn’t – to my knowledge – become a standard practice in chemistry and other areas. Given that most POC testing is in the chemistry area, I’m also a little concerned with the extrapolation from microbiology to other areas where measurements are both more complicated and more quantitative.
Nonetheless, having to admit that I don’t know much about QC in microbiology, I have begun to study the M2-A document in the hopes of gaining a better understanding of the EQC guidelines. Unfortunately, this has raised even more questions!
M2-A began as a proposed standard in 1975 and was first approved in 1984. Since then, it has been revised in 1988, 1990, 1993, 1997, 2000, and 2003. The latest document includes the notation “M2-A8” to identify the eighth revision.
Here’s what M2-A8 (3) says about the frequency of quality control testing:
10.5.1 Daily Testing
When testing is performed daily, for each antimicrobial agent/organism combination, 1 out of every 20 consecutive results may be out of the acceptable range (based on 95% confidence limits, 1 out of 20 random results can be out of control). Any more than 1 out-of-control result in 20 consecutive tests requires corrective action (see section 10.6).
10.5.2 Weekly Testing
10.5.2.1 Demonstrating Satisfactory Performance for Conversion from Daily to Weekly Quality Control Testing
Test all applicable control strains for 20 or 30 consecutive test days and document results.
To convert from daily to weekly quality control testing, no more than 1 out of 20 or 3 out of 30 zone diameters for each antimicrobial agent/organism combination may be outside the acceptable zone diameter limits stated in Tables 3 and 3A.
10.5.2.2 Implementing Weekly Quality Control Testing
Weekly quality control testing may be performed once satisfactory performance has been documented (see Section 10.5.2.1).
Perform quality control testing once per week and whenever any reagent component of the test (e.g., a new lot of agar or a new lot of disks from the same or a different manufacturer) is changed.
If any of the weekly quality control is out of acceptable range, corrective action is required (see Section 10.6).
If a new antimicrobial agent is added, it must be tested for 20 or 30 consecutive days and satisfactory performance documented before it can be tested on a weekly schedule. In addition, 20 or 30 days of testing is also required if there is a major change in the method of reading test results, such as conversion from manual zone measurements to an automated zone reader.
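The “1 out of 20” and “3 out of 30” allowances quoted above follow directly from binomial statistics: if each QC result independently has a 5% chance of falling outside its 95% acceptance range even when the test system is in control, a few out-of-range results are expected by chance. A minimal sketch of that reasoning (the independence and exact 5% false-rejection rate are simplifying assumptions):

```python
from math import comb

def binom_cdf(k_max: int, n: int, p: float) -> float:
    """P(X <= k_max) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_max + 1))

# Each in-control result has a 5% chance of landing outside 95% limits.
P_FALSE_REJECT = 0.05

# Probability a STABLE test system satisfies each M2-A8 acceptance criterion:
p_20 = binom_cdf(1, 20, P_FALSE_REJECT)   # no more than 1 of 20 out of range
p_30 = binom_cdf(3, 30, P_FALSE_REJECT)   # no more than 3 of 30 out of range

print(f"P(<=1 of 20 out of range) = {p_20:.3f}")   # ~0.74
print(f"P(<=3 of 30 out of range) = {p_30:.3f}")   # ~0.94
```

In other words, even M2-A8’s own allowances would reject a perfectly stable system a nontrivial fraction of the time, which is why the standard explicitly tolerates a small number of out-of-range results rather than demanding a perfect record.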
If the CLSI M2-A8 document is the basis for the evaluation processes recommended for Equivalent QC, then CMS needs to answer the following questions to justify its recommendations:
What is the source of the 10 day evaluation protocol in EQC option 1? M2-A8 doesn’t say anything about testing for 10 days! It recommends 20 or 30 days.
What is the source of the recommendation for reduction of frequency to monthly QC? M2-A8 only allows reduction from daily to weekly QC, not to monthly QC!
What control limits are supposed to be used in the evaluation process? Are 2SD statistical control limits supposed to be used, or are fixed control limits supposed to be used? M2-A8 mentions statistical limits, but then refers the user to a table of fixed limits.
Should a laboratory define statistical control limits based on its own data, or will the manufacturer or CMS define fixed control limits? M2-A8 provides a table of control limits, but there’s no other NCCLS document that provides similar information for other tests!
Where does the criterion for NO out-of-control runs come from in EQC options 1 through 3? M2-A8 allows 1 out of 20 runs to be out-of-control or up to 3 out of 30 runs to be out-of-control!
Are there any papers in the scientific literature that support these recommendations for evaluating the appropriateness of reduced QC? It’s not clear whether any of the references in M2-A8 directly relate to these recommendations for reducing QC from daily to weekly to monthly. Version 8 introduces the 20 or 30 day period, whereas version 7 specified only a 30 day evaluation period, but the only new references are two CDC reports dealing with vancomycin-resistant Staphylococcus aureus.
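The question about the zero-out-of-control criterion can be made concrete. Under the same simplifying assumptions as before (one QC result per day, independent results, a 5% chance per result of exceeding 2SD/95% limits even when the method is stable), demanding zero out-of-control results is a markedly stricter criterion than either M2-A8 allowance. The 10-, 20-, and 30-day protocols below are taken from the EQC options and M2-A8; the per-day result count is an assumption for illustration:

```python
from math import comb

# Chance a single in-control result falls outside 95% (2 SD) limits by chance.
p = 0.05

def p_pass(n_days: int, allowed: int) -> float:
    """Probability a stable method shows <= `allowed` out-of-range results in n_days."""
    return sum(comb(n_days, k) * p**k * (1 - p)**(n_days - k)
               for k in range(allowed + 1))

print(f"EQC, 0 allowed in 10 days:   {p_pass(10, 0):.3f}")   # ~0.60
print(f"M2-A8, 1 allowed in 20 days: {p_pass(20, 1):.3f}")   # ~0.74
print(f"M2-A8, 3 allowed in 30 days: {p_pass(30, 3):.3f}")   # ~0.94
```

On these assumptions, roughly 40% of perfectly stable methods would fail the 10-day zero-tolerance evaluation by chance alone, which is exactly why the choice of control limits and acceptance criteria needs to be specified and justified.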
In this era of evidence-based medicine, quality and quality control also need to be evidence-based. If laboratory tests are important for evidence-based medical decision making, then the quality of laboratory testing will also need to be based on the best scientific evidence. The need for a scientific assessment is particularly important when making changes in our standard procedures and practices. Such changes are supposed to make things better, not worse! How will we know if we don’t properly evaluate the effects of these changes?
In the scale used for evaluating the “level of evidence,” consensus groups are the lowest form of evidence; all forms of scientific study rank higher. Therefore, reference to the M2-A guidelines is the weakest form of evidence. We have previously recommended that CMS make use of the 30 day evaluation study to characterize the Sigma performance of the test methods and then relate the QC recommendations to the observed quality of the methods. There is a scientific way to do this; it can utilize the same data from the 30 day evaluation period, and it provides better evidence of the amount of QC that is needed to assure the quality of each test-method combination (4).
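The Sigma-metric assessment suggested here (4) can be sketched from the same evaluation data: estimate the method’s bias and CV from the study results, then compare them against the allowable total error (TEa) for the test. The formula is Sigma = (TEa% − |bias%|) / CV%. The numbers below (a control target of 100, a TEa of 10%, and ten hypothetical QC results) are illustrative assumptions, not values from the cited book:

```python
from statistics import mean, stdev

def sigma_metric(results: list[float], target: float, tea_pct: float) -> float:
    """Sigma = (TEa% - |bias%|) / CV%, estimated from evaluation-period QC results."""
    m = mean(results)
    cv_pct = 100 * stdev(results) / m          # imprecision as a percent
    bias_pct = 100 * abs(m - target) / target  # inaccuracy as a percent
    return (tea_pct - bias_pct) / cv_pct

# Hypothetical QC results for a control with target value 100,
# and an assumed allowable total error (TEa) of 10%.
results = [100.5, 99.2, 101.1, 98.7, 100.9, 99.5, 101.8, 100.2, 99.0, 100.6]
sigma = sigma_metric(results, target=100.0, tea_pct=10.0)
print(f"Sigma = {sigma:.1f}")
```

A method performing at 6 Sigma or better arguably needs much less QC than a 3-Sigma method; tying QC frequency to this kind of observed performance is the evidence-based linkage that the EQC options currently lack.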
CMS has yet to demonstrate that the recommendations for evaluation of Equivalent QC procedures are scientifically sound! But we’ll give them another chance and look for a response to these questions about their proposed evaluation methodology, as well as the issue of utilizing Six Sigma methodology to provide a more scientific assessment of method performance and the related appropriateness of QC procedures.
1. White PN. An open letter regarding equivalent QC. www.westgard.com/cliaeqc.html
2. CMS. Response to Patricia M. White’s correspondence regarding the rationale behind the Equivalent QC recommendations. www.westgard.com/cliaeqc.html
3. CLSI M2-A8. Performance Standards for Antimicrobial Disk Susceptibility Tests; Approved Standard – Eighth Edition. NCCLS, 940 West Valley Road, Suite 1400, Wayne, PA 19087-1898, 2003.
4. Westgard JO, Ehrmeyer SS, Darcy TP. CLIA Final Rules for Quality Systems: Quality assessment issues and answers. Madison WI: Westgard QC, Inc., 2004. See Chapter 13, “Doing the Right QC.”