
Will Molecular Diagnostics repeat our QC mistakes?

Molecular diagnostics is a field exploding with growth. Next Generation Sequencing (NGS), RNA and DNA microarrays, and mass spectrometry are poised to enter the clinical laboratory. Special challenges arise when we try to apply "traditional" quality control and quality assurance models to these new testing technologies. Not only will we need to adapt current quality thinking to the practical realities of the new tests - we will also need to avoid repeating the mistakes we made in our traditional QC approaches.

Molecular Diagnostics - How to Assure Quality while Avoiding Mistakes?

Sten Westgard, MS
June 2012

Dr. Clark Rundell, a colleague and good friend of the website, brought to our attention a recent article on QA in the molecular diagnostics world:

Quality Assurance of RNA Expression Profiling in Clinical Laboratories, Weihua Tang, Zhiyuan Hu, Hind Muallem, and Margaret L Gulley, J Mol Diagn 2012, 14:1-11

In fact, Dr. Rundell, along with Joan Gordon, wrote a review of this article for MLO: Quality Control - more challenges for molecular diagnostics. In the article, they point out that the new emerging technologies such as microarrays are going to pose a serious challenge to our "traditional" QC and QA approach:

"These complex platforms test a variety of human specimen types for sometimes thousands of analytes - potentially the entire human coding genome of approximately 22,000 genes. In order to apply federal QC guidelines of CLIA'88 that state a positive and negative control must be run for every analyte detected, well over 1,000 controls could be required - an impractical approach. Labs don't have sufficient freezer space to archive patient samples for use as controls, software isn't available to track QC, and the cost is too high to run so many controls even on a rotating basis."

Impractical indeed. While control vendors may salivate in anticipation of labs having to purchase 1,000 controls per run, the rest of the laboratory community is worried about striking a proper balance - doing enough QC without breaking the bank.

In their review, Tang et al provide an overview of the regulatory and technical challenges facing RNA profiling. Despite the "novel" nature of this technology, the authors still believe that this type of testing is not exempt from quality control.

"Quality control is among the most important of quality assurance measures....Multiple types of controls are used in RNA profiling. A 'no template' control can evaluate background signal and contamination by stray nucleic acid. An exogenous control is run alongside patient specimens to evaluate assay performance in a general manner. A separate exogenous control, representing each of the main outcome groups, could be included for every X patients who are run, with X chosen based on the medical impact of an erroneous classification and the timeliness required to correct any error, recognizing that failure of a control will launch an investigation that questions the results for those patients tested since the last time that the control performed as expected."

This description of quality control sounds very familiar to those of us in the "traditional" areas of the laboratory. In fact, it may even be a more advanced approach than we currently employ. Throughout much of the world, laboratories have lapsed into a compliance mentality, performing QC at the regulatory minimum frequency. Here, Tang et al are suggesting that QC frequency should be determined not by regulatory minimums, but by a combination of factors: medical impact, run size, and the time required to catch an error. It's encouraging to see that thinking, and we hope it won't be snuffed out if and when a regulatory body issues a "once a day" type of ruling. QC frequency is still a hot topic in "traditional" testing, but the CLIA regulations effectively reduced most labs to the minimum legal frequency: labs base their QC frequency on what's required to stay out of legal trouble, not on medical impact. The emergence of EP23 and Risk Analysis has provided a new opportunity to reassess QC frequency, but there is as yet no proposed mechanism from CLIA or CLSI for reliably and reproducibly determining the appropriate QC approach.
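To make that concrete, here is a minimal sketch (in Python) of the frequency logic Tang et al describe: run an exogenous control every X patients, with X chosen from the medical impact of an erroneous result and how quickly an error must be caught. The function name, candidate intervals, and tolerances are our own illustrative assumptions, not values from the paper.

```python
# A minimal sketch of risk-based QC frequency, per the logic Tang et al
# describe. All names and numbers are illustrative assumptions.

def choose_control_interval(
    max_results_at_risk,      # medical impact: tolerable number of
                              # questionable reports after a control fails
    max_detection_delay_hrs,  # timeliness: how soon an error must surface
    specimens_per_hour,       # throughput of the assay
    candidates=(5, 10, 20, 50, 100),
):
    """Return the largest interval X (patients between controls) whose
    worst-case exposure and detection delay both stay within tolerance.
    A control failure puts at most X results in question, and an error
    can go undetected for at most X / throughput hours."""
    acceptable = [
        x for x in candidates
        if x <= max_results_at_risk
        and x / specimens_per_hour <= max_detection_delay_hrs
    ]
    return max(acceptable) if acceptable else min(candidates)

# Example: at most 20 questionable reports, errors caught within 8 hours,
# 4 specimens per hour -> run a control every 20 patients.
print(choose_control_interval(20, 8.0, 4.0))  # -> 20
```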

One area of the Tang review is a bit worrisome - the issue of control limits:

"Limits on acceptable performance of controls are empirically set by replicate analysis. Consider running a control many times (across different days, technologists, instruments, and lot numbers), and then calculate the mean +/- 2 SDs as the limit on its performance. When multiple controls are used, the expected failure rate of 5% for any one control implies that a combination of four controls will fail 18% of the time. This high failure rate emphasizes the benefits of a quality control strategy that includes multiple controls for the many critical aspects of the assay and synthesizes multiple data points to interpret overall success or failure of an assay."

While you won't find a big argument from a Westgard about the use of multiple rules or controls, the setting of the control limits raises a concern. As Tang et al rightly note, the use of 2 SD control limits will generate a high false rejection rate. This can lead to multiple bad practices, as we have seen throughout the "traditional" testing world (for example, the Repeated, Repeated, Got Lucky approach to QC). Frequent false rejections corrode the laboratory's trust in the QC procedure.
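The arithmetic is easy to verify, and it shows what wider limits buy. A quick sketch, assuming the controls fail independently:

```python
# Verifying the quoted figures: with a 5% per-control failure rate
# (2 SD limits), four independent controls falsely reject ~18% of runs.

def false_rejection(n, p_single):
    """Probability that at least one of n independent controls fails
    when the assay is actually in control."""
    return 1 - (1 - p_single) ** n

print(f"{false_rejection(4, 0.05):.1%}")   # 18.5% -- Tang et al's ~18%
# The same four controls judged against 3 SD limits (~0.3% each, Gaussian):
print(f"{false_rejection(4, 0.003):.1%}")  # ~1.2%
```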

Here's where Molecular Diagnostics can make a technological "leap" over the traditional laboratory testing world. QC for chemistry tests started more than half a century ago, when 2 SD control limits were considered acceptable; it wasn't until more recent times that it became clear a better approach was needed. QC practices evolved from single tight control limits to "Westgard Rules" and ultimately to customized QC and multistage QC. Here is a case where the field of Molecular Diagnostics doesn't need to go through the same prolonged evolution - it can leapfrog right to the most advanced techniques and avoid some of the mistakes of the past. If control limits are appropriately adjusted and appropriate multiple rules applied, statistical QC can deliver higher (true) error detection without the troublesome false rejections.
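As an illustration, here is a minimal sketch of a two-rule multirule check, using the 1_3s and 2_2s rules. A real implementation would include more rules (such as R_4s, 4_1s, and 10_x) and apply them across control materials; treat this as a sketch of the idea, not a complete procedure.

```python
# A minimal two-rule "Westgard Rules" sketch: reject on 1_3s or 2_2s.
# Illustrative only; a full multirule procedure covers more rules.

def multirule_check(z_scores):
    """Check control results expressed as z-scores, i.e.
    (value - mean) / SD, most recent last. Returns the name of the
    violated rule, or None if the run is accepted."""
    # 1_3s: any single control beyond 3 SD -> reject
    if abs(z_scores[-1]) > 3:
        return "1_3s"
    # 2_2s: two consecutive controls beyond 2 SD on the same side -> reject
    if len(z_scores) >= 2:
        a, b = z_scores[-2], z_scores[-1]
        if abs(a) > 2 and abs(b) > 2 and a * b > 0:
            return "2_2s"
    return None  # in control

print(multirule_check([0.4, 2.3]))   # None -- a lone 2 SD excursion is only a warning
print(multirule_check([2.2, 2.5]))   # 2_2s
print(multirule_check([0.1, -3.4]))  # 1_3s
```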

The challenge that all testing fields face is how to design those appropriate control limits and control rules, which is ultimately driven by the specification of goals for performance. Choosing the appropriate QC procedure depends on the ability to state a quality requirement. Even in "traditional" testing, determining quality requirements has been a long struggle. In Molecular Diagnostics, this may be an issue that hasn't been discussed much at all - yet it would be extremely useful information. If Molecular Diagnostic methods are running with high CVs, for example, that may be completely acceptable in cases where changes on the scale of orders of magnitude are required before results are considered significantly different. That, in turn, might indicate that control limits could be set more generously than the old tradition of 2 SD.
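In "traditional" QC, a stated quality requirement is translated into QC design through the sigma metric: Sigma = (TEa - bias) / CV, where TEa is the allowable total error. High-sigma methods can be controlled with wide limits and few controls; low-sigma methods need the full multirule treatment. The TEa, bias, and CV figures below are illustrative assumptions, not published specifications for any molecular assay.

```python
# Sigma metric: how a quality requirement drives QC design.
# All numbers below are illustrative assumptions.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |bias|) / CV, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# A high CV can still yield high sigma if the quality requirement is
# generous (e.g., only order-of-magnitude changes matter clinically)...
print(f"{sigma_metric(tea_pct=100.0, bias_pct=5.0, cv_pct=15.0):.1f}")  # 6.3

# ...while a tighter requirement with a similar CV leaves little margin
# and demands tighter control.
print(f"{sigma_metric(tea_pct=30.0, bias_pct=5.0, cv_pct=8.0):.1f}")    # 3.1
```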

One final point: Tang et al note that despite the best efforts of the manufacturer and the laboratory, systematic errors are still expected to occur, and the current state of diagnostic performance may mean that reliance on a single determination is not warranted:

"[C]onsider designing redundancy into the test system by targeting critical analytes numerous times (eg in different physical quadrants of the array or by probing both ends of a given transcript). Likewise, one could test multiple components of a critical biochemical pathway, multiple markers of a critical phenotype, or multiple conserved segments of an RNA viral genome. These scenarios capitalize on the array's strength in multiplexed testing."

Given the critical nature and high stakes of genetic testing, a strategy that combines multiple testing approaches, multiple statistical rules, and multiple exogenous controls will help provide a more reliable result.