
Questions from "Chasing the Mean" Zoom

More than 2,000 laboratory professionals registered for the Westgard Zoom on Chasing the Mean, a frank discussion of PT, EQA, and peer groups. So many questions were raised that couldn't be answered during the call that we saved them and answer them here.

Questions and Answers from the "Chasing the Mean" Westgard Zoom

July 2022
Sten Westgard, MS

"Chasing the Mean", the Westgard Zoom on the value of PT, EQA, and peer groups, had a fantastic attendance and spurred many questions. We weren't able to answer them all in the time alotted, but we kept track, and we've got some answers now.

Q: What should we do if the SDI is high or low? What about the CVR/CVI?

A: The broad indicators of SDI and CVR/CVI are best used as negative indicators: when they are too high or too low, something is genuinely wrong. If the lab's individual CV is larger than the group's CV, that's a major sign of excessive imprecision in the laboratory. Something is wrong in the laboratory itself, probably not in the instrument, but in the practices of the staff. If the lab's individual mean is several group SDs higher or lower than the peer group mean, that's a sign of excessive difference, again more likely the fault of the individual laboratory than of the entire rest of the group. With a high or low SDI, a recalibration may bring the laboratory back into the fold of the peer group.
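
As a rough illustration, here is a minimal Python sketch of how these two indicators are calculated; the example values are hypothetical, and the investigation thresholds in the comments are common rules of thumb rather than fixed standards.

```python
def sdi(lab_mean, peer_mean, peer_sd):
    """Standard Deviation Index: how many peer-group SDs
    the lab's mean sits from the peer-group mean."""
    return (lab_mean - peer_mean) / peer_sd

def cvr(lab_cv, peer_cv):
    """Coefficient of Variation Ratio: the lab's imprecision
    relative to the peer group's imprecision."""
    return lab_cv / peer_cv

# Hypothetical glucose control data
lab_mean, lab_cv = 104.0, 3.2           # lab's observed mean (mg/dL) and CV (%)
peer_mean, peer_sd, peer_cv = 100.0, 2.0, 2.0

print(f"SDI = {sdi(lab_mean, peer_mean, peer_sd):.1f}")  # 2.0
print(f"CVR = {cvr(lab_cv, peer_cv):.1f}")               # 1.6
# Rules of thumb: investigate when |SDI| approaches 2.0 or CVR exceeds ~1.5;
# treat these as screening flags, not verdicts.
```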

Q: Is the 10:x rule still relevant?

A: This is a very pertinent question, as laboratories seem to trip over these failures most often. The 10:x rule is most valuable - indeed mandatory - for methods that are performing at 3 Sigma or lower. But for 4, 5, and 6 Sigma methods, the 10:x rule is not necessary; it will flag errors that may be statistically significant but are unlikely to be clinically significant.

Another reason the 10:x rule feels more troublesome to laboratories is the increasing practice of assigning or "standardizing" the means of multiple instruments to something that does not reflect the true mean of the instrument. When you use a manufacturer mean, or a peer group mean, or you establish some sort of target mean arbitrarily, that shift is what makes it far more likely that you will see 10 consecutive values on one side of that mean.

There are those who advise never using the 10:x rule, but the truth is that it's useful in some circumstances and not in others.
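
To see why an assigned target inflates 10:x flags, consider this illustrative simulation (all numbers hypothetical): when the assigned mean sits half an SD away from the instrument's true mean, runs of 10 consecutive values on one side of the target become far more common even though nothing is actually wrong.

```python
import random

def prob_10x_flag(true_mean, assigned_mean, sd, n_points=100, trials=5000):
    """Estimate how often 10 consecutive in-control results fall on one
    side of the *assigned* mean, purely by chance."""
    flagged_trials = 0
    for _ in range(trials):
        run_high = run_low = 0
        for _ in range(n_points):
            x = random.gauss(true_mean, sd)  # process is truly in control
            if x > assigned_mean:
                run_high += 1; run_low = 0
            else:
                run_low += 1; run_high = 0
            if run_high >= 10 or run_low >= 10:
                flagged_trials += 1
                break
    return flagged_trials / trials

sd = 2.0
print("Target = true mean:    ", prob_10x_flag(100.0, 100.0, sd))
print("Target shifted 0.5 SD: ", prob_10x_flag(100.0, 101.0, sd))
```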

Q: If, after major maintenance, I run QC 5 times and nothing in the performance has changed, should I still do a reportable range verification?

A: This question tackles one of the regulatory provisions that requires laboratories to re-verify the reportable range after a complete change of reagents - but there is an ambiguous clause saying that if you can prove patient results weren't affected, you don't have to do it after all. So how does one prove there was no change to patients? Running QC a few times can't really tell you that. You need to do some sort of patient testing: run samples on the previous lot, run them again on the new lot, and compare the differences (of course, you need to set some sort of allowable difference, which the regulations leave completely unaddressed). In this specific case, running QC 5 times does not fulfill the requirements of a reportable range verification; you should do something more. Either run patient samples and "prove" there is no difference, or do a reportable range verification study.
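
Here is a minimal sketch of that patient-comparison approach; the allowable difference below is a lab-defined choice (the regulations do not specify one), and all values are hypothetical.

```python
# Paired patient results: the same samples run on the old and new reagent lots
old_lot = [4.1, 5.6, 7.3, 9.8, 12.4]   # hypothetical results
new_lot = [4.2, 5.5, 7.6, 9.9, 12.1]

allowable_diff_pct = 5.0   # lab-defined limit; not specified by the regulations

for old, new in zip(old_lot, new_lot):
    diff_pct = 100.0 * (new - old) / old
    status = "OK" if abs(diff_pct) <= allowable_diff_pct else "EXCEEDS LIMIT"
    print(f"old={old:5.1f}  new={new:5.1f}  diff={diff_pct:+5.1f}%  {status}")
```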

Q: If my historic CV is greater than the peer CV, how should I approach setting an appropriate CV for a new QC lot that aligns with my instrument's performance?

A: CV should not be "set" or "fixed" - it should be calculated. Setting a CV or SD is only acceptable when you have no information available to you; then you might take the SD from the package insert. And only for a short period of time: as soon as it is possible to establish a mean and SD, usually with a minimum of 20 values, you should do so.

Worries about having SDs that are "too tight" usually spring from the practice of setting 2 SD control limits, which are known to be tight and to generate high false rejection rates.
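
For instance, here is a minimal sketch of calculating, rather than setting, the mean and SD once 20 or more values are in hand (the QC results are hypothetical):

```python
import statistics

# At least 20 QC results accumulated on the new lot (hypothetical values)
qc = [98.6, 101.2, 99.4, 100.8, 97.9, 102.1, 99.7, 100.3, 98.2, 101.5,
      100.1, 99.0, 101.8, 98.8, 100.6, 99.3, 100.9, 98.5, 101.1, 99.8]

mean = statistics.mean(qc)
sd = statistics.stdev(qc)          # sample SD (n-1 denominator)
cv = 100.0 * sd / mean

print(f"n={len(qc)}  mean={mean:.1f}  SD={sd:.2f}  CV={cv:.1f}%")
print(f"2 SD limits: {mean - 2*sd:.1f} to {mean + 2*sd:.1f}")
print(f"3 SD limits: {mean - 3*sd:.1f} to {mean + 3*sd:.1f}")
# With 2 SD limits, roughly 5% of in-control results on a single control
# will be falsely rejected; wider limits or multirules reduce that rate.
```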

Q: Is it common to find deviations from the mean when changing batches of reagents? What parameters should be considered in determining which deviations are acceptable?

A: Differences between reagent lots are inevitable; it's not possible to produce identical lots consistently. But determining the acceptable difference is critical. Typically, 1/3 of the allowable total error (TEa) is used as a benchmark for acceptability. The CLSI guideline on this matter recommends real patient samples as the best material for determining these differences. We provide some further guidelines for acceptability in the Advanced QC Practices textbook.
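
As an illustration of that 1/3 TEa benchmark, here is a minimal sketch; the TEa and lot means below are hypothetical.

```python
# Lot-to-lot comparison against one third of the allowable total error (TEa)
tea_pct = 10.0              # hypothetical TEa for the analyte, in percent
limit_pct = tea_pct / 3.0   # common benchmark for an acceptable lot difference

old_lot_mean = 5.30         # mean of patient samples on the old lot
new_lot_mean = 5.45         # same samples on the new lot

shift_pct = 100.0 * (new_lot_mean - old_lot_mean) / old_lot_mean
verdict = "acceptable" if abs(shift_pct) <= limit_pct else "investigate"
print(f"shift = {shift_pct:+.1f}% vs limit of ±{limit_pct:.1f}% -> {verdict}")
```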

Q: Do you agree that using moving averages (MA) in CBC is particularly safe and reliable? Are there certain analytes that are "guaranteed" to work with moving averages?

A: Any analyte where the s:pop/s:meas ratio is favorable will generate a useful moving average. You will almost always see PBRTQC papers use sodium as an example, because its biological variation is so small. For analytes where the biological variation is quite small relative to the analytical variation of the method, only a minimal number of patient samples is needed to generate a reliable moving average.

Remember, however, that the s:pop/s:meas ratio will be different for each individual laboratory, so it's hard to make a broad recommendation that will cover all labs. But our Six Sigma reference manual has a chapter discussing some of the analytes that are good candidates.
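
As a rough sketch of how a laboratory might estimate that ratio from its own data and run a simple moving average, consider the following; the truncation limits, block size, and sodium values are illustrative choices, not universal settings.

```python
import statistics
from collections import deque

def spop_to_smeas(patient_results, analytical_sd):
    """Ratio of population SD to analytical SD: the lower the ratio,
    the fewer patient samples a moving average needs to detect a shift."""
    return statistics.stdev(patient_results) / analytical_sd

def moving_average(results, block_size=20, low=130.0, high=150.0):
    """Simple truncated moving average (one illustrative PBRTQC variant):
    average the last block_size results, excluding extreme values."""
    window = deque(maxlen=block_size)
    averages = []
    for x in results:
        if low <= x <= high:           # truncate gross outliers
            window.append(x)
        if len(window) == block_size:
            averages.append(statistics.mean(window))
    return averages

# Hypothetical sodium results: tight population spread, modest analytical SD
sodium = [139.0, 140.5, 138.2, 141.1, 139.8, 140.2, 138.9, 139.5] * 5
print(f"s:pop/s:meas = {spop_to_smeas(sodium, analytical_sd=1.0):.1f}")
print(moving_average(sodium)[:3])
```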

Q: Manufacturer mean or lab mean? What about the case of a chain of labs working together in different areas but belonging to the same entity?

A: The best practice is to have the mean and SD reflect the actual performance of the instrument. There is nothing about having a chain of labs that exempts you from establishing your own mean and SD. While it is tempting to standardize everything so that all the instruments and results are the "same", that can obscure the true performance of those instruments and allow errors to occur and impact patients. We discuss this issue in more depth in our textbook, Advanced QC Strategies.

The short answer: you can't do it for all methods and instruments; you have to see how good the methods and instruments are (think Sigma). And in order to check whether the instruments are remaining close to the "standard" mean or SD, you will need to constantly run real patient samples. It may sound like a simple way to get one answer, but the effort required to support it will be anything but simple.
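
"Think Sigma" here means calculating the Sigma metric for each method. A minimal sketch with hypothetical inputs:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |bias|) / CV, all in percent at the decision level."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical glucose methods across a chain of labs
for name, tea, bias, cv in [("Lab A", 10.0, 1.0, 1.5),
                            ("Lab B", 10.0, 2.5, 3.0)]:
    print(f"{name}: Sigma = {sigma_metric(tea, bias, cv):.1f}")
# Lab A: (10 - 1)/1.5 = 6.0 Sigma; Lab B: (10 - 2.5)/3.0 = 2.5 Sigma.
# Only the high-Sigma method can tolerate an assigned "standard" mean;
# the low-Sigma method needs its own mean, SD, and tighter QC.
```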

Thanks again to all who attended the Westgard Zoom on Chasing the Mean and all those who posed questions.

Stay tuned for more Zooms in the future.