Tools, Technologies and Training for Healthcare Laboratories

"But...is it really out?" Doing the Wrong QC Wrong

September 2007
with Sten Westgard, MS

We may want to do the Right QC Right, but often our adjustments of QC Theory to fit our 'real world' laboratory end up making us do the Wrong QC Wrong. This essay discusses why and where the theory of QC and the practice of QC diverge.


There is a classic scene from Monty Python’s movie, “Life of Brian”, where the eponymous main character, having been mistaken for the Messiah, tries to convince a mob of erstwhile worshippers that they, in fact, should not follow him.

Brian tells the crowd, “Look. You've got it all wrong. You don't need to follow me. You don't need to follow anybody! You've got to think for yourselves. You're ALL individuals!”

The crowd replies in complete unison, “Yes! We're all individuals!”

Brian persists, “You’re all different!”

Again the crowd replies in unison, “Yes, we ARE all different!”

Then a lone voice pipes up, “I'm not.”

The scene plays better than it reads, but I bring this up as an illustration of the difference between teaching and experience. Often, we are quite comfortable reading and absorbing the rules of a discipline, but when it comes to practicing that discipline, we find ourselves unable to put theory into practice.

QC in the Real World!

It’s not too difficult to teach control charts and “Westgard Rules.” They are, after all, dots on a graph that are either outside the lines or inside the lines. When certain dots fall outside particular lines, you know there’s a problem. When other dots fall within particular lines, you know you’re fine. So the theory goes…

Once you leave the pages of the textbook, once you walk out of the classroom and into a laboratory, the reality of QC is a bit harder. With the pressure of fast turn-around times, shift changes, new control lots, manufacturer-supplied ranges, and the never-ending influx of new and faster and more complicated instrumentation, it becomes very easy to think, “Well, in the classroom that may have been ‘out,’ but in this particular situation, I’m sure it’s still ‘in’ control.”

Well, we are all different and we are all individuals, but usually those special circumstances that we believe justify our decision to call our controls “in” are simply rationalizations. These “in-excuses,” as we have called them in previous essays, however, may only be the symptom of deeper problems. If the quality control system has been set up improperly, its behavior may have caused the operators to adjust their behavior as well. In a poorly designed and/or poorly implemented quality control system, the operators adapt as best they can. Often, this means adjusting means, widening ranges, repeating more controls, etc. When a system generates too many “false alarms,” the operators will compensate to reduce those alarms. That is, when the QC is wrong, the operators adjust by doing the wrong QC wrong.

The Wrong QC Wrong

We get frequent e-mails and phone calls from laboratories asking, “Is my control really out?” It’s almost impossible to give a good answer to this question. The actual literal question may be easy to answer in one sense (if the dot went outside those lines, it’s out, remember?), but in the context of that particular laboratory, it may be very difficult to know if it’s really out.

Not only is it difficult for us to answer the “…but is it really out?” question, it’s probably dangerous to do so. From a few words on a screen, or a few details spoken over the phone, it’s quite likely that we aren’t getting the whole picture from the laboratory. In fact, if we give an answer too quickly, we are probably committing what is known in the statistical world as an error of the third kind: the error committed by giving the right answer to the wrong problem [1].

From a statistical perspective, errors are often described as Type I (alpha) errors or Type II (beta) errors, which we prefer to describe in terms of the probability for false rejection (Pfr, related to Type I errors) and the probability for error detection (Ped, related to Type II errors).

  • Type I error refers to the case where the QC procedure indicates that an error has occurred, but in reality there is no problem (too high Pfr).
  • Type II error refers to the case where a real analytic error occurs, but the QC procedure is unable to detect it (too low Ped).
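
To make these two probabilities concrete, here is a minimal Python sketch (the distribution, rule, and numbers are illustrative assumptions, not part of any standard package): it simulates runs of two Gaussian controls judged by simple 2s limits, estimating Pfr on stable runs and Ped on runs with an induced 2 SD systematic shift.

```python
import numpy as np

rng = np.random.default_rng(42)

def fraction_rejected(shift_in_sd, limit=2.0, n_controls=2, n_runs=100_000):
    """Simulate many runs and return the fraction flagged out of control.

    Control results are drawn in SD units around the target mean; shift_in_sd
    adds a systematic error to every control in the run. A run is rejected if
    any control falls outside +/- limit SD (a simple 2s-style rule).
    """
    values = rng.normal(loc=shift_in_sd, scale=1.0, size=(n_runs, n_controls))
    return (np.abs(values) > limit).any(axis=1).mean()

# Stable method: every rejection is a false rejection (the Type I / Pfr side).
pfr = fraction_rejected(shift_in_sd=0.0)

# Method with a real 2 SD systematic shift: rejections are true detections,
# and the runs that slip through undetected are Type II errors (the Ped side).
ped = fraction_rejected(shift_in_sd=2.0)

print(f"Estimated Pfr with 2s limits, N=2: {pfr:.2f}")  # analytically about 0.09
print(f"Estimated Ped for a 2 SD shift:   {ped:.2f}")
```

For reference, the analytic false-rejection probability with two controls and 2s limits is 1 − 0.9545² ≈ 0.09, which is the roughly 10% figure quoted later in this essay.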

Type I and II errors should be dealt with by proper selection or design of the QC procedures that are implemented in the laboratory. There are many materials on this website that address the problem of selecting the right QC.

But in the real world, it is even more complicated. There can be many problems with the implementation of QC procedures in a laboratory. And these problems can lead to “errors of the third kind,” where the apparently right answer is actually wrong.

All these problems can be understood in the context of “wrong QC wrong,” where the first wrong must be addressed by selecting the right QC and the second wrong must be addressed by implementing QC right. Here are some common “wrongs” that must be corrected if QC is to function properly in your laboratory.

1. Wrong interpretation of control rule(s)

Sometimes you actually get the interpretation of a control rule wrong. Without a doubt, the multirules are not trivial – they require a sophisticated understanding of the context and applicability of each rule. For instance, how many controls are within a single “run” becomes very important when implementing and interpreting control rules that require multiple measurements.

Do you understand the meaning of within-run, across-run, within-material, and across-material? If you don’t, you might not be applying and interpreting control rules correctly. For example, did you know that the R4s control rule is intended to be applied within-run, not across-run? Or, another example, when you have a 10x rule, does that mean looking back 10 runs for one control level, or 5 runs back for 2 control levels, or both?
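
To make the within-run versus across-run distinction concrete, here is a minimal Python sketch (not a full multirule implementation; the z-score inputs and function names are purely illustrative). R4s looks only at the controls within one run, while 10x looks back across consecutive observations, which may span both materials and several runs.

```python
def violates_r4s(run_z_scores):
    """R4s, applied WITHIN a single run: flag when one control in the run is
    at or above +2 SD and another is at or below -2 SD (a 4 SD range)."""
    return max(run_z_scores) >= 2.0 and min(run_z_scores) <= -2.0

def violates_10x(z_history):
    """10x, applied ACROSS runs: flag when the last 10 consecutive control
    observations fall on the same side of the mean. With two levels per run,
    the history can combine both materials, so 5 runs may be enough."""
    last10 = z_history[-10:]
    if len(last10) < 10:
        return False
    return all(z > 0 for z in last10) or all(z < 0 for z in last10)

# Hypothetical z-scores (observed value minus mean, divided by SD):
todays_run = [2.3, -2.1]          # level 1 and level 2 in the same run
print(violates_r4s(todays_run))   # True: opposite 2 SD excursions in one run

history = [0.4, 0.7, 1.1, 0.2, 0.9, 0.6, 1.3, 0.5, 0.8, 0.3]
print(violates_10x(history))      # True: ten consecutive results above the mean
```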

We have covered the nuances of the multirule interpretation, including a Top Ten of Don’ts, and a list of Best Practices. Since those earlier discussions are available through the links and through the Basic QC Practices manual, we won't duplicate that effort here.

Let’s go beyond the simple mistakes now. Before we worry about how to implement the rules correctly, we should make sure we’re using the right rules in the first place.

2. Wrong rule(s) implemented

This is perhaps the most frequently identified problem with QC in the US. If you use the wrong rules, you may be getting the wrong signals. For instance, if you’re using 2s control limits on all your tests (and, honestly, we know this is common practice in many laboratories), you will get a significant number of false rejection signals.

The right solution to this problem is a long-term change. Instead of arbitrarily applying rules to a test, the laboratory needs to design or select its QC procedures on the basis of the quality that is required for the intended use of the test, the precision and accuracy observed for the method, and the known rejection characteristics of the control rules and numbers of control measurements. Regular visitors to the website know we say this a lot.

The question of rules is fundamentally related to the question of intended use or required quality of the test. What quality is required for a test performed in your laboratory? For many laboratories, this question has not been answered. For still more laboratories, this question has never been asked. When laboratories don’t ask or don’t answer that question, they often apply a blanket policy for quality control to all tests. That is, they’ll use “Westgard Rules” on all tests, whether they need them or not, to the consternation of the techs. Or perhaps they’ll use the 2s rules on all tests, which causes a lot of false rejects, a lot of repeats, and usually a fudging of the rules and limits to reduce those false rejects.
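
One common way to combine the quality requirement with the observed precision and accuracy of a method is the sigma-metric, which expresses how many method SDs fit inside the allowable total error once bias is accounted for. The sketch below is a simplified, hypothetical illustration (made-up numbers, and the rule suggestions are only a coarse rule of thumb); actual selection should use the power curves and QC design tools described elsewhere on this site.

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric: (allowable total error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: allowable total error 10%, observed bias 1.5%, CV 2.0%
sigma = sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=2.0)
print(f"Sigma-metric: {sigma:.2f}")   # 4.25 for these made-up numbers

# A coarse rule of thumb only; the actual choice should be checked against
# the rejection characteristics (power curves) of the candidate rules.
if sigma >= 6:
    print("Simple, wide-limit QC (single rule, low N) is likely adequate.")
elif sigma >= 4:
    print("Tighter limits or a multirule with moderate N is likely needed.")
else:
    print("Full multirule with higher N, plus method improvement, should be considered.")
```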

Even the problems at this level can mask deeper problems. Choosing the right rules assumes that you’ve characterized the precision and accuracy of your methods properly, which further assumes that the right data has been collected and the right calculations have been made. But if you use the wrong data or wrong calculations, you’re going to be choosing and implementing the wrong rules.

3. Wrong calculation of control limits

If you’re using the wrong mean or the wrong standard deviation, the control limits you draw on the chart (or the ones drawn for you by some computer program) will not accurately reflect the performance of your method. If you perform QC Design based on the wrong numbers, you’ve built a castle on a foundation of sand.

The best source of data about your laboratory’s performance is your laboratory itself. Means, standard deviations, ranges, and other data from outside your laboratory do not reflect the individual, particular conditions of your lab. It’s like the fine print in the advertising: “Your results may vary.” They definitely will. So you had better take your own variation into account.

The use of data supplied from outside the laboratory to provide means, standard deviations, and control ranges is meant to be a temporary workaround. That is, while you accumulate data on your real performance, the manufacturer gives you a wider range to work with.

If you use manufacturer-supplied means, SDs, or ranges, those ranges are often too wide because they usually encompass the variation observed in many labs. You may think you’re using 2s limits but they may in effect be 2.5s, or 3s, or even wider.

Concerning peer group statistics, your best use of the peer group mean is to determine bias, which may lead you to recalibrate the instrument, but that peer mean is not the mean of your instrument. And those peer SDs suffer from the same problem as manufacturer ranges: they’re too wide for your single instrument or method.
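
As a small illustration of the difference, the sketch below (hypothetical numbers throughout) computes a mean and SD from a lab's own control data, shows how wide a manufacturer-supplied range is when expressed in multiples of that SD, and uses a peer-group mean only to estimate bias.

```python
import statistics

# Hypothetical replicate results for one control level on one instrument
own_results = [4.02, 3.98, 4.05, 3.95, 4.01, 4.00, 3.97, 4.04, 3.99, 4.03,
               4.06, 3.96, 4.02, 4.00, 3.98, 4.01, 4.04, 3.97, 4.03, 3.99]

own_mean = statistics.mean(own_results)
own_sd = statistics.stdev(own_results)
print(f"Own mean = {own_mean:.3f}, own SD = {own_sd:.3f}")
print(f"Own 2s limits: {own_mean - 2 * own_sd:.3f} to {own_mean + 2 * own_sd:.3f}")

# Hypothetical manufacturer-supplied range for the same control material
mfr_low, mfr_high = 3.90, 4.10
half_width_in_own_sd = (mfr_high - mfr_low) / 2 / own_sd
print(f"Manufacturer range spans +/- {half_width_in_own_sd:.1f} of YOUR SDs")
# If this prints 2.5, 3, or more, the 'range' is far looser than true 2s limits.

# Peer-group mean: useful for estimating bias, not for replacing your own mean
peer_mean = 4.02   # hypothetical peer-group value
bias = own_mean - peer_mean
print(f"Estimated bias versus peer group: {bias:+.3f}")
```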

Bonus Wrong: Wrong rule meets the wrong action

We’ve said this again and again and again. Don’t use the 2s rule. Don’t use 2s control limits. This QC practice dates back half a century to the introduction of statistical QC in laboratories where all the methods were performed manually! It was appropriate then because only 1 control was analyzed daily, but it is improper today when at least two levels of controls are required. This practice is wasteful because it has a high level of false rejections (about 10% with two levels of controls), which causes technologists to spend valuable time and resources on “repeating” the control to get it “in.” It is also very dangerous because it corrupts the QC process by making the repeat analysis of controls a standard practice, rather than searching for the cause of out-of-control problems and then fixing the problem. This practice makes it easy for people to ignore out-of-control signals because they believe they are all “false rejects.”

And for those laboratories who believe they’re using 2s rules without experiencing chronic false rejection problems, there is worse news. If your 2s rules aren’t generating constant out-of-control flags, that means you’ve somehow set your limits far wider than 2s, which means you don’t know which errors you are detecting and which errors you are simply letting out the door.

Conclusion: Errors, errors, and more errors

When it comes to QC, the laboratory is awash in errors. From using the wrong numbers as the foundation for QC, to choosing the wrong rules, to implementing and interpreting those rules incorrectly, there are errors in all possible dimensions. While surveys of QC practices are rare, it wouldn’t be too much of a speculation to conclude that most laboratories are doing the wrong QC wrong.

Is this the laboratory’s fault? Not always. In many cases, the laboratory professionals are just doing what they have been told to do by administrators, managers, even the technical services reps who provide them support. Certainly, most of the personnel in the lab haven’t been taught proper QC.

The bottom line: we have to improve this situation. We need better manufacturers, better training, and better management. But most importantly, we need you. You, who took the time and effort to read this essay, are probably the quality expert in your organization. Unfortunately, this means the burden is on you. We can’t move forward until you tell your colleagues, who tell your managers and your tech services reps, who finally communicate to upper management that quality is important and needs to be addressed.

But it can be done. You can do it. Always look on the bright side...

If you want to see examples of real-world laboratories and their QC problems, follow the link to a version of this essay that includes a number of user-submitted QC problems.

References

  1. Kimball AW, “Errors of the third kind in statistical consulting,” Journal of the American Statistical Association, Volume 52, Number 278, June 1957.

James O. Westgard, PhD, is a professor emeritus of pathology and laboratory medicine at the University of Wisconsin Medical School, Madison. He is also president of Westgard QC, Inc. (Madison, Wis.), which provides tools, technology, and training for laboratory quality management.