
Abuses, Misuses, and In-excuses

WARNING! You may not want to read this article. It's a sobering list of all the common mistakes made by manufacturers and laboratories when they design, implement and interpret the "Westgard Rules." As it turns out, when your software or instrument or LIS claims to have "Westgard Rules," it might not be true or even useful. And if you see a claim that they've "modified" the rules to make them better, be afraid.

A Top 10 list of problems with QC and the "Westgard Rules"

The "Westgard Rules" are now over 20 years old. Over the course of more than two decades, as you might imagine, we have gotten a lot of questions (and complaints and sometimes even curses) about the use and interpretation of "Westgard Rules." These questions come not only from those med techs working at the bench level, but from the manufacturers who want to include the "Westgard Rules" as an extra feature in their products. Many manufacturers claim to have implemented "Westgard Rules" in their instrument software, QC data workstations, and laboratory information systems. Unfortunately, many of those implementations just don't do it right. The result of poor implementation of the rules is often frustration, and the customers often blame those #$%&! "Westgard Rules" as the source of their troubles and problems.

To address the common questions and complaints, we've compiled a "Top 10" list of abuses, misuses, "in-excuses," and other bad practices in the implementation of "Westgard Rules." I warn readers that many of these points may hit close to home - in your own laboratories or your own instrument systems.

10. Abuse of the term "Westgard Rules."

If you read the original paper in the journal Clinical Chemistry [CLIN CHEM 27/3, 493-501 (1981)], you'll find absolutely no use of the term "Westgard Rules." That term emerged from common usage of multirule QC procedures, probably as a shorthand way to identify the reference paper. I suppose it was too big a mouthful to say "multirule QC as described by Westgard, Hunt, Barry, and Groth." I'm not sure how this phenomenon got started, but it happened and now we're stuck with it. The problem is that there is no way to know exactly what someone means by "Westgard Rules." Many manufacturers claim to have implemented "Westgard Rules," but there's no way to know what they've done unless you test the performance of their QC software.

9. Misuse of "Westgard Rules" as a specific set of rules, namely 1_2s/2_2s/R_4s/4_1s/10_x.

The original paper in Clinical Chemistry was intended as an example of the application of multirule QC, not a recommendation for a specific combination of control rules. The idea was to combine individual control rules to minimize false rejections and maximize error detection. Thus, we used (and still use) the broader term multirule QC to describe this type of QC. Even in that paper, the need to adapt the rules to the number of control measurements available was described.

8. Misuse of the 1_2s "warning rule" in computer implementations.

When multirule QC is implemented by a computer program, you don't need a warning rule. The reason for recommending a warning rule was that at that time - over 20 years ago - most QC was plotted by hand and interpreted visually. The 1_2s warning rule saved time in inspecting data manually; if there wasn't a 1_2s violation, you could skip those data points. With computer implementation, there is no need to start with a warning rule because the data inspection can be fast, complete, and effortless. The computer doesn't need to be "warned" - it has more than enough resources to check every point thoroughly.
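To make the point concrete, here is a minimal Python sketch - my own illustration, not any vendor's implementation - of software applying the common rejection rules directly to a window of control results expressed as z-scores (z = (result - mean) / SD), with no preliminary 1_2s screen.

```python
# Minimal sketch (illustrative only): apply common rejection rules directly,
# with no 1_2s "warning" step. Assumes control results have been converted
# to z-scores: z = (observed - target mean) / target SD.

def violates_1_3s(z):
    """1_3s: any single control observation beyond +/-3 SD."""
    return any(abs(x) > 3 for x in z)

def violates_2_2s(z):
    """2_2s: two consecutive observations beyond 2 SD on the same side."""
    return any((a > 2 and b > 2) or (a < -2 and b < -2) for a, b in zip(z, z[1:]))

def violates_R_4s(z):
    """R_4s: one observation above +2 SD and another below -2 SD
    (best applied within a single run - see point 6)."""
    return max(z) > 2 and min(z) < -2

def violates_4_1s(z):
    """4_1s: four consecutive observations beyond 1 SD on the same side."""
    return any(all(x > 1 for x in z[i:i+4]) or all(x < -1 for x in z[i:i+4])
               for i in range(len(z) - 3))

def violates_10_x(z):
    """10_x: ten consecutive observations on the same side of the mean."""
    return any(all(x > 0 for x in z[i:i+10]) or all(x < 0 for x in z[i:i+10])
               for i in range(len(z) - 9))

# The computer checks every rejection rule on the relevant data window;
# it never needs to be "warned" first.
recent_z = [0.4, -1.2, 2.3, 2.6, 0.1]   # hypothetical control z-scores
rules = {"1_3s": violates_1_3s, "2_2s": violates_2_2s, "R_4s": violates_R_4s,
         "4_1s": violates_4_1s, "10_x": violates_10_x}
flags = [name for name, check in rules.items() if check(recent_z)]
print("Rule violations:", flags or "none")
```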

7. "In-excuse" for using some inappropriate single-rules alone.

The 2_2s control rule by itself seems to be a favorite in some laboratories, if not by design then by default. Maybe the common practice of repeating controls when one exceeds a 2s limit has led to the routine use of a 2_2s rule by itself. However, this is really not a very good idea. That rule is responsive only to systematic error and it's not particularly sensitive by itself.

We use the term "in-excuse" because what's happening is that a poor choice of control rule is giving the laboratory an excuse to think that all the results are okay. The manufacturer allows the customer to choose a control rule that shouldn't be used by itself. The customer chooses that control rule out of some belief, and possibly some experience, that there are fewer out-of-control flags and more in-control results when that particular control rule is used. Since the 2_2s single rule undoubtedly has less false rejection than the 1_2s control rule, the method has fewer false rejects (which is good). In many cases, however, the control rule also isn't sensitive enough to detect the medically significant errors that it should. So the control rule chosen hardly sounds any alarms at all, giving the customer a false sense of security that the QC must be great because problems are so rare.
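As a quick back-of-the-envelope check of how quiet the 2_2s rule is on its own, here is a short Python sketch - my own arithmetic, assuming Gaussian control data and two controls per run - comparing the false-rejection rates of 1_2s and 2_2s.

```python
# Rough false-rejection comparison under standard Gaussian assumptions,
# for N = 2 controls per run (illustrative arithmetic only).
from math import erf, sqrt

def tail(z):
    """One-sided tail probability of a standard Gaussian beyond +z."""
    return 0.5 * (1 - erf(z / sqrt(2)))

p = tail(2.0)                         # ~0.023 chance per control, per side
pfr_1_2s = 1 - (1 - 2 * p) ** 2       # any of the 2 controls outside +/-2 SD
pfr_2_2s = 2 * p ** 2                 # both controls beyond 2 SD on the same side

print("False rejection, 1_2s (N=2): %.3f" % pfr_1_2s)   # ~0.09
print("False rejection, 2_2s (N=2): %.4f" % pfr_2_2s)   # ~0.001
```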

Both the manufacturer and the customer are partly to blame. The customer is picking a control rule without real knowledge of how that control rule works. The manufacturer is allowing the customer to pick control rules that they shouldn't because the customer is always right. This co-dependency enables people to do bad QC. And it's inexcusable, since both the manufacturer and customer should know better.

6. Misuse of the R_4s rule across runs.

The intention of the R_4s rule is to detect random error. When used across runs, systematic errors may be detected and misinterpreted as random errors. It is better to catch those systematic errors with the 2_2s or 4_1s rules to aid in trouble-shooting. Here again this is an "in-excuse" - a poor use of the control rule, applying the rules without any explanation or understanding of why they are combined the way they are. There's a deep logic to the combinations: certain rules are good at detecting random errors, while others are good at detecting systematic errors.

5. "In-excuse" for illogical combinations of control rules.

Multirule combinations should be built from the outside in. For example, when 2 control materials are analyzed, start with a single rule with wide limits such as 1_3s, then add a 2_2s and R_4s, followed by a 4_1s, and finally by a mean rule, such as 8_x, 10_x, or 12_x, depending on whether you want to "look back" at the control data in the previous 3, 4, or 5 runs. When analyzing 3 control materials once per run, the rules fit better if you use 2of3_2s, 3_1s, and an appropriate mean rule, such as 6_x, 9_x, or 12_x, to look back at the previous 1, 2, or 3 runs. With 3 materials, it makes no sense to use an 8_x or 10_x control rule to look back at control results in the previous 1.7 or 2.3 runs.
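To illustrate the arithmetic behind those pairings, here is a small Python sketch - the helper function is my own, hypothetical - showing why a mean rule should span the current run plus a whole number of previous runs.

```python
# Illustrative arithmetic: a mean (look-back) rule should use a number of
# observations that is a whole multiple of the controls analyzed per run.

def mean_rule_n(materials_per_run, previous_runs):
    """Observations needed to cover the current run plus `previous_runs`
    complete earlier runs."""
    return materials_per_run * (previous_runs + 1)

# Two materials per run: 8_x, 10_x, or 12_x look back 3, 4, or 5 runs.
print([mean_rule_n(2, r) for r in (3, 4, 5)])    # [8, 10, 12]

# Three materials per run: 6_x, 9_x, or 12_x look back 1, 2, or 3 runs.
print([mean_rule_n(3, r) for r in (1, 2, 3)])    # [6, 9, 12]

# An 8_x or 10_x rule with 3 materials would span the current run plus
# a fractional number of previous runs - an awkward, illogical fit.
print(round(8 / 3 - 1, 2), round(10 / 3 - 1, 2))  # 1.67 2.33
```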

4. Misuse of combinations of control rules whose error detection capabilities are not known.

Sure, you can combine any individual rules to make a multirule QC procedure, but only certain combinations have been studied and have known performance characteristics. Just because a computer program lets you pick and choose rules at random doesn't mean it's a good idea. Making up new combinations of rules is like making up new methods: there is a responsibility to document the performance of the new rules before you use them. This means performing mathematical calculations or simulation studies to determine the power curves and the probabilities for false rejection and error detection. Unless you can do that, you shouldn't make up new combinations of rules. The solution in QC software is to select from a list of defined multirule procedures whose power curves have been documented, rather than select individual rules to make up the multirule procedure. Given a choice between a multirule combination whose performance is known and another whose performance is unknown, you should select the one with documented performance characteristics.

  • See some of the power functions for known combinations of "Westgard Rules"
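For anyone tempted to invent a new combination anyway, here is a minimal Python sketch - my own illustration, not published power-function data - of the kind of simulation study that is required, estimating the probability of false rejection (Pfr) and of error detection (Ped) for a candidate combination applied within a run of 2 controls.

```python
# Monte Carlo sketch (illustrative only): estimate Pfr and Ped for a candidate
# combination (1_3s, 2_2s, R_4s applied within a run of N = 2 controls).
import random

def run_rejected(z):
    one_3s = any(abs(x) > 3 for x in z)
    two_2s = (z[0] > 2 and z[1] > 2) or (z[0] < -2 and z[1] < -2)
    r_4s = max(z) > 2 and min(z) < -2
    return one_3s or two_2s or r_4s

def rejection_rate(shift_sd=0.0, n_runs=100_000):
    """Fraction of simulated runs rejected when control results come from a
    Gaussian shifted by `shift_sd` SDs (shift_sd = 0 estimates Pfr)."""
    hits = sum(run_rejected([random.gauss(shift_sd, 1) for _ in range(2)])
               for _ in range(n_runs))
    return hits / n_runs

print("Pfr (no error):   %.4f" % rejection_rate(0.0))
print("Ped (2 SD shift): %.4f" % rejection_rate(2.0))
```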

3. "In-excuse" for not defining the details of rule implementation.

Multirule QC is actually simpler to do manually than by computer! The reason is that there are many possible rule applications within- and across-materials and within- and across-runs that must be explicitly defined in QC software. In manual applications, you can decide on the best or most appropriate way to inspect the data right when you're looking at the charts. In many software applications, it is not clear whether control rules are being applied within- or across-runs, and/or within- or across-materials. And it is almost impossible to find a statement of how a particular software application implements the within/across rules.
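As one way to remove that ambiguity, here is a hypothetical configuration sketch in Python - the names and the particular scope choices shown are my own illustration, not any vendor's API - of the kind of explicit within/across declaration that QC software should state for every rule.

```python
# Hypothetical configuration sketch: every rule's scope should be stated
# explicitly and documented. The choices below are illustrative; the one this
# article insists on is that R_4s be applied within a run (see point 6).
from dataclasses import dataclass

@dataclass
class RuleScope:
    rule: str
    across_runs: bool        # False = evaluated only within the current run
    across_materials: bool   # False = evaluated separately for each material

multirule_config = [
    RuleScope("1_3s", across_runs=False, across_materials=True),
    RuleScope("2_2s", across_runs=True,  across_materials=True),
    RuleScope("R_4s", across_runs=False, across_materials=True),
    RuleScope("4_1s", across_runs=True,  across_materials=True),
    RuleScope("10_x", across_runs=True,  across_materials=True),
]

for scope in multirule_config:
    print(scope)
```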

2. Misuse of "Westgard Rules" as a magic bullet.

Just because you use "Westgard Rules" doesn't mean that you're doing the right QC. The most important detail about doing QC for a test doesn't concern the control rule used - the critical parameters are the quality required for the test and the bias and CV observed for the method. The control rule chosen flows directly from those details. And in some dire cases, when method performance is bad and CV and bias are high, no amount of "Westgard Rules" can help you. What you really need is a new method.

1. Misuse of Westgard Rules when simpler QC will do.

People are often surprised when we tell them that it may not be necessary to use multirule QC. You may not realize it, but in the labs where I work, not every test is QC'd with the "Westgard Rules." In fact, we only use "Westgard Rules" on those tests that are really hard to QC. Whenever possible, if a single control rule can provide the desired error detection, we'll do it that way because it's simpler and easier.

The across-the-board implementation of "Westgard Rules" on all instruments and all tests is not the most cost-effective way to manage the quality of the tests in your laboratory. It's important to optimize the QC for individual instruments and preferably for individual tests on those instruments. This can be done by following our Quality-Planning process that depends on the quality required for the test and the imprecision and inaccuracy observed for the method. You need to define the quality needed for each test to determine the right QC to implement.
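As a rough illustration of where that planning starts, here is a short Python sketch - the numbers are hypothetical - computing the sigma metric from the quality requirement (allowable total error, TEa), the observed bias, and the observed CV. High-sigma tests can be controlled with simple single rules and low N; low-sigma tests are the ones that need multirule QC, more controls, or a better method.

```python
# Illustrative quality-planning calculation: the sigma metric summarizes how
# much room the method's bias and CV leave inside the quality requirement.
# All quantities are in percent here; the example numbers are hypothetical.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    return (tea_pct - abs(bias_pct)) / cv_pct

print("Sigma = %.1f" % sigma_metric(10.0, 1.0, 1.5))  # 6.0 -> a simple single rule may suffice
print("Sigma = %.1f" % sigma_metric(10.0, 3.0, 2.5))  # 2.8 -> multirule QC or a better method
```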

Conclusion

So here you have a tally of "bad practices" that we've encountered when people, laboratories, and manufacturers implement the "Westgard Rules." I've always been a little afraid of pointing out the flaws in many implementations because I might discourage people from using multirule QC in their laboratory. If you've reached this paragraph, you may have the impression that "Westgard Rules" are so complicated you don't want to use them at all. But I want to assure you that it's really not that hard. Everything you need to know to be able to properly use the "Westgard Rules" is available for free (and on this website, in fact). Stay tuned to these pages for another article on "best practices" for "Westgard Rules" - as well as a way to tell if you're doing things correctly.