Tools, Technologies and Training for Healthcare Laboratories

Connecting the Dots to Find Common Causes

July 2004

As laboratory scientists, we're taught to connect the dots between control points to look for patterns in the data and underlying causes. As observers of world events, we can also connect the dots and study the patterns to understand the underlying causes. Those events can help us understand the difficulties and importance of quality control practices in the laboratory.

[Webmaster's note: If you find it offensive when politics is mixed with our laboratory world, you may prefer to skip this essay.]

Is your QC trying to tell you something?

It's not always so obvious when something is going wrong in the laboratory

There is a lesson to be learned about common and special causes of variation, both in quality control and in current world events. Connecting the dots, whether between control points or between events, reveals patterns that point to underlying causes. In the laboratory, that means separating common causes of variation from assignable causes, which are the problems that need to be fixed.

In the News

I was traveling much of the last two months, which means I had a lot of time to read the newspapers. It seemed to me there were many articles that focused on “connecting the dots” in an attempt to make sense of current events.

For example, the 9/11 Commission has been in the news recently to discuss its findings about the relationship between 9/11 and Iraq, specifically Saddam Hussein’s possible role. There reportedly was a meeting between members of Hussein’s government and members of al-Qaeda. Does that mean that Hussein shares responsibility for the 9/11 attack? The 9/11 Commission has concluded there was no causal relationship between Iraq and the 9/11 attack. However, US government leaders still maintain there was a “connection.” One wonders whether the fact that 15 of the 19 hijackers were Saudi nationals might be a stronger connection, but evidently not!

The events at Abu Ghraib prison are another example currently in the news. On June 23, 2004, USA Today reported “Rumsfeld OK’d harsh treatment”, citing a memo written on December 2, 2002 that apparently approved new interrogation procedures at the US Naval Base in Guantanamo Bay, Cuba (often referred to as Gitmo). Major General Geoffrey Miller, who supervised the interrogations at Gitmo, visited Iraq in the summer of 2003 with a group of 17 of his staff and spent 10 days inspecting and assessing practices at Abu Ghraib. Miller issued a report in September 2003, after which several interrogation teams from Gitmo were sent to Iraq to train personnel at Abu Ghraib. By October 12, 2003, the Army was moving ahead to implement Miller’s recommendations. By late October, prisoner abuse was being documented in photographs taken by the guards. According to government leaders, these connections are NOT causal. They claim that the prisoner abuse was the result of a few bad apples, not a causal result of management policies and procedures, lack of adequate staffing, or lack of adequate training for the reserve troops.

Common vs Assignable Causes

“A few bad apples” appears to be the non-technical term for “outliers” that fall beyond the expected normal behavior or normal variation. The term shows up so often these days that I wonder if it is being taught in one-minute manager training on root-cause analysis. In quality control language, normal variation is attributed to common causes. Non-normal or uncommon variation is attributed to an underlying event that can be identified; these are termed special or assignable causes.

Common causes are those factors and variables that give rise to normal variation, whereas special causes lead to abnormal or unusual variation. Common causes often encompass many small random variations that individually are difficult to isolate and control, but together result in an expected and often predictable range of variation. In analytical processes, slight variations in the volume of sample, volume of reagents, temperature control, timing, etc., give rise to the inherent stable imprecision that is a characteristic of method performance.

Assignable causes, on the other hand, give rise to uncommon or unusual variation which, if properly identified, can then be eliminated. Eliminating assignable causes, or at least minimizing their number, should actually be the first step in establishing a state of statistical control, where the range of deviations becomes predictable. With laboratory instruments, that is the manufacturer’s responsibility. With laboratory testing processes, that is management’s responsibility. In routine production in a laboratory, the job of the analyst is to identify assignable causes when they occur, make corrections when necessary, and eliminate the underlying problems whenever possible. That’s what statistical QC is for and why it is so important!

The ability to distinguish between common causes and assignable causes may depend on the skills of the analyst. We all know that laboratory analysts with many years of experience can glance at a control chart and immediately tell whether or not there is a problem. When asked how they do it, they will often talk about the appearance or “pattern” in the data. The difficulty for new analysts is to gain enough experience to competently evaluate control data. That is also a challenge for laboratory managers who must be sure that all the analysts have the skills necessary to identify problems.

Control rules should separate common vs assignable causes

That’s why the definition of control rules is important. The control rules define the patterns that we are looking for in the data. Given the normal behavior or variation of a process, we try to identify results that exceed the expected range, as well as unusual patterns within the expected range. For example, here’s what the following rules are looking for:

13s - Is there any result that exceeds the expected range of variation? There are only about 3 chances in 1000 of observing a result outside limits set at the mean +3s and the mean -3s.
22s - Is there a pattern where two consecutive results exceed the same 2s control limit? This rule requires two in a row that are both high or two in a row that are both low, which could be caused by a systematic shift in method results (an accuracy problem). There are only about 4 chances in 1000 of observing this condition due to the normal variation of the process.
R4s - Is there a pattern where the range of results is too wide, one result exceeding the mean + 2s and another result exceeding the mean -2s control limit? If so, the random error of the method is larger than expected (a precision problem).
41s - Is there a pattern where four consecutive control results exceed the same mean +1s or the same mean -1s control limit? If so, it is likely there is a systematic error or accuracy problem.
10x - Is there a pattern where ten consecutive control results are all above the mean or all below the mean? If so, it is likely that there is a systematic error or accuracy problem.
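The patterns above can be sketched as a simple check over a sequence of control z-scores. This is a minimal, illustrative Python sketch of my own; the function and the hyphenated rule names are my shorthand, and real implementations apply some rules (such as R4s) within-run and across control levels rather than along a single sequence:

```python
def westgard_violations(z):
    """Check a sequence of control z-scores (each result expressed as
    its deviation from the mean, in SD units) against the multirule
    criteria described above. Returns the names of any violated rules.
    Illustrative sketch only."""
    v = set()
    # 1-3s: any single result beyond a 3s limit
    if any(abs(x) > 3 for x in z):
        v.add("1-3s")
    # 2-2s: two consecutive results beyond the same 2s limit
    for a, b in zip(z, z[1:]):
        if (a > 2 and b > 2) or (a < -2 and b < -2):
            v.add("2-2s")
    # R-4s: one result above +2s and another below -2s
    if z and max(z) > 2 and min(z) < -2:
        v.add("R-4s")
    # 4-1s: four consecutive results beyond the same 1s limit
    for i in range(len(z) - 3):
        w = z[i:i + 4]
        if all(x > 1 for x in w) or all(x < -1 for x in w):
            v.add("4-1s")
    # 10-x: ten consecutive results on the same side of the mean
    for i in range(len(z) - 9):
        w = z[i:i + 10]
        if all(x > 0 for x in w) or all(x < 0 for x in w):
            v.add("10-x")
    return v
```

For example, `westgard_violations([0.5, 2.3, 2.4])` flags only the 22s rule: two consecutive results exceed the same +2s limit, but neither exceeds 3s.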

Note that as the control limit gets closer to the mean of the normal variation, the “string” of consecutive observations must increase to correctly identify an unusual pattern in the midst of the expected usual variation. A long string of related events, even in the midst of normal variation, can reveal that something unusual is going on and there is really an underlying cause that needs to be identified and addressed.

Corruption of QC

If the wrong control rule is used, there can be a real danger in declaring an assignable cause even though the variation is normal. Here’s where the commonly used 12s control rule is itself the problem. You probably thought I forgot to include it in the list above; it isn’t there because it won’t distinguish assignable causes from common causes.

We all know there’s a 5% chance of observing 1 measurement outside 2s control limits even when everything is working perfectly. That’s the “one in twenty” we were taught in school. That is really a “false alarm” or a “false rejection” because it is due to the common, expected, and usual variation of the process. What many analysts don’t know is that the chance of a false rejection is about 10% when two levels of controls are analyzed, which is the minimum required by CLIA (and about 15% if three levels of controls are analyzed).
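Those false-rejection percentages come from simple probability: if each of N independent controls carries a 5% chance of a false 2s flag, the chance that at least one flags is 1 - (0.95)^N. A quick sketch (the function name is my own):

```python
def false_rejection_rate(n_controls, p_single=0.05):
    """Probability that at least one of n_controls independent controls
    exceeds its 2s limits purely by chance, when each individual control
    has a p_single chance of a false flag."""
    return 1 - (1 - p_single) ** n_controls

for n in (1, 2, 3):
    print(f"{n} control(s): {false_rejection_rate(n):.2%}")
# prints 5.00%, 9.75%, 14.26% -- roughly the 5%, 10%, and 15% cited above
```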

A high level of false rejections causes so much confusion that analysts don’t know what to do when a run is out-of-control! This leads to the common practice of repeating the control until we get it “in.” And this practice totally corrupts the laboratory application of QC because we no longer make any attempt to identify assignable causes and correct or eliminate the problems. See our earlier discussion “Repeated, Repeated, Got Lucky” for more detail about this problem and its remedy.

The Bottom Line

Management is responsible for establishing proper control procedures that provide safe test results for our patients. QC tools and training are necessary for bench level operators to perform their jobs as intended, i.e., to produce the quality of results required for patient care.

If we have control problems today, that means we have management problems. If we want to change those control problems, that means we have to change management practices or the management people who are responsible. The bad apples aren’t those people in the front line who are doing their duty and performing their jobs as expected. The bad apples are in management, often top management.

The bottom line is that top management is responsible for quality problems. And they should be held accountable. That’s why it’s a good sign that the CEO of Maryland General Hospital was held accountable for their recent laboratory problems. I’ll leave it up to you to decide who is accountable for the Abu Ghraib problems.


James O. Westgard, PhD, is a professor of pathology and laboratory medicine at the University of Wisconsin Medical School, Madison. He also is president of Westgard QC, Inc., (Madison, Wis.) which provides tools, technology, and training for laboratory quality management.