Tools, Technologies and Training for Healthcare Laboratories

$aving the Cost$ of Poor Quality

Contrary to popular business belief, there are ways to improve quality and save money at the same time. Six Sigma is founded on that core philosophy. But where are the savings in the laboratory? Here's a hard fact: poor QC practices are wasting money in your laboratory RIGHT NOW. Dr. Westgard shows how to identify, quantify, and eliminate the wasted time, effort, and resources spent on repeat runs, repeated controls, and poor patient care. (Preview)


Business and industry have demonstrated significant cost savings as a result of implementing Six Sigma Quality Management. Can you expect similar savings in healthcare applications? We think so and here’s why!


The implementation of Six Sigma will actually save you money that is already being spent due to poor quality. This is money that is being wasted and therefore can be reclaimed. While it’s difficult to measure these savings, it’s easy to illustrate the magnitude of the savings that might be made.

Within a laboratory, defective quality control procedures are already costing you money, particularly if you’re using the “tried and true” Levey-Jennings chart with 2 SD control limits. Careful design of QC procedures will reduce the false rejection of analytical runs which otherwise waste personnel time and laboratory materials and resources.
Defective test results also cost your organization money when laboratory tests are misinterpreted and misused. Appropriate QC will assure the detection of medically important errors that would otherwise cause improper diagnosis and treatment of patients, wasting the time of physicians and nurses and consuming resources of the organization.

Quality Costs – the costs of good quality and poor quality

These savings are understandable if you consider all the costs associated with quality. These costs are described by the industrial model for “quality-costs”[1], which includes the costs of good quality (preventive-costs and appraisal-costs) and the costs of poor quality (internal and external failure-costs), as shown below. In industry, the costs of good quality are the planning and design of the process, the training of the line workers, and the time and effort in measuring and monitoring the quality of the product. The costs of poor quality are the rework and waste of a production process – doing things over to get the product right, or scrapping the product altogether.

For a laboratory, the costs of good quality include the cost of planning for quality (a preventive-cost) and the cost of doing QC (an appraisal-cost). The cost of doing QC is probably the only cost the laboratory keeps track of. The costs of poor quality include the wasted time, effort, and materials due to repeat analysis of controls, new controls, and patients (internal failure-costs) and the cost of repeat orders and additional test orders by the physician to confirm laboratory results, as well as the costs associated with the wrong diagnosis and treatment due to erroneous test results (external failure-costs). These failure-costs would be high if they were measured in laboratories and hospitals, but they usually aren’t.

Careful planning of QC minimizes internal failure-costs by reducing the false rejections of analytical runs and reduces external failure-costs by assuring that medically important errors are detected. We adopted the industrial quality-costs model in the mid-1980s [2] and that concept is embedded in our thinking and also in the quality-planning process and tools that have been developed since then.

The Problem with High False Rejection

Poor QC is costing you money right now. It’s a little-known and oft-denied fact that most laboratories in the US and around the world are performing poor QC. I’ve heard too many personal anecdotes and read too many studies to say otherwise. I’m sorry to be the bearer of bad news, but someone has to speak out.

Before you protest, “not in my laboratory,” answer these questions honestly: Does your laboratory still use 2 SD control limits? Does your laboratory repeat controls again and again, analyze new controls, and then re-analyze the new controls until they’re finally “in”? If so, most likely you have a false rejection problem that is costing you time and money. It’s also a dangerous practice because analysts get accustomed to false alarms and no longer respond to true alarms, which leads to the reporting of erroneous test results. These actions are a result of poor quality planning, are costing you time and money, and are most likely giving your laboratory a bad reputation.

We must realize that many of the repeat runs, repeat controls, and new controls that we use to replace “bad” controls are the result of false rejection. The reason we don’t immediately reject the run, but take these other actions first, is that we know instinctively there really wasn’t anything wrong with the run in the first place. There isn’t a problem with the samples – there’s a problem with the control rules being used to detect problems.

Here’s a table with some hopefully eye-opening numbers about false rejection:


Control Rule       Number of Controls per Run (N)
                     1      2      3      4
12s                  5%     9%    14%    18%
12.5s                1%     2%     3%     4%
13s                  0%     0%     1%     1%
13.5s                0%     0%     0%     0%
13s/22s/R4s          0%     1%     2%     2%

Notice anything startling about the first row of this table? The 12s rule (i.e., 2 SD control limits) has a terribly high level of false rejections. It’s no wonder this rule causes so many rejections and repeat runs. If you’re running 2 controls and everything is working perfectly, you should be getting an out-of-control flag almost once in every ten runs (9%); with 3 controls, it’s one out of every 6 or 7 runs; with 4 controls, it’s one out of every 5 runs. If there’s actually a problem with the method, there will be even more flags, but who can tell a real rejection from a false one?
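Under the simplifying assumption that the N control measurements in a run are independent and Gaussian, the 12s false-rejection rates above can be approximated in a few lines. This is a sketch, not the exact published power-function values: the function name pfr_12s is ours, and the independence approximation gives roughly 5%, 9%, 13%, and 17%, close to (but not identical with) the table’s 5%, 9%, 14%, 18%.

```python
# Approximate false-rejection probability for the 12s rule:
# a run is flagged if ANY of the N controls falls outside +/- 2 SD.
# For a Gaussian distribution, P(one control outside 2 SD) ~ 0.0455.

P_SINGLE = 0.0455  # two-tailed tail area beyond +/- 2 SD

def pfr_12s(n_controls: int, p: float = P_SINGLE) -> float:
    """Probability that at least one of n independent controls exceeds 2 SD."""
    return 1.0 - (1.0 - p) ** n_controls

for n in (1, 2, 3, 4):
    print(f"N={n}: {pfr_12s(n):.1%}")
```

The same formula with a tighter rule (e.g., 3 SD limits, p ≈ 0.003) shows why the 13s rows of the table round to 0–1%.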


For those who aren’t yet convinced that this is a serious problem, let’s put the same condition into another context. If your laboratory has a fire alarm that goes off once a week even though there’s no fire, would it save the laboratory time and money to get a better fire detector – one that only goes off when there’s actually a real fire in the building? Obviously, a more reliable alarm or detector would save a lot of time if we’ve been evacuating the laboratory once a week. If we aren’t evacuating the laboratory, then we’re running the risk of a terribly high cost if a real fire occurs.

The Cost of False Rejections

If you’re still not convinced, then maybe some actual figures in $$$$ will be persuasive.

Look at the next table, where the yearly costs of repeat runs have been calculated. The first column shows how many runs (from 1 to 4) are performed per day. The second column shows the total runs per year, assuming the laboratory operates all 365 days per year. The probability of false rejection is given in the 3rd column (0.09 for 2 control measurements per run and 0.14 for 3 per run) and is multiplied by the number of runs per year to give the extra runs needed, as shown in column 4. The remaining columns show the failure-costs or waste due to false rejections for 1 test or method, 5 methods, and 20 methods. The cost of analysis per sample is assumed to be $0.50. Sections A and B estimate the costs of repeating all patients and controls, assuming there is an average of 20 patients per run. Sections C and D show the costs if only the controls are repeated.

Quality-Costs I: Internal Failure-Costs (Waste & Rework)

Runs/day  Runs/year   Pfr   Extra runs/yr   Cost/run    1 method     5 methods    20 methods

A. Cost of repeating a run of 20 specimens and 2 controls when the cost of each is $0.50
   1         365      0.09      32.85       $11.00      $   361.35   $ 1,806.75   $  7,227.00
   2         730      0.09      65.70       $11.00      $   722.70   $ 3,613.50   $ 14,454.00
   3        1095      0.09      98.55       $11.00      $ 1,084.05   $ 5,420.25   $ 21,681.00
   4        1460      0.09     131.40       $11.00      $ 1,445.40   $ 7,227.00   $ 28,908.00

B. Cost of repeating a run of 20 specimens and 3 controls when the cost of each is $0.50
   1         365      0.14      51.10       $11.50      $   587.65   $ 2,938.25   $ 11,753.00
   2         730      0.14     102.20       $11.50      $ 1,175.30   $ 5,876.50   $ 23,506.00
   3        1095      0.14     153.30       $11.50      $ 1,762.95   $ 8,814.75   $ 35,259.00
   4        1460      0.14     204.40       $11.50      $ 2,350.60   $11,753.00   $ 47,012.00

C. Cost of repeating 2 controls only when each control costs $0.50
   1         365      0.09      32.85       $ 1.00      $    32.85   $   164.25   $    657.00
   2         730      0.09      65.70       $ 1.00      $    65.70   $   328.50   $  1,314.00
   3        1095      0.09      98.55       $ 1.00      $    98.55   $   492.75   $  1,971.00
   4        1460      0.09     131.40       $ 1.00      $   131.40   $   657.00   $  2,628.00

D. Cost of repeating 3 controls only when each control costs $0.50
   1         365      0.14      51.10       $ 1.50      $    76.65   $   383.25   $  1,533.00
   2         730      0.14     102.20       $ 1.50      $   153.30   $   766.50   $  3,066.00
   3        1095      0.14     153.30       $ 1.50      $   229.95   $ 1,149.75   $  4,599.00
   4        1460      0.14     204.40       $ 1.50      $   306.60   $ 1,533.00   $  6,132.00
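Every entry in the table follows the same arithmetic: extra runs per year = runs per year × Pfr, and yearly cost = extra runs × cost per run × number of methods. A minimal sketch (the function name is ours) reproduces the table’s figures:

```python
def yearly_false_rejection_cost(runs_per_day: int,
                                p_false_reject: float,
                                cost_per_run: float,
                                n_methods: int = 1,
                                days_per_year: int = 365) -> float:
    """Yearly waste from repeating falsely rejected runs."""
    runs_per_year = runs_per_day * days_per_year
    extra_runs = runs_per_year * p_false_reject       # e.g. 365 * 0.09 = 32.85
    return extra_runs * cost_per_run * n_methods

# Section A, row 1: 1 run/day, Pfr = 0.09, and a repeated run of
# 20 patients + 2 controls at $0.50 each = $11.00 per repeat.
print(yearly_false_rejection_cost(1, 0.09, 11.00))       # 1 method
print(yearly_false_rejection_cost(4, 0.14, 11.50, 20))   # section B, 20 methods
```

Changing cost_per_run to $1.00 or $1.50 (controls only) reproduces sections C and D.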

Let’s assume that the laboratory adheres to the QC procedure religiously, i.e., both patients and controls are repeated whenever there’s a rejection. For a small laboratory that performs only a single test or method once a day, the cost is $361 per year. If the laboratory performed 5 different tests, the cost would be $1,806; if there were 2 runs per day, the costs would increase to $3,613. For a laboratory that performs 20 different tests, or for an instrument that performs 20 different tests, the cost per year is $7,227 for a single run per day, $14,454 if there are both morning and afternoon runs, and $28,908 if the laboratory operates around the clock, performing a third run on second shift and a fourth run on third shift. For 3 controls per run instead of 2, the cost would be $47,012.

If you wondered why many laboratories don’t repeat the patient samples when a run is out-of-control, now you know – the cost is too high. They keep the costs down by repeating only the controls, as shown in sections C and D of the table. But even those costs run into the thousands.

It would be better, of course, to utilize a QC procedure that has a lower false rejection rate. A multirule procedure with Ns of 2 and 3 would have only 1-2% false rejections, rather than the 9-14% for 2 SD control limits. You can see that a lot of money is wasted by the common practice of using 2 SD control limits. At the same time, you can also see how the false rejection problem can be reduced by selecting other QC procedures, such as a 13s, 12.5s, or multirule procedure. You can do the math and figure out the savings.
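To do that math, subtract the failure-costs under the two rules. A short sketch (our arithmetic, using the table’s assumptions of $11.00 per repeated run of 20 patients plus 2 controls):

```python
def yearly_cost(runs_per_day, pfr, cost_per_run, n_methods, days=365):
    """Yearly waste: runs/year x false-rejection rate x cost x methods."""
    return runs_per_day * days * pfr * cost_per_run * n_methods

# A 20-method instrument with morning and afternoon runs (2 runs/day),
# repeating the full run (20 patients + 2 controls at $0.50 = $11.00).
cost_12s       = yearly_cost(2, 0.09, 11.00, 20)  # 2 SD limits, Pfr ~ 9%
cost_multirule = yearly_cost(2, 0.01, 11.00, 20)  # multirule, Pfr ~ 1%
savings = cost_12s - cost_multirule
print(f"12s: ${cost_12s:,.2f}  multirule: ${cost_multirule:,.2f}  "
      f"savings: ${savings:,.2f}")
```

Under these assumptions, switching rules recovers most of the $14,454 wasted each year on that one instrument.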

In summary, the costs of false rejections are a big problem in many laboratories. This cost can be reduced by selecting QC procedures that have a low probability for false rejection. Any laboratory that uses 2 SD control limits will experience false rejections; if not, they’re somehow widening their control limits, perhaps by using the manufacturers’ limits, bottle values that include lab-to-lab variation, supposedly clinical limits, or even drawing the CLIA total error criteria directly on control charts – all bad practices.


Want to read more of this article?

We invite you to purchase the Six Sigma QC Design and Control manual, Second Edition, available at our online store. You can also download the Table of Contents and a Sample chapter.