Tools, Technologies and Training for Healthcare Laboratories

Error Rates in the Total Testing Process

For over a decade, the prevailing wisdom has been that analytical errors rarely happen and that pre-analytical and post-analytical errors are more important. A 2011 study of 5 years of laboratory data calls this emphasis into question. Perhaps some errors are not more equal than others.

Sten Westgard, MS
April 2011

Any error in any phase of the total testing process is bad - no one disagrees with this principle. But there has been an emphasis on fixing pre-analytical and post-analytical errors first, and a tendency to diminish the importance of analytical errors. This focus is understandable for several reasons. Pre-analytical and post-analytical errors tend to be more palpable - when you mislabel or lose a specimen, for example, there is a tangible, physical problem that you can see and immediately comprehend. Pre-analytical and post-analytical errors also tend to be easier to count than analytical errors - it's not hard to keep track of which specimens were mislabeled, missing, or mishandled.

On the other hand, it's hard to track analytical error rates. Even when there is an analytical error in a test result, it's an error that you have to recognize mentally but don't see physically (unless you detect something in the abstraction of a control chart). The test result with an analytical error, with the exception of gross outliers, is still just a number on the report or screen. The erroneous number looks just like a normal number. The other difficulty with counting analytical errors is that you have to define the allowable total error first. So counting analytical errors requires more initial work - and for laboratories that don't define quality requirements, it becomes hard to determine whether their analytical results are in error. In many of the studies that counted laboratory errors and error rates, quality requirements were not defined, nor were the method imprecision and/or method bias measured, so it wasn't possible to estimate the analytical error rate. Thus, it's not surprising that many past studies found pre-analytical and post-analytical error rates were the most important - they weren't really looking for, or appropriately counting, analytical error rates.
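
To make that initial work concrete, here is a minimal sketch of how an analytical error rate could be estimated once a quality requirement is defined - assuming normally distributed results and purely hypothetical values for the allowable total error (TEa), bias, and CV (none of these numbers come from the study):

```python
from scipy.stats import norm

def analytical_defect_rate(tea, bias, cv):
    """Estimate the fraction of results whose total error exceeds the
    allowable total error (TEa), assuming results are normally
    distributed around a bias with imprecision CV (all in % units)."""
    # Probability of a result falling outside +/- TEa of the true value
    upper = 1 - norm.cdf((tea - bias) / cv)
    lower = norm.cdf((-tea - bias) / cv)
    return upper + lower

# Hypothetical quality requirement and method performance:
# TEa = 10%, bias = 2%, CV = 3% (illustrative numbers only)
rate = analytical_defect_rate(10.0, 2.0, 3.0)
print(f"Estimated analytical error rate: {rate:.2%}")  # ~0.39%
# The corresponding analytical Sigma-metric, (TEa - bias) / CV, is 2.67.
```

Without the TEa, bias, and CV in hand, there is simply no way to perform this calculation - which is why studies that skip that step cannot report a meaningful analytical error rate.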

A recent study in Clinical Chemistry and Laboratory Medicine has some interesting findings about laboratory error rates:

Quality Indicators and specifications for key analytical-extraanalytical processes in the laboratory. Five years' experience using the Six Sigma concept. Antonia Llopis, Gloria Trujillo, Isabel Llovet, Ester Tarres, Merce Ibarz, Carme Biosca, Rose Ruiz, Jesus Alsina Kirchner, Virtudes Alvarez, Gloria Busquets, Vicenta Domenech, Carme Figueres, Joana Minchinela, Rosa Pastor, Carmen Perich, Carmen Ricos, Mireia Sansalvador, and Margarita Simon Palmada, Clin Chem Lab Med 2011;49(3):463-470

From 2004 to 2008, a group of laboratories (initially 15, eventually 13) affiliated with the Catalan Health Institute collected data on a set of quality indicators. For each indicator, the average of the laboratories' median error rates was calculated, and those error rates were then converted into defect rates compatible with Six Sigma calculations. The result is a set of Six Sigma metrics for common laboratory processes.
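
The published numbers are consistent with the conventional short-term Six Sigma calculation: take the normal quantile of (1 - error rate) and add the customary 1.5-sigma shift. Here is a minimal sketch of that conversion; note that the 1.5 shift is the standard industrial convention, an assumption on our part rather than something spelled out by the authors:

```python
from scipy.stats import norm

def sigma_metric(error_rate):
    """Convert a defect (error) rate into a short-term Sigma-metric,
    applying the conventional 1.5-sigma shift."""
    return norm.ppf(1 - error_rate) + 1.5

# Three of the worst indicators from the study's table:
for name, rate in [
    ("Reports from referred tests exceed delivery time", 0.109),
    ("Undetected requests with incorrect patient name", 0.091),
    ("External control exceeds acceptance limits", 0.034),
]:
    print(f"{name}: {sigma_metric(rate):.1f} Sigma")
# Prints roughly 2.7, 2.8, and 3.3 - within about 0.1 of the published
# 2.8, 2.9, and 3.4, the difference presumably down to rounding or
# table-lookup conventions in the original calculation.
```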

The whole table is worth examining (see the study itself), but we'll highlight the five worst laboratory processes:

Laboratory process (quality indicator) | Average of median error rate | Sigma-metric
Reports from referred tests exceed delivery time (post-analytical) | 10.9% | 2.8
Undetected requests with incorrect patient name (pre-analytical, within laboratory) | 9.1% | 2.9
External control exceeds acceptance limits (analytical) | 3.4% | 3.4
Total incidences in test requests (pre-analytical) | 3.4% | 3.4
Patient data missing (pre-analytical) | 3.4% | 3.4

Given these error rates in the usual % format alone, it can be hard to know whether (for example) a 0.2% rate for insufficient sample (ESR) is good or bad. On the Sigma scale, that transforms into 4.4 Sigma, which is considered good. Surprisingly, the error rate for hemolyzed serum samples, often considered the leading cause of pre-analytical errors, was actually pretty good in these laboratories: just 0.6% of samples (average of median laboratory rates) were hemolyzed, for a Sigma-metric of 4.1.

The study found only two laboratory processes below 3 Sigma, which is considered the threshold for acceptable performance in other industries. The majority of the other processes fell between 3 and 5 Sigma.

What's interesting here is that an analytical process appears among the five worst processes in the laboratory - in a three-way tie for third place.

Now, when we look at the definition of the analytical quality indicator - External control exceeds acceptance limits - we find even more interesting information: "This indicator reflects the number of external control results that are more [than] 2 SD from the group mean of participants using the same method. It allows comparison of the performance of each laboratory with respect to the other laboratories operating under the same conditions." This analytical indicator really measures whether the individual laboratory is failing in comparison with an EQA group. This number does not truly reflect the analytical outliers within the laboratories, nor are the performance limits based on the quality required for the tests. This is more of a consensus assessment - how many labs didn't get the same results as their fellow labs. Thus, this number may in fact be optimistically low. If the proper QC design were applied to these laboratories, you might find that even more of them exceed the appropriate QC limits. [Of course, part of the reason why the study may not have tracked internal analytical performance is the challenge of determining the correct control limits for each laboratory and getting accurate reporting on the number of outliers.]
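
To make the indicator's definition concrete, here is a hypothetical sketch of that peer-comparison check, flagging any EQA result more than 2 SD from the peer-group mean for the same method. The function name and the data are illustrative only, not from the study:

```python
from statistics import mean, stdev

def flag_eqa_result(lab_result, peer_results):
    """Flag an EQA result more than 2 SD from the peer-group mean
    (same method) - a consensus comparison, not a check against a
    clinically defined allowable total error."""
    group_mean = mean(peer_results)
    group_sd = stdev(peer_results)
    z = (lab_result - group_mean) / group_sd
    return abs(z) > 2, z

# Hypothetical peer-group glucose results (mg/dL) for one EQA specimen
peers = [98, 101, 99, 102, 100, 97, 103, 100, 99, 101]
flagged, z = flag_eqa_result(106, peers)
print(f"z = {z:+.2f}, flagged: {flagged}")  # z = +3.29, flagged: True
```

Notice what the check compares against: the peer group's own spread, not a clinically derived limit. A tight peer group will flag small deviations, while a bias shared by every laboratory in the group goes undetected - which is exactly why this indicator may understate the true analytical error rate.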

Nevertheless, this is a significant study in the literature, not only because it tracks at least one measure of analytical quality (even though the indicator is less than ideal, it is an accomplishment that some level of analytical performance was tracked at all), but also because it converts these error rates into Six Sigma metrics. The benefit of that transformation is that it makes plain which processes need improvement and which are acceptable.

The lesson here - and the reason why this article is included with our Basic QC Practices lessons - is that analytical performance cannot be taken for granted. We cannot assume analytical errors don't occur, or that the errors that occur are minor and can be ignored. We have to pay as much attention to analytical errors as we pay to pre-analytical and post-analytical errors.