
A look at pre-analytical error rates, 2014

A new study examines the frequency of errors across the Total Testing Process, reporting pre-analytical error rates from 2011 at the County Emergency Clinical Hospital in Timisoara, Romania. We looked at pre-analytical error rates a few years ago. Have labs improved significantly since then?

Another look at Laboratory Error Rates, 2014

Sten Westgard, MS
APRIL 2014

Quality Indicators in the Preanalytical Phase of Testing in a Stat Laboratory, Daniela Stefania Grecu, Daliborca Cristina Vlad, Victor Dumitrascu, Lab Medicine, Winter 2014;45(1):74-81.

This recent study looked at seven of the quality indicators approved by the IFCC Working Group on "Laboratory Errors and Patient Safety" (WG-LEPS), concentrating on quality indicators in the pre-analytical phase. Not only did the authors quantify the error rates, they also converted them into Sigma-metrics (going from Defects-Per-Million (DPM) to a Sigma-metric is only a quick table look-up; a small sketch of the calculation follows the table below). Remember, when we count defects to determine Sigma-metrics, the number usually reported is the short-term Sigma.

Here's a quick summary of Sigma-metrics found in the Timisoara County Emergency Clinical Hospital in Romania in 2011:

Laboratory Process, 2011                                             Defect rate   Sigma-metric (short-term)

Pre-Analytical Phase
QI-5:  %Requests with Patient ID errors                              0.01%         5.3
QI-7:  %Requests with missing input errors on tests                  0.002%        5.6
QI-8:  %Samples lost                                                 0.05%         4.8
QI-9:  %Samples collected in tube with inappropriate anticoagulant   0.002%        5.6
QI-10: %Samples hemolyzed (biochemistry)                             0.40%         4.2
QI-11: %Samples clotted (hematology)                                 0.77%         4.0
QI-13: %Samples with inadequate anticoagulant ratio                  0.05%         4.8
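(Going from a defect rate to a short-term Sigma-metric is usually a table look-up, but it can also be computed directly from the normal distribution with the conventional 1.5-Sigma shift. Here is a minimal Python sketch of that calculation - not something from the study itself, just an illustration using scipy - which reproduces the published values to within rounding.)

# Minimal sketch of the DPM-to-Sigma conversion, assuming the
# conventional 1.5-Sigma shift used for short-term Sigma-metrics.
from scipy.stats import norm

def short_term_sigma(defect_rate_percent: float) -> float:
    """Convert a defect rate (in %) to a short-term Sigma-metric."""
    fraction_defective = defect_rate_percent / 100.0
    return norm.ppf(1.0 - fraction_defective) + 1.5

# Reproduce a few rows of the table above:
for label, rate in [("QI-5  Patient ID errors", 0.01),
                    ("QI-10 Hemolyzed samples", 0.40),
                    ("QI-11 Clotted samples", 0.77)]:
    print(f"{label}: {short_term_sigma(rate):.1f} Sigma (short-term)")
# Prints approximately 5.2, 4.2, and 3.9 - in line with the published
# values of 5.3, 4.2, and 4.0 (small differences are rounding effects).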

The Sigma-metrics here are quite good, definitely a marked improvement over the first Sigma-metric assessment by Nevalainen et al. in 2000. The focus of journal articles on pre-analytical improvement has reaped a tangible reward. Better training, more automation, more informatics - all have helped to reduce the number of pre-analytical errors considerably.

Now, as it happens, we also had the opportunity to inquire about analytical errors. This laboratory has also been using Sigma-metrics to quantify their assay performance on the long-term Sigma scale.

Test          Sigma-metric (long-term)   Sigma-metric (short-term)
LDH           3                          4.5
Glucose       4                          5.5
Creatinine    4                          5.5
ALT           4                          5.5
CK            4                          5.5
Amylase       4                          5.5
Potassium     4                          5.5
AST           5                          6.5
Urea          6                          7.5
Sodium        2                          3.5

Now there is a lot of good news in that list of performance. Most of these assays, when compared to the pre-analytical processes, end up at about the same level of performance. The exception is the sodium assay, which at 3.5 Sigma (short-term) is going to generate a lot of errors (22,750 defects-per-million).

To put this into perspective, let's consider the actual volume of samples in this study: 168,728 samples during the year of 2011. If we add up all the pre-analytical errors that were found, they total 1,457 errors, or about 0.86% - roughly 3.8 on the short-term Sigma scale. Sodium alone, on the other hand, is generating an error rate of about 2.3%, or roughly 3,880 errors. That one sodium method could be generating more than twice the defects of all the pre-analytical errors combined.
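(Here's a small Python sketch of that arithmetic, using the totals reported in the study; the 2.3% sodium defect rate is the standard table value for 3.5 Sigma short-term, so the exact counts are approximations.)

# Rough arithmetic behind the comparison above.
total_samples = 168_728          # samples processed in 2011
preanalytical_errors = 1_457     # sum of all pre-analytical defects found

preanalytical_rate = preanalytical_errors / total_samples    # ~0.0086 -> 0.86%
sodium_rate = 0.023              # ~22,750 DPM at 3.5 Sigma (short-term)
sodium_errors = sodium_rate * total_samples                  # ~3,880 results

print(f"Pre-analytical: {preanalytical_rate:.2%} ({preanalytical_errors} errors)")
print(f"Sodium alone:   {sodium_rate:.1%} (~{sodium_errors:.0f} errors)")
print(f"Ratio: {sodium_errors / preanalytical_errors:.1f}x")  # roughly 2.7x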

A different way to look at that math is to consider how QC is actually run: perhaps as little as once a day, while each control event covers many patient results. Let's presume this lab runs QC only once a day; at a 2.3% defect rate, that works out to the equivalent of 8 to 9 days every year when the sodium method is out of control. If we take the average number of samples per day (about 462), we're talking about 3,698 to 4,160 patient samples that would be impacted by sodium results that are significantly in error.
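(The back-of-the-envelope version of that estimate, again only a sketch; the once-a-day QC schedule is my assumption, not something reported in the study.)

# Sketch of the once-a-day QC scenario described above.
total_samples = 168_728
days_per_year = 365

sodium_error_rate = 0.023                                 # ~2.3% at 3.5 Sigma
out_of_control_days = sodium_error_rate * days_per_year   # ~8.4 days/year
samples_per_day = total_samples / days_per_year           # ~462 samples

# Somewhere between 8 and 9 days' worth of results could be affected:
low, high = 8 * samples_per_day, 9 * samples_per_day
print(f"~{out_of_control_days:.1f} out-of-control days -> "
      f"{low:.0f} to {high:.0f} affected sodium results")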

I realize there are a lot of assumptions built into that math. But the raw truth is that the analytical processes are running worse than the pre-analytical processes. For years, we have been told the opposite, that analytical methods are all great and we only need to focus on the pre-analytical errors. This is evidence to the contrary.

It's not helpful to engage in a zero-sum game, with each phase of the total testing process jockeying for "prominence" in error rates. The total testing process is a three-legged stool: if any one of the phases breaks down, the impact on patients is always bad. The recent studies show that the focus of the last decade or more of the literature has paid off, improving pre-analytical processes to an excellent level of performance. Compare these rates to earlier studies of laboratory error rates.

The bad news is that the analytical phase continues to be a problem, and in fact may now be the major problem labs face. We need to make sure we're not devoting our efforts exclusively to the pre-analytical phase. The imprecision of our methods is often overlooked, and it can be an invisible defect, causing thousands of results to be distorted from their true clinical value.

Again, this is a great study, full of crucial facts about the laboratory testing process. Our thanks to the authors for sharing their additional analytical performance data:

Quality Indicators in the Preanalytical Phase of Testing in a Stat Laboratory, Daniela Stefania Grecu, Daliborca Cristina Vlad, Victor Dumitrascu, Lab Medicine, Winter 2014;45(1):74-81.