Tools, Technologies and Training for Healthcare Laboratories

A Quite Important Quality Indicator

The use of Quality Indicators (QIs) is itself an important sign that a laboratory has embraced and implemented a Quality Management System (QMS). But which Quality Indicators should a laboratory choose? And what does a particular QI tell us? As it turns out, some QIs are more important than others.

Quality Indicators: which ones are Quite Important?

MAY 2013
By Sten Westgard, MS

ISO 15189 requires that laboratories develop Quality Indicators and apply them as a way of monitoring and evaluating laboratory processes. But, like so much of the standard, ISO 15189 does not say which processes to monitor or what Quality Indicators to implement. Instead, that has been left up to the laboratory. For years, the literature has been full of papers attempting to identify and harmonize the QIs that are used by laboratories.

In a recent paper, the diagnostic laboratories of the Institute of Human Behavior and Allied Sciences in Delhi, India, attempted their own monitoring of QIs:

Role of Intervention on Laboratory Performance: Evaluation of Quality Indicators in a Tertiary Care Hospital, Agarwal R, Chaturvedi S, Chhillar N, Goyal R, Pant I, Tripathi CB, Ind J Clin Biochem (Jan-Mar 2012) 27(1):61-68

The results are interesting, particularly because this study picked a few unique QIs that other laboratories and studies have missed. From January to December of 2010 they looked at 42,562 samples and tracked the errors that occurred. There are 14 QIs of interest to us here, though they were not spread equally throughout the Total Testing Process: 7 indicators in the pre-analytical phase, 4 indicators in the analytical phase, and 3 indicators in the post-analytical phase.

In addition to reviewing these indicators, we're going to put them on the short-term Sigma scale here, just to help give an objective assessment of the performance.

Quality Indicator                          Count        %    Sigma
Pre-analytical phase
  Wrong identification                        23    0.05%      4.8
  Incomplete forms                         3,360    7.89%      3.0
  Sample rejection                         2,056    4.91%      3.2
  Hemolyzed vial                             316    0.74%      4.0
  Inappropriate vial                          10    0.023%     5.0
  Insufficient quantity                    1,162    2.75%      3.5
  No. accidents reported                       0
Analytical phase
  Random errors                               18    0.04%      4.9
  Systematic errors                            8    0.02%      5.1
  PT failure (non-conformity with QC)         14
  Number of repeat tests                   9,753   22.91%      2.3
Post-analytical phase
  Urgent sample reporting                    125    0.29%      4.3
  Critical value reporting                    46    0.11%      4.6
  Turnaround time (TAT)                       11    0.025%     5.0
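As a sketch of how that Sigma column can be derived: assuming the common short-term Sigma convention (convert the defect rate to a normal z-value for the yield, then add the conventional 1.5 SD shift), the table's figures are approximately reproduced. The function name below is our own, not from the study.

```python
from statistics import NormalDist

def short_term_sigma(defects: int, total: int) -> float:
    """Convert a defect count into a short-term Sigma-metric.

    Uses the conventional 1.5 SD shift: Sigma = z(yield) + 1.5.
    """
    defect_rate = defects / total
    return NormalDist().inv_cdf(1.0 - defect_rate) + 1.5

# Wrong identification: 23 errors out of 42,562 samples (~0.05%)
print(round(short_term_sigma(23, 42562), 1))   # 4.8
```

Small rounding differences from the published figures are possible, since the study may have used table lookups rather than an exact inverse-normal calculation.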

Now that you see the numbers and metrics, many questions are probably coming to mind. For instance, the common wisdom is that the pre-analytical phase has the largest number of errors. However, when you take into account the volume of repeat testing, the biggest problem for this hospital is the analytical phase of testing. Indeed, the single largest source of errors is the number of repeated tests, which has a Sigma-metric below 3.

The study authors address this fact with a different perspective: "Repeat testing of sample to reconfirm the results was done on regular basis, both in case of doubt at laboratory level but also on request of physicians and was taken as a positive indicator for monitoring the reliability of results."

To be frank, we would draw the opposite conclusion. You repeat testing when the reliability of your results is suspect. You repeat testing when you're not confident in the analytical result. You repeat testing when the test result gives you an answer that doesn't fit the rest of the clinical picture you're seeing. And you repeat testing when you don't believe that you're getting the right result the first time. This is not a positive indicator. It's a negative indicator. For this hospital, while pre-analytical problems are apparent, the larger issue may be that neither the laboratory nor the clinicians trust the results being reported. If you combine all the pre-analytical errors listed above, they still add up to a smaller problem than the issue of repeated testing.
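The arithmetic behind that last claim is easy to verify with a quick tally of the table's pre-analytical counts:

```python
# Pre-analytical error counts from the study's table
pre_analytical = {
    "wrong identification": 23,
    "incomplete forms": 3360,
    "sample rejection": 2056,
    "hemolyzed vial": 316,
    "inappropriate vial": 10,
    "insufficient quantity": 1162,
}
repeat_tests = 9753

total_pre = sum(pre_analytical.values())
print(total_pre)                    # 6927
print(repeat_tests > total_pre)     # True
```

Even all six pre-analytical indicators combined (6,927 errors) fall well short of the 9,753 repeated tests.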

What's more intriguing (or perhaps troubling) is that the rate of QC outliers and rejections is quite low. Random and systematic errors have some of the highest Sigma-metrics in performance. How does this reconcile with the repeated testing? You can conjure a number of scenarios to explain this discrepancy:

1. The analytical tests are actually very good, but neither the laboratory nor the clinicians know this, so they continue to mistrust the results and order repeated testing.

2. The QC system is not properly designed and, thus, is not a good indicator of clinically significant errors. Errors are going out the door of the laboratory and reaching clinicians and patients, and these variable results are causing confusion, delay, and repeated testing.

Of the two possibilities, we find it more credible to believe that the QC system is probably not designed correctly and is thus not detecting all the errors it should be.

The problem of 14 PT failures is hard to assess, because the study doesn't tell us the frequency of PT testing, nor does it tell us how many tests are enrolled in PT. We suspect that if a US lab experienced 14 PT failures in a year, that would be a big deal, and some kind of analytical problem would be suspected. The other issue with a PT failure is that this is an error with a much larger impact than one single specimen. Typically, in the US you have 2-3 PT events per year, and a failure in one event may indicate that months of patient test results are in error.

So here in a nutshell we can see the problems with indicators, which are the same problems we see with any statistic: the number depends a lot on the formula and assumptions used to create that number. In this study, the Quality Indicators may be indicating the lack of quality and the lack of a good quality system to detect medically important errors. Yet this particular indicator, Number of Repeat Testing, is not one of the QIs that are emerging as the standard set of recommended QIs.

There is a famous quotation attributed to Peter Drucker, "If you can't measure it, you can't manage it." As laboratories develop and identify Quality Indicators, this study is a warning not to rule out or ignore analytical QIs. Those indicators might be telling a different story than the one you're used to hearing, a story that may be the most important one of all.