Tools, Technologies and Training for Healthcare Laboratories

Six Sigma across the Total Testing Spectrum

A recent paper accomplished a trifecta of sorts: measuring quality indicators not only in the PRE-analytical and POST-analytical phases of testing, but also in the analytical phase. We convert the analytical Sigma-metrics to the short-term Sigma scale so that the error rates of all the different processes in the testing spectrum can be compared.

Sigma-metrics across the Total Testing Spectrum

April 2015
Sten Westgard, MS

[Note: This QC application is an extension of the lesson From Method Validation to Six Sigma: Translating Method Performance Claims into Sigma Metrics. This article assumes that you have read that lesson first, and that you are also familiar with the concepts of QC Design, Method Validation, and Six Sigma. If you aren't, follow the link provided.]

This analysis looks at performance, on the short-term Sigma-metric scale, across the three phases of the total testing process: pre-analytical, analytical, and post-analytical:

Evaluating laboratory key performance using quality indicators in Alexandria University Hospital Clinical Chemistry laboratories. Rizk MM, Zaki A, Hossam N, Aboul-Ela Y. Journal of the Egyptian Public Health Association 2014;89:105-113.

One interesting part of this study is that there were two assessment phases separated by an improvement effort. That is, the laboratory started by assessing its current performance through Quality Indicators, made changes based on that first assessment, and then made a second assessment to determine the success of its improvement efforts.

"The laboratory is an ISO 15189-2007-certified laboratory affiliated to a public university hospital serving a population of 234,403... In 2012, 262,100 requests and 2,668,984 tests were performed. A study was carried out on all inpatient tests presented to the Clinical Chemistry laboratory for a period of 7 months."

The Phase 1 assessment

First the laboratory measured how often the test request form was incomplete:

Quality Indicator (N=31,944)       Phase 1 % error   Phase 1 Sigma
Incomplete patient information     1.02%             3.9
Missing admission number           1.85%             3.6
Missing patient preparation        2.97%             3.4
Missing physician ID               6.35%             3.1
Missing diagnosis                  1.75%             3.7
Missing date and time              9.79%             2.8
Missing type of sample             9.94%             2.8

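Incidentally, the Sigma figures in these tables follow directly from the defect rates. Here is a minimal sketch of that conversion, assuming (as the reported values suggest) that the Sigma was obtained from the standard normal quantile of the defect rate plus the 1.5 shift; the short_term_sigma helper and the selection of indicators are ours, while the rates come from the table above:

from scipy.stats import norm

def short_term_sigma(error_rate):
    # Short-term Sigma for a given defect rate (e.g. 0.0102 for 1.02% errors):
    # the standard normal quantile of the success rate, plus the 1.5 shift.
    return norm.ppf(1.0 - error_rate) + 1.5

# Phase 1 request-form defect rates taken from the table above.
phase1_rates = {
    "Incomplete patient information": 0.0102,
    "Missing admission number":       0.0185,
    "Missing physician ID":           0.0635,
    "Missing date and time":          0.0979,
}

for indicator, rate in phase1_rates.items():
    print(f"{indicator}: {rate:.2%} error -> {short_term_sigma(rate):.1f} Sigma")

Running this reproduces the reported values to within rounding: 1.85% error works out to 3.6 Sigma, 9.79% to 2.8 Sigma, and so on.
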
The laboratory next measured the different sample rejection errors:

Quality Indicator (Phase 1 N=50,440)   Phase 1 % error   Phase 1 Sigma
Hemolysis                              3.14%             3.4
Clotted                                0.98%             3.9
Wrong sample to anticoagulant ratio    0.35%             4.2
Mis-identification                     0.08%             4.7
Lipemic                                0.03%             5


Next, they measured the Sigma-metric for 18 chemistry analytes. They used the analytical Sigma-metric equation, which calculates Sigma on the long-term scale. To make that metric equivalent to the short-term scale, we must add 1.5. The short-term scale, it must be remembered, builds in a "1.5s shift" to reflect the expected allowable drift of a process; this fudge factor was built into Six Sigma long before it was applied to healthcare or the laboratory, and it is simply part of the Six Sigma methodology.
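
For reference, here is a minimal sketch of that calculation: the long-term Sigma-metric computed from allowable total error (TEa), bias and CV, and the 1.5 addition that puts it on the short-term scale. The TEa, bias and CV values below are hypothetical illustration numbers, not data from the study:

def analytical_sigma(tea_pct, bias_pct, cv_pct):
    # Long-term analytical Sigma-metric: (TEa - |bias|) / CV, all in percent.
    return (tea_pct - abs(bias_pct)) / cv_pct

def to_short_term(sigma_long_term):
    # Add the conventional 1.5 shift to put the metric on the short-term scale.
    return sigma_long_term + 1.5

# Hypothetical illustration values (not from the study): TEa 10%, bias 2%, CV 2%.
lt = analytical_sigma(10.0, 2.0, 2.0)   # 4.0 Sigma on the long-term scale
st = to_short_term(lt)                  # 5.5 Sigma on the short-term scale
print(f"long-term: {lt:.1f} Sigma, short-term: {st:.1f} Sigma")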

Note: in this study, the levels were not detailed, so the raw data needed to make independent Sigma-metric calculations were not available. Therefore, we had to accept the Sigma-metrics as reported. It appears that the allowable total errors applied were derived from the CLIA goals.

Analyte (N=50,440)   Phase 1 % error   Phase 1 Sigma
Albumin              0.00034%          5.38
Total Bilirubin      8.1%              2.92
Calcium              0.19%             4.46
Chloride             15.9%             2.5
Cholesterol          1.8%              3.85
Creatinine           21.2%             2.37
Glucose              15.9%             2.55
Potassium            0.00085%          5.83
Total Protein        0.07%             4.77
Sodium               9.7%              2.88
Triglycerides        0.00034%          >6
Urea                 93.3%             0
Uric Acid            8.1%              2.9
ALT                  24.2%             2.19
ALP                  27.4%             2.07
AST                  9.7%              2.86
CK                   0.00034%          >6
LDH                  1.1%              3.8

If we apply these error rates to the 50,440 received samples, this lab was generating about 13% of its results in error. Some tests are generating practically no errors, while others (like Urea) are generating almost exclusively errors. There's a large variance in performance from test to test.
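
That figure is just the per-analyte error rates from the table applied to the reported workload. A rough check in code, assuming (as the paper's totals imply) that each of the 50,440 samples received all 18 tests:

# Phase 1 analytical error rates (percent), taken from the table above.
phase1_analytical_pct = {
    "Albumin": 0.00034, "Total Bilirubin": 8.1, "Calcium": 0.19,
    "Chloride": 15.9, "Cholesterol": 1.8, "Creatinine": 21.2,
    "Glucose": 15.9, "Potassium": 0.00085, "Total Protein": 0.07,
    "Sodium": 9.7, "Triglycerides": 0.00034, "Urea": 93.3,
    "Uric Acid": 8.1, "ALT": 24.2, "ALP": 27.4, "AST": 9.7,
    "CK": 0.00034, "LDH": 1.1,
}

samples = 50_440
results = samples * len(phase1_analytical_pct)              # 907,920 test results
avg_rate = sum(phase1_analytical_pct.values()) / len(phase1_analytical_pct)
defects = sum(pct / 100 * samples for pct in phase1_analytical_pct.values())

print(f"average analytical error rate: {avg_rate:.1f}%")    # about 13%
print(f"estimated defective results: {defects:,.0f} of {results:,}")

This comes to roughly 119,000 defective results out of 907,920, which is where the analytical figure in the summary below comes from.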

 

Finally, here are the post-analytical errors that were observed (out of 27,612 test reports received):

Quality Indicator (N=27,612)   Phase 1 % error   Phase 1 Sigma
Missed result                  1.07%             3.9
Missed sample (order)          0.72%             4
Missed sample (received)       0.58%             4.1
Incomplete results             0.08%             4.7
Total                          5.11%             3.2

To summarize Phase 1, here are the error counts and percentages for each part of the testing process:

Phase                           Phase 1 errors   Phase 1 N               % of Errors
Request form errors             10,753           31,944                  8.03%
Sample rejection                2,314            50,440                  1.73%
Analytical errors               119,372          50,440 * 18 = 907,920   89.18%
Post-analytical result errors   1,410            27,612                  1.05%
Total                           133,849          n/a
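
The "% of Errors" column is simply each phase's error count divided by the grand total of 133,849. A quick sketch, using the counts from the summary above:

# Error counts per phase from the Phase 1 summary table above.
phase1_errors = {
    "Request form errors": 10_753,
    "Sample rejection": 2_314,
    "Analytical errors": 119_372,
    "Post-analytical result errors": 1_410,
}

total = sum(phase1_errors.values())         # 133,849
for phase, count in phase1_errors.items():
    print(f"{phase}: {count / total:.2%}")  # analytical errors ~89% of the total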

The results here are in stark contrast to the conventional wisdom about error frequency and distribution. Most studies of pre-analytical and post-analytical quality indicators assert that the analytical phase is the source of the fewest errors. Of course, most of those studies also don't calculate the analytical Sigma-metrics - if you don't want to acknowledge a source of errors, it helps to ignore it and refuse to count it. Here, in contrast, when the laboratory does calculate the performance of every phase of the total testing process, it finds that the source of nearly 90% of the problems is the analytical methods themselves. The study ranked 13 of the 18 chemistry assays as problem tests, performing below 3 Sigma (below the category of fit for purpose).

Phase 2 Quality Indicators

After the initial assessment, the laboratory undertook multiple improvement efforts: "Educational lectures and video films were presented to nurses and technicians together with workshops with a special focus on quality awareness, quality participation, ISO 15189 standards, AUHL mission and vision. Sessions were divided for the preanalytical team to include phlebotomy skills and preanalytical standards and errors. Analytical team sessions included quality concepts, Westgard rules, inventory management, and proficiency testing evaluations."

Here are the improved preanalytical metrics:

Quality Indicator (N=28,286)     Phase 2 % error   Phase 2 Sigma
Incomplete patient information   0.24%             4.4
Missing admission number         0.28%             4.3
Missing patient preparation      1.68%             3.7
Missing physician ID             3.41%             3.4
Missing diagnosis                0.65%             4.0
Missing date and time            1.78%             3.7
Missing type of sample           1.60%             3.7

Major improvements were achieved on the preanalytical side.

How about sample rejection?

Quality Indicator (Phase 2 N=45,180)   Phase 1 % error   Phase 1 Sigma   Phase 2 % error   Phase 2 Sigma
Hemolysis                              3.14%             3.4             1.86%             3.6
Clotted                                0.98%             3.9             0.69%             4
Wrong sample to anticoagulant ratio    0.35%             4.2             0.21%             4.4
Mis-identification                     0.08%             4.7             0.07%             4.7
Lipemic                                0.03%             5               0.01%             5.3

Everything improved with the sample handling and reception.

What was the impact of the additional training on analytical performance? Even more dramatic:

Analyte (Phase 2 N=45,180)   Phase 1 % error   Phase 1 Sigma   Phase 2 % error   Phase 2 Sigma
Albumin                      0.00034%          5.38            0.011%            5.26
Total Bilirubin              8.1%              2.92            0.26%             4.33
Calcium                      0.19%             4.46            0.013%            5.74
Chloride                     15.9%             2.5             0.26%             4.33
Cholesterol                  1.8%              3.85            0.012%            5.65
Creatinine                   21.2%             2.37            0.0032%           5.51
Glucose                      15.9%             2.55            0.1%              4.69
Potassium                    0.00085%          5.83            0.00034%          >6
Total Protein                0.07%             4.77            0.00034%          >6
Sodium                       9.7%              2.88            0.13%             4.54
Triglycerides                0.00034%          >6              0.00034%          >6
Urea                         93.3%             0               0.1%              4.64
Uric Acid                    8.1%              2.9             0.00034%          >6
ALT                          24.2%             2.19            0.00034%          >6
ALP                          27.4%             2.07            0.00034%          >6
AST                          9.7%              2.86            0.013%            5.74
CK                           0.00034%          >6              0.00034%          >6
LDH                          1.1%              3.8             0.011%            5.23

Some of the assays made improvements of more than 3 Sigma, leaping from under 3 Sigma to greater than 6. The average error rate dives from over 13% to roughly 0.05%. That's a very impressive improvement! Finally, here are the post-analytical indicators, again comparing Phase 1 and Phase 2:

Quality Indicator (Phase 2 N=24,507)   Phase 1 % error   Phase 1 Sigma   Phase 2 % error   Phase 2 Sigma
Missed result                          1.07%             3.9             0.58%             4.1
Missed sample (order)                  0.72%             4               0.35%             4.2
Missed sample (received)               0.58%             4.1             0.23%             4.4
Incomplete results                     0.08%             4.7             1.33%             3.8
Total                                  5.11%             3.2             2.48%             4.9

Again, impressive improvements were achieved on most of the indicators, although the rate of incomplete results actually rose in Phase 2.

 

Phase                           Phase 1 errors   Phase 1 N               % of Errors   Phase 2 errors   Phase 2 N               % of Errors
Request form errors             10,753           31,944                  8.03%         2,727            28,266                  54.1%
Sample rejection                2,314            50,440                  1.73%         1,285            45,180                  25.5%
Analytical errors               119,372          50,440 * 18 = 907,920   89.18%        418              45,180 * 18 = 813,240   8.3%
Post-analytical result errors   1,410            27,612                  1.05%         608              24,507                  12.1%
Total                           133,849          n/a                     n/a           5,038            n/a                     n/a

The improvements are very dramatic, and the reduction in analytical defects is the largest: from nearly 120,000 defective test results to fewer than 500. That's a great improvement.

Notice that in the improvement phase, the frequency of errors now matches the conventional wisdom: more than 75% of the errors occur in the pre-analytical stage, a little over 12% in the post-analytical phase, and less than 10% in the analytical phase.
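
That redistribution is easy to verify from the two summary columns. A brief sketch comparing each phase's share of errors before and after the improvements, using the error counts from the summary table above:

# Error counts per phase of testing, Phase 1 versus Phase 2,
# from the summary table above.
phases = ["Request form", "Sample rejection", "Analytical", "Post-analytical"]
phase1 = [10_753, 2_314, 119_372, 1_410]
phase2 = [2_727, 1_285, 418, 608]

for name, e1, e2 in zip(phases, phase1, phase2):
    print(f"{name:16} {e1 / sum(phase1):6.1%} of Phase 1 errors, "
          f"{e2 / sum(phase2):6.1%} of Phase 2 errors")
# In Phase 1 the analytical phase dominates (~89%); in Phase 2 the
# pre-analytical indicators together account for roughly 80%.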

Conclusion

The take-home lesson from this study might be that labs that don't monitor their analytical quality closely may be missing the bulk of their errors. Labs that only look at their pre-analytical and post-analytical errors may be seeing only the tip of the iceberg. Labs that look at all phases of the total testing process, however, have a chance to identify the biggest error sources - the methods themselves. If tests are assessed using Sigma-metrics, the specific problems can be identified and major improvements can be made. Once the analytical methods are performing at 4 Sigma and better, the number of errors they generate falls to the point where the pre-analytical and post-analytical phases once again account for most of the errors.

One other observation to be made about this laboratory: in both Phase 1 and Phase 2, it was ISO 15189 certified. It appears that the ISO 15189 certification did not critically assess the number of errors occurring in the total testing process. While ISO 15189 principles undoubtedly helped the laboratory in its improvement efforts, the study raises the question: should labs applying for ISO 15189 certification have to meet a minimum performance level for the total testing process? Should labs be required to reduce their error rates below a particular threshold in order to qualify for ISO 15189 certification? When an accreditation standard is pitched at a very high level, it may not be able to see down to the details. Remember, those who climb to high altitudes suffer the danger of hypoxia. Labs that focus only on the high-level "systems" approach are in similar danger of overlooking and ignoring the devil in the details of their true performance.