
Evaluation of 4 POC instruments

An analysis of four point-of-care (POC) analyzers and one POC HbA1c device, based on a study published in 2012. The focus of the study was to find practical POC devices that could support faster decision making for South Africa's large HIV population. The question is: do any of these POC devices provide adequate quality for that type of clinical care?

Evaluation of four POC analyzers and one HbA1c analyzer

APRIL 2014
Sten Westgard, MS

[Note: This QC application is an extension of the lesson From Method Validation to Six Sigma: Translating Method Performance Claims into Sigma Metrics. This article assumes that you have read that lesson first, and that you are also familiar with the concepts of QC Design, Method Validation, and Six Sigma. If you aren't, follow the link provided.]

This analysis looks at a paper published in 2012 in Clinical Laboratory, which examined the performance of four point-of-care (POC) chemistry analyzers and one POC HbA1c device:

Multi Point of Care Instrument Evaluation for Use in Anti-retroviral Clinics in South Africa, Verena Gounden, Jaya George, Clin Lab 2012;58:27-40.

The Imprecision and Bias Data

The imprecision data used in the study was collected thus:

"Two levels of quality control samples provided by the different instrument manufacturers werer run in duplicate twice a day for 3-5 days."

As for bias, the study compared the POC devices against the instrument used in the central core laboratory, a Siemens Advia 1800:

"[Comparability between a POC instrument and central analyzer] was evaluated using the CLSI EP9 protocol for method comparison. Specimens for analysis were obtained from routine samples sent to our laboratory at Charlotte Maxeke Academic Hospital, Johannesburg. For a few analytes on certain POC instruments an EP9 evaluation could not be performed as the specimen type required for analysis differed from that used by the current laboratory analyzer....In these cases samples suitable for both the POC instrument and the current analyser were simultaneously obtained from twenty volunteers."

Below is the table of average imprecision and bias for each instrument:

Cobas c111
Analyte         CV%     Bias%
Cholesterol     1.65%   2.86%
Triglycerides   4.2%    3.3%
HDL             5.35%   7.0%
Glucose         1.96%   3.56%
HbA1c           0.89%   3.84%
Lactate         4.61%   4.77%
Creatinine      1.91%   0.26%
ALT             1.59%   4.44%
AST             0.89%   7.0%

Vitros DT60
Analyte         CV%     Bias%
Cholesterol     2.25%   12.31%
Triglycerides   2.1%    1.86%
HDL             8.81%   3.8%
Glucose         2.51%   4.7%
Lactate         3.36%   2.6%
Creatinine      2.8%    14.0%
ALT             7.6%    2.79%

Roche Reflotron
Analyte         CV%     Bias%
Cholesterol     1.45%   6.0%
Triglycerides   5.7%    1.07%
HDL             11.84%  5.0%
Glucose         3.42%   9.8%
Creatinine      3.37%   1.3%
ALT             3.17%   27.0%

Cholestech LDX
Analyte         CV%     Bias%
Cholesterol     2.34%   12.0%
Triglycerides   3.41%   5.9%
HDL             9.55%   18.0%
Glucose         2.1%    4.5%
ALT             10.83%  25.0%

DCA Vantage
Analyte   CV%     Bias%
HbA1c     2.36%   1.78%

As usual, we have a LOT of numbers. We have only average imprecision and average bias, so keep in mind that the performance might be better or worse at the upper and/or lower parts of the range for these tests.

Looking at the raw numbers, you may find it difficult to judge the method performance. From experience, you might be able to tell when a particular method CV is high or low. But the numbers themselves don't easily tell the story.

If we want an objective assessment, we need to set analytical goals - specify quality requirements - and use those to calculate the Sigma-metric. 

Determine Quality Requirements

Now that we have our imprecision and bias data, we're almost ready to calculate our Sigma-metrics. But we're missing one key thing: the analytical quality requirements.

In the US, laboratories traditionally look first to CLIA for guidance. While this study was not conducted in the US, the CLIA goals are still a reasonable starting point, since we're working at the point of care, where more error is expected, and indeed tolerated. But some of these analytes are not regulated by CLIA, so we'll also need other resources, such as the desirable specifications for total allowable error based on biologic variation (sometimes known as the "Ricos Goals").
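As a point of reference, the "desirable" specification from biologic variation is commonly derived from the within-subject (CVI) and between-subject (CVG) variation of the analyte. Here is a minimal sketch of that calculation; the CVI and CVG values in the example are hypothetical placeholders, not the figures behind the goals used in this article.

    # Sketch: "desirable" allowable total error (TEa) from biologic variation.
    # CVI = within-subject variation, CVG = between-subject variation (both in %).
    # The example values below are hypothetical placeholders.

    def desirable_tea(cvi: float, cvg: float) -> float:
        """Desirable TEa (%) = 1.65 * (0.5 * CVI) + 0.25 * sqrt(CVI^2 + CVG^2)."""
        desirable_imprecision = 0.5 * cvi
        desirable_bias = 0.25 * (cvi ** 2 + cvg ** 2) ** 0.5
        return 1.65 * desirable_imprecision + desirable_bias

    print(f"{desirable_tea(7.0, 20.0):.2f}%")   # hypothetical CVI=7%, CVG=20% -> about 11%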

 

Analyte         Source     Allowable Total Error (TEa)
Cholesterol     CLIA       10%
Triglycerides   CLIA       25%
HDL             Ricos      11.63%
Glucose         CLIA       10%
HbA1c           CAP/NGSP   6.0%
Lactate         Ricos      30.4%
Creatinine      CLIA       15%
ALT             CLIA       20%
AST             CLIA       20%

CLIA states some of these goals in a dual format: a units-based limit that governs the low end of the range and a %-based limit that governs the upper end of the range (whichever is greater). Given that we are working with averages of imprecision and bias, we chose to use the %-based CLIA quality requirements for those analytes (glucose, creatinine).
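For example, the CLIA criterion for glucose is the target value plus or minus 6 mg/dL or 10%, whichever is greater. Here is a minimal sketch of how the effective percent goal depends on concentration; the concentrations used below are purely illustrative.

    # Sketch: convert a dual-format CLIA criterion into a percent TEa at a given
    # concentration, taking whichever limit is greater.

    def effective_tea_percent(conc: float, abs_limit: float, pct_limit: float) -> float:
        """Effective TEa (%) at a given concentration (same units as abs_limit)."""
        return max(abs_limit / conc * 100.0, pct_limit)

    print(effective_tea_percent(50.0, 6.0, 10.0))    # low glucose: the 6 mg/dL limit -> 12.0%
    print(effective_tea_percent(120.0, 6.0, 10.0))   # higher glucose: the 10% limit applies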

Calculate Sigma metrics

Now the pieces are in place. Remember the equation for Sigma metric is (TEa - bias) / CV.

Example calculation: for the Cobas c111, the goal for cholesterol is 10%. We also know from the comparison study that there is a 2.86% bias and an imprecision of 1.65%:

(10 - 2.86) / 1.65 = 7.14 / 1.65 = 4.33

So the c111 is delivering good performance for cholesterol.
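Here is a minimal sketch of the same calculation in code, reproducing a few of the Cobas c111 values that appear in the tables below:

    # Sketch: Sigma-metric = (TEa - |bias|) / CV, using the average figures above.
    # A result at or below zero ("negative") means bias alone exceeds the error budget.

    def sigma_metric(tea: float, bias: float, cv: float) -> float:
        return (tea - abs(bias)) / cv

    print(round(sigma_metric(10.0, 2.86, 1.65), 2))    # Cobas c111 cholesterol: 4.33
    print(round(sigma_metric(11.63, 7.0, 5.35), 2))    # Cobas c111 HDL: 0.87
    print(round(sigma_metric(6.0, 3.84, 0.89), 2))     # Cobas c111 HbA1c: 2.43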

Recall that in industries outside healthcare, 3.0 Sigma is the minimum performance for routine use. 6.0 Sigma and higher is considered world class performance.

We'll repeat that calculation for every assay, adding a Sigma-metric column to each of the tables below.

Cobas c111
Analyte         CV%     Bias%   Sigma-metric
Cholesterol     1.65%   2.86%   4.33
Triglycerides   4.2%    3.3%    5.17
HDL             5.35%   7.0%    0.87
Glucose         1.96%   3.56%   3.29
HbA1c           0.89%   3.84%   2.43
Lactate         4.61%   4.77%   5.56
Creatinine      1.91%   0.26%   7.72
ALT             1.59%   4.44%   9.79
AST             0.89%   7.0%    14.61

Vitros DT60
Analyte         CV%     Bias%   Sigma-metric
Cholesterol     2.25%   12.31%  negative
Triglycerides   2.1%    1.86%   11.02
HDL             8.81%   3.8%    0.89
Glucose         2.51%   4.7%    2.11
Lactate         3.36%   2.6%    8.27
Creatinine      2.8%    14.0%   0.36
ALT             7.6%    2.79%   2.26

Roche Reflotron
Analyte         CV%     Bias%   Sigma-metric
Cholesterol     1.45%   6.0%    2.76
Triglycerides   5.7%    1.07%   4.2
HDL             11.84%  5.0%    0.56
Glucose         3.42%   9.8%    0.06
Creatinine      3.37%   1.3%    4.07
ALT             3.17%   27.0%   negative

Cholestech LDX
Analyte         CV%     Bias%   Sigma-metric
Cholesterol     2.34%   12.0%   negative
Triglycerides   3.41%   5.9%    5.60
HDL             9.55%   18.0%   negative
Glucose         2.1%    4.5%    2.62
ALT             10.83%  25.0%   negative

DCA Vantage
Analyte   CV%     Bias%   Sigma-metric
HbA1c     2.36%   1.78%   1.79

Now we have even more numbers; some of them are good, while others are not good at all.

No instrument is all good or all bad: each has some assays that perform well and some that perform poorly.

[What does a "negative" Sigma mean? It means that the bias alone exceeds the allowable total error. The POC device method is significantly different from the central laboratory method; it's as if they're aiming at completely different targets.]

Of the four chemistry analyzers, the Cobas c111 has the best overall performance. We have the most performance information for that instrument, and while it has some assays below 3 Sigma, it does not have any that are "negative."

Summary of Performance by Sigma-metrics Method Decision Chart

If the numbers are too much to digest, we can put this into graphic form with a Six Sigma Method Decision Chart. Here's the chart using the Ricos goals for desirable allowable total error.

[Figure: Sigma-metric Method Decision Chart - POC performance comparison, South Africa]

Here's where the graphic display helps reveal issues with performance. You can see that the c111 has three analytes in the bull's-eye, while the DT60 has only two, plus other assays that have worse performance.
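For readers who want to reproduce this kind of chart, here is a minimal plotting sketch, assuming the usual normalized construction: each assay's operating point is plotted at (CV/TEa x 100, bias/TEa x 100), and the diagonal lines bias = 100 - Sigma x CV mark the 2- through 6-Sigma zones. The three Cobas c111 points shown are taken from the tables above; this is an illustration, not the figure from the study.

    # Sketch: a normalized Sigma-metric Method Decision Chart with a few example points.
    import matplotlib.pyplot as plt

    def normalized_point(tea, bias, cv):
        """Operating point as (CV, bias) expressed as a percentage of TEa."""
        return cv / tea * 100.0, bias / tea * 100.0

    fig, ax = plt.subplots()

    # Diagonal Sigma lines: bias = 100 - sigma * CV (both axes in % of TEa)
    cv_axis = [0.0, 50.0]
    for sigma in (2, 3, 4, 5, 6):
        ax.plot(cv_axis, [100.0 - sigma * x for x in cv_axis], label=f"{sigma} Sigma")

    # A few Cobas c111 assays from the tables above: (TEa, bias, CV) in %
    assays = {
        "Cholesterol": (10.0, 2.86, 1.65),
        "HDL": (11.63, 7.0, 5.35),
        "HbA1c": (6.0, 3.84, 0.89),
    }
    for name, (tea, bias, cv) in assays.items():
        x, y = normalized_point(tea, bias, cv)
        ax.plot(x, y, "o")
        ax.annotate(name, (x, y))

    ax.set_xlim(0, 50)
    ax.set_ylim(0, 100)
    ax.set_xlabel("Allowable imprecision (CV, % of TEa)")
    ax.set_ylabel("Allowable inaccuracy (bias, % of TEa)")
    ax.legend()
    plt.show()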

Conclusion

The authors developed their own scoring system for these point-of-care devices, but reached the same conclusion that we arrived at through Sigma-metrics. But they also commented on the practical challenges of using the system in South Africa:

"The Cobas c111 had the best overall analytical performance, however, strictly speaking, this is not a true point of care instrument. The complicated operating procedures required, which would have been beyond the scope of clinical staff, led to the c111 not being chosen as one of the instruments to proceed to the next phase of the project."

As the Sigma-metric review indicates, there are a lot of imperfections with these POC devices:

"A significant observation of this study is that none of the POC analysers evaluated could provide us with a test menu that incorporated all the analytes that were requested for our ARV [anti-retro-viral] clinic setting."

For all the POC optimists out there, this is continuing evidence that POC devices cannot be relied upon to achieve "lab quality" results. POC devices can perform well for select analytes, but the broad-menu analyzers still have a long way to go if they want to reach the same level of quality as a central laboratory instrument. For the ARV clinics in South Africa, using these POC devices might lead to faster decisions, but making the wrong decision faster is not something to which we should aspire.

And for those hoping that we can reduce the frequency or effort of QC with POC devices, this study should serve as a credible refutation. The evidence simply doesn't support the idea that any of the instruments studied here could cut QC back to once a week or once a month, or rely solely on electronic QC. The current performance is already too risky - diminishing the monitoring of these methods will only increase patient risk.
