Tools, Technologies and Training for Healthcare Laboratories

Six out of eight assays on ARCHITECT c16000s cannot meet pU goals

A continuing investigation into assay capability to meet new performance specifications for permissible measurement uncertainty (pU). Can two ARCHITECTs hit these targets?

 6 of 8 assays from two Abbott ARCHITECT analyzers cannot meet 2021 pU% goals

Sten Westgard, MS
February 2022

Using the performance specifications for permissible measurement uncertainty (pU) from the 2021 CCLM article by Braga and Panteghini (https://pubmed.ncbi.nlm.nih.gov/33725754/), we decided to look at the ARCHITECT chemistry analyzer.

A 2021 article about the analytical performance of two Abbott ARCHITECT c16000 analyzers provides the opportunity to test whether today's instruments can meet the new performance specifications:

Assessment of analytical process performance using the Six Sigma method: A comparison of two biochemistry analyzers. Saniye Basak Oktay, Sema Nur Ayyildiz. Int J Med Biochem. 2021;4(2):97-103. DOI: 10.14744/ijmb.2021.14633. https://internationalbiochemistry.com/jvi.aspx?un=IJMB-14633&volume=4&issue=2

We're going to focus on the 8 assays that we analyzed in previous articles, to get a sense of diagnostic capability across the industry.

The Braga and Panteghini pU goals and the observed performance

The imprecision was estimated from IQC data from January to March of 2020 using Technopath controls. Bias was estimated by comparison of actual means versus the control target means for the same period.
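For readers who want to check these statistics against their own QC data, here is a minimal sketch of the calculation. The QC values and target mean below are hypothetical placeholders, not numbers from the Oktay and Ayyildiz study.

```python
# Minimal sketch: estimating imprecision (CV%) and bias% from IQC data.
# The QC results and target mean are hypothetical placeholders,
# not values from the study discussed above.
from statistics import mean, stdev

qc_results = [93.1, 94.8, 92.5, 95.2, 93.9, 94.4, 92.8, 95.0]  # one control level, Jan-Mar
target_mean = 94.0  # assigned control target mean for the same period

observed_mean = mean(qc_results)
cv_pct = 100 * stdev(qc_results) / observed_mean                  # imprecision as CV%
bias_pct = 100 * abs(observed_mean - target_mean) / target_mean   # |bias| vs. target mean

print(f"CV = {cv_pct:.2f}%, |Bias| = {bias_pct:.2f}%")
```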

Here are the results for the first ARCHITECT c16000:

ARCHITECT c16000 #1

| Measurand | Milan Model | APS for standard MU, % | APS for desirable lab CV, % | Level 1 CV | Level 2 CV | Level 1 bias (abs.) | Level 2 bias (abs.) |
|---|---|---|---|---|---|---|---|
| creatinine | Biological Variation (2nd best) | 2.2% | 1.1% | 2.77% | 3% | 0.31% | 0.36% |
| glucose | Outcome-based (best) | 2.00% | 1.00% | 3.1% | 2.2% | 0.92% | 1.31% |
| sodium | Biological Variation (2nd best) | 0.27% | 0.14% | 1.65% | 1.77% | 0.67% | 1.27% |
| potassium | Biological Variation (2nd best) | 1.96% | 0.98% | 2.27% | 1.88% | 0.9% | 0.09% |
| chloride | Biological Variation (2nd best) | 0.49% | 0.25% | 1.76% | 1.85% | 2.11% | 2.57% |
| total calcium | Biological Variation (2nd best) | 0.91% | 0.46% | 2.55% | 2.25% | 0.88% | 0.5% |
| urea | Biological Variation (2nd best) | 7.05% | 3.03% | 4.62% | 2.07% | 1.91% | 0.13% |
| alanine aminotransferase | Biological Variation (2nd best) | 4.65% | 2.38% | 9.43% | 2.17% | 9.6% | 2.04% |

Here are the results for the second ARCHITECT c16000:

ARCHITECT c16000 #2

| Measurand | Milan Model | APS for standard MU, % | APS for desirable lab CV, % | Level 1 CV | Level 2 CV | Level 1 bias (abs.) | Level 2 bias (abs.) |
|---|---|---|---|---|---|---|---|
| creatinine | Biological Variation (2nd best) | 2.2% | 1.1% | 3.34% | 3.04% | 1.99% | 0.68% |
| glucose | Outcome-based (best) | 2.00% | 1.00% | 2.69% | 2.19% | 1.18% | 1.28% |
| sodium | Biological Variation (2nd best) | 0.27% | 0.14% | 1.9% | 1.64% | 1.6% | 1.89% |
| potassium | Biological Variation (2nd best) | 1.96% | 0.98% | 2.73% | 1.98% | 0.13% | 0.35% |
| chloride | Biological Variation (2nd best) | 0.49% | 0.25% | 1.87% | 1.78% | 2.4% | 2.11% |
| total calcium | Biological Variation (2nd best) | 0.91% | 0.46% | 1.88% | 1.97% | 0.46% | 0.27% |
| urea | Biological Variation (2nd best) | 7.05% | 3.03% | 2.78% | 2.62% | 2.55% | 1.65% |
| alanine aminotransferase | Biological Variation (2nd best) | 4.65% | 2.38% | 4.36% | 1.35% | 8.05% | 3.74% |

Does pU Pass or Fail?

When the performance specification is applied to the imprecision measured on these instruments, what is the verdict? Note that MU and pU are specifications that (mostly) ignore bias. Measurement uncertainty can't simply be combined across levels when bias is present, so the usual approaches assume that (1) the bias is so small (how small is left unspecified) that it can be ignored, (2) the bias varies over the long term and can therefore be treated as just another component of imprecision, or (3) the bias must be eliminated before any of the measurement uncertainty approaches can be applied.

Here, we're just going to pretend the bias isn't that important, and see if the imprecision alone can meet the pU goals.
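To make options (1) and (2) concrete, here is a minimal sketch using the creatinine Level 1 numbers from the first instrument above, compared against the APS for standard MU. The root-sum-of-squares combination in option (2) is one common simplification, not necessarily the exact formula Braga and Panteghini intend.

```python
# Sketch of two ways to judge performance against a pU goal, using the
# creatinine Level 1 numbers from ARCHITECT c16000 #1 above.
# The root-sum-of-squares combination is one common simplification for
# folding bias into uncertainty, not necessarily the Braga/Panteghini formula.
import math

cv_pct = 2.77      # Level 1 imprecision
bias_pct = 0.31    # Level 1 |bias|
aps_mu_pct = 2.2   # APS for standard MU (pU goal)

# Option (1): ignore bias, compare imprecision alone to the goal
print("imprecision only:", "PASSES" if cv_pct <= aps_mu_pct else "FAILS")

# Option (2): treat bias as another imprecision-like component
combined = math.sqrt(cv_pct**2 + bias_pct**2)
print(f"combined u = {combined:.2f}%:", "PASSES" if combined <= aps_mu_pct else "FAILS")
```

With a bias this small, folding it in barely changes the estimate; the imprecision alone already exceeds the goal.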

ARCHITECT c16000 #1

| Measurand | Milan Model | APS for standard MU, % | APS for desirable lab CV, % | Level 1 CV | Level 2 CV |
|---|---|---|---|---|---|
| creatinine | Biological Variation (2nd best) | 2.2% | 1.1% | FAILS | FAILS |
| glucose | Outcome-based (best) | 2.00% | 1.00% | FAILS | FAILS |
| sodium | Biological Variation (2nd best) | 0.27% | 0.14% | FAILS | FAILS |
| potassium | Biological Variation (2nd best) | 1.96% | 0.98% | FAILS | FAILS |
| chloride | Biological Variation (2nd best) | 0.49% | 0.25% | FAILS | FAILS |
| total calcium | Biological Variation (2nd best) | 0.91% | 0.46% | FAILS | FAILS |
| urea | Biological Variation (2nd best) | 7.05% | 3.03% | FAILS | PASSES |
| alanine aminotransferase | Biological Variation (2nd best) | 4.65% | 2.38% | FAILS | PASSES |

ARCHITECT c16000 #2

| Measurand | Milan Model | APS for standard MU, % | APS for desirable lab CV, % | Level 1 CV | Level 2 CV |
|---|---|---|---|---|---|
| creatinine | Biological Variation (2nd best) | 2.2% | 1.1% | FAILS | FAILS |
| glucose | Outcome-based (best) | 2.00% | 1.00% | FAILS | FAILS |
| sodium | Biological Variation (2nd best) | 0.27% | 0.14% | FAILS | FAILS |
| potassium | Biological Variation (2nd best) | 1.96% | 0.98% | FAILS | FAILS |
| chloride | Biological Variation (2nd best) | 0.49% | 0.25% | FAILS | FAILS |
| total calcium | Biological Variation (2nd best) | 0.91% | 0.46% | FAILS | FAILS |
| urea | Biological Variation (2nd best) | 7.05% | 3.03% | PASSES | PASSES |
| alanine aminotransferase | Biological Variation (2nd best) | 4.65% | 2.38% | FAILS | PASSES |
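The verdicts above appear to come from a straightforward comparison of each observed CV against the APS for desirable lab CV%. Here is a minimal sketch that reproduces the first instrument's verdicts from the numbers in the tables above.

```python
# Sketch: reproducing the PASSES/FAILS verdicts for ARCHITECT c16000 #1 by
# comparing each level's observed CV to the APS for desirable lab CV%.
assays = {
    # name: (APS for desirable lab CV%, Level 1 CV%, Level 2 CV%)
    "creatinine": (1.10, 2.77, 3.00),
    "glucose":    (1.00, 3.10, 2.20),
    "sodium":     (0.14, 1.65, 1.77),
    "potassium":  (0.98, 2.27, 1.88),
    "chloride":   (0.25, 1.76, 1.85),
    "calcium":    (0.46, 2.55, 2.25),
    "urea":       (3.03, 4.62, 2.07),
    "ALT":        (2.38, 9.43, 2.17),
}

for name, (goal, cv1, cv2) in assays.items():
    verdicts = ["PASSES" if cv <= goal else "FAILS" for cv in (cv1, cv2)]
    print(f"{name:12s} Level 1: {verdicts[0]:7s} Level 2: {verdicts[1]}")
```

Running the same comparison on the second instrument's CVs reproduces its verdicts as well, including the two passes for urea.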

Technically, only one assay passes the pU goals at both levels here: urea, and only on the second instrument, not the first. For most of the assays and levels, there is consistency across both instruments, unfortunately in a negative way. Beyond that, only the upper level of ALT (and, on the first instrument, the upper level of urea) meets the pU goal. This is not a great look for the ARCHITECT.

But this is the fourth look at current performance data showing that the majority of assays on the major platforms cannot hit the pU performance specifications. Increasingly, it looks like the goals are the problem, not the instruments.