
Albumin Quality Goals

Continuing our series of performance and goal comparisons, we evaluate the quality requirements and method performance for Albumin.

Albumin Quality: What's the Right Goal? What's the Actual Performance?

Sten Westgard
October 2013

In September, we took a look at Potassium performance and quality requirements, assessing which goals were achievable and which methods were performing well. This month we're turning to a different analyte: Albumin.

What's the Right Goal for Albumin?

Even for laboratories that are attempting to assess and assure quality, the question of goals, targets and requirements is a challenge. The main question for labs is, "What is the Right Goal for this Test?" While the 1999 Stockholm hierarchy identifies an approach, the ideal quality goals - goals developed based on evidence of how the test results are actually used by clinicians - are rare indeed. Even for well-studied assays that have been used in medicine for decades, there is still wide disparity in the actual goal that should be used for quality.

Acceptance criteria / quality requirements by source:

| Analyte | CLIA | Desirable Biologic Goal | RCPA | Rilibak | SEKK | Spanish Minimum Consensus |
|---------|------|-------------------------|------|---------|------|---------------------------|
| Albumin | ±10% | ±3.9% | ±1.0 g/L at < 1.0 g/L; ±10% at > 1.0 g/L | ±20% | ±12% | ±14% |

This table is adapted from a study by Friedecky, Kratochvila and Budina in a 2011 article in CCLM. You can see that the Desirable Biologic "Ricos goal" is small for Albumin, while Rilibak, SEKK, and the Spanish Minimum Consensus set the target much larger, above 10%. The latter three sources (Rilibak, SEKK and the Spanish Minimum Consensus goals) establish their goals by a process akin to "waiting for the arrow to land before painting the bulls-eye around it." That is, the Spanish consensus minimum goal is set so that 90% of labs can achieve it. Thus, no matter what the performance of the assay is, the vast majority of labs will pass. The "Ricos goal," in contrast, is derived entirely from an evidence-based methodology, without regard to whether or not there are methods on the market that can hit such a small target. Interestingly, the RCPA goal is very similar to the CLIA goal.
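To make the "painting the bulls-eye around the arrow" process concrete, here is a minimal sketch (in Python, with invented survey numbers, not real EQA data) of how a consensus-style goal can be set so that roughly 90% of participating labs pass:

```python
# Hypothetical total-error results (%) from a peer-group survey of ten
# albumin methods. These numbers are invented purely for illustration.
observed_total_error = [3.2, 4.8, 5.5, 6.1, 7.0, 7.7, 8.9, 9.4, 11.8, 13.5]

# A consensus-style minimum goal set at the 90th percentile of observed
# performance: by construction ~90% of labs pass, whatever the assay can do.
ranked = sorted(observed_total_error)
index = int(0.9 * len(ranked)) - 1      # crude percentile position
consensus_goal = ranked[index]
print(f"Consensus TEa goal: +/-{consensus_goal:.1f}%")   # -> +/-11.8%
```

Notice that the goal says nothing about clinical need; it simply follows the field wherever the field happens to be.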

Where can we find data on method performance?

Here is our standard caveat for data comparisons: it's really hard to get a good cross-instrument comparison study. While every lab has data on their own performance, getting data about other instruments is tricky. If we want true apples-to-apples comparisons of performance, moreover, we need to find a study that evaluates different instruments within the same laboratory under similar conditions. Frankly, those studies are few and far between. Not many labs have the time or the space to conduct head-to-head studies of different methods and analyzers. I've seen a lot of studies, but few gold-standard comparisons.

Proficiency Testing surveys and External Quality Assessment schemes can provide us with a broad view, but the data in those reports is pooled together. Thus we don't get individual laboratory standard deviations; instead we get the all-group SD, or perhaps the method-specific group SD. Since individual laboratory SDs are usually smaller than group SDs, the utility of PT and EQA reports is limited. If we want a better idea of what an individual laboratory will achieve with a particular method, it is better to work with data from studies of individual laboratories.
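The reason group SDs run larger follows from how variances add: the all-group SD folds the between-lab spread of means into the within-lab imprecision. A small sketch, with invented numbers:

```python
import math

# Invented figures, for illustration only: many labs run the same
# albumin control material.
within_lab_sd = 0.8    # typical SD inside a single laboratory (g/L)
between_lab_sd = 1.1   # spread of lab means around the group mean (g/L)

# Variances add, so the all-group SD combines both components and is
# necessarily larger than what any one well-run laboratory observes.
group_sd = math.sqrt(within_lab_sd**2 + between_lab_sd**2)
print(f"individual-lab SD: {within_lab_sd:.2f} g/L")
print(f"all-group SD:      {group_sd:.2f} g/L")   # -> 1.36 g/L
```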

In the imperfect world we live in, we are more likely to find individual studies of instrument and method performance: a lab evaluates one instrument and its methods, comparing it to a "local reference method" (usually the instrument it's going to replace). While we can get Sigma-metrics out of such studies, their comparability is not quite as solid. It's more of a "green-apple-to-red-apple" comparison.

Given those limitations, here are a few studies of major analyzer performance that we've been able to find in recent years:

  • Evaluation of the analytical performance of the UniCel DxC 600 system (Beckman Coulter) and study of the transferability of results with the Integra 800 (Roche Diagnostics) [in French]. A. Servonnet, H. Thefenne, A. Boukhira, P. Vest, C. Renard. Ann Biol Clin 2007;65(5):555-62.
  • Validation of methods performance for routine biochemistry analytes at Cobas 6000 series module c501. Vesna Supak Smolcic, Lidija Bilic-Zulle, Elizabeta Fisic. Biochemia Medica 2011;21(2):182-190.
  • Analytical performance evaluation of the Cobas 6000 analyzer – special emphasis on trueness verification. Adriaan J. van Gammeren, Nelley van Gool, Monique JM de Groot, Christa M. Cobbaert. Clin Chem Lab Med 2008;46(6):863-871.
  • Analytical Performance Specifications: Relating Laboratory Performance to Quality Required for Intended Clinical Use [cobas 8000 example evaluated]. Daniel A. Dalenberg, Patricia G. Schryver, George G. Klee. Clin Lab Med 2013;33:55-73.
  • The importance of having a flexible scope of ISO 15189 accreditation and quality specifications based on biological variation – the case of validation of the biochemistry analyzer Dimension Vista. Pilar Fernandez-Calle, Sandra Pelaz, Paloma Oliver, Maria Jose Alcaide, Ruben Gomez-Rioja, Antonio Buno, Jose Manuel Iturzaeta. Biochemia Medica 2013;23(1):83-9.
  • External Evaluation of the Dimension Vista 1500 Intelligent Lab System. Arnaud Bruneel, Monique Dehoux, Anne Barnier, Anne Bouten. Journal of Clinical Laboratory Analysis 2012;23:384-397.

These studies in turn can give us multiple data points (for instance, CVs for high and low controls) for some instruments, and we can attempt to evaluate each instrument's ability to achieve different quality requirements. We don't have as many studies here as in our earlier analyses, simply because albumin wasn't included in some of those papers.
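For readers who want to repeat the evaluation with their own data, the calculation behind everything that follows is the Sigma-metric, Sigma = (TEa - |bias|) / CV, with all terms in percent. A minimal sketch, using an invented operating point rather than figures from the studies above:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma-metric: how many SDs of imprecision fit inside the
    allowable total error once bias has been 'spent'."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Invented operating point for illustration: 2.0% bias, 2.5% CV.
bias, cv = 2.0, 2.5
for name, tea in [("Desirable biologic", 3.9), ("CLIA", 10.0), ("Rilibak", 20.0)]:
    print(f"{name:18s} TEa {tea:4.1f}%  ->  {sigma_metric(tea, bias, cv):4.1f} Sigma")
```

Note how the same operating point rates below 1 Sigma against the desirable biologic goal and above 6 Sigma against Rilibak - that spread is exactly what the charts below make visible.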

Can any instrument hit the Desirable Biologic TEa Goal for Albumin?

Here is a Method Decision Chart using the "Ricos Goal" of 3.9%. [Please note, we're using the Normalized Method Decision chart to make our evaluation, so that the differences in quality requirements become more apparent.]

[If you're not familiar with Method Decision Charts and Six Sigma, follow the links here to brush up on the subject.]
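For those following along at home: a normalized chart plots each method at (100·CV/TEa, 100·|bias|/TEa), so that any quality requirement maps onto the same axes, and the Sigma zones are the regions below the lines bias% + k·CV% = 100 for k = 2 through 6. A minimal sketch of that normalization, with a hypothetical method:

```python
def normalized_point(bias_pct, cv_pct, tea_pct):
    """Express a method's operating point as a percentage of the TEa goal,
    so methods judged against different goals share one chart."""
    x = 100.0 * cv_pct / tea_pct            # normalized imprecision
    y = 100.0 * abs(bias_pct) / tea_pct     # normalized inaccuracy
    return x, y

def sigma_zone(x, y):
    """Find the highest Sigma line (bias% + k*CV% = 100) the point sits under."""
    for k in (6, 5, 4, 3, 2):
        if y + k * x <= 100:
            return f">= {k} Sigma"
    return "< 2 Sigma"

# Hypothetical method: 1.5% bias, 2.0% CV, judged against the CLIA 10% goal.
x, y = normalized_point(1.5, 2.0, 10.0)
print(f"plots at ({x:.0f}, {y:.0f}) -> {sigma_zone(x, y)}")   # -> (20, 15), >= 4 Sigma
```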

[Figure: Normalized Method Decision Chart for Albumin, using the Desirable Biologic TEa goal of 3.9%]

As you can see, this is a quality requirement that proves very, very hard to hit. Note that two of the points for the Vista are from one study and two are from another. It may be that the bias comes from only one of the studies, and further data could reveal which performance is more typical.

In any case, no method here is much better than another. If 3.9% were our goal, we might conclude that we need to run our testing in duplicate, or take other drastic measures, to achieve the needed performance.
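Duplicates help because reporting the mean of two measurements cuts imprecision by a factor of √2 - though, as this invented example shows, even that may not be enough against a 3.9% goal:

```python
import math

# Invented single-measurement performance (%): TEa 3.9, bias 1.0, CV 2.4.
tea, bias, cv = 3.9, 1.0, 2.4

cv_duplicate = cv / math.sqrt(2)        # CV of the mean of two replicates
sigma_single = (tea - bias) / cv
sigma_duplicate = (tea - bias) / cv_duplicate
print(f"single measurement: {sigma_single:.1f} Sigma")    # -> 1.2 Sigma
print(f"mean of duplicates: {sigma_duplicate:.1f} Sigma") # -> 1.7 Sigma
```

Even with duplicates, this hypothetical method stays well below 3 Sigma against the 3.9% goal, which is why "drastic" is the right word.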

Can any instrument hit the Rilibak interlaboratory comparison TEa goal for Albumin?

[Figure: Normalized Method Decision Chart for Albumin, using the Rilibak TEa goal of 20%]

Here we see that basically all methods are in the bull's-eye. If we accept that 20% is an appropriate target for Albumin, then any method will do.

Can any instrument hit the CLIA TEa Goal for Albumin?

[Figure: Normalized Method Decision Chart for Albumin, using the CLIA TEa goal of 10%]

In this final graph, we see a few methods edging into the bull's-eye, but the bulk of method performance falls between 4 and 5 Sigma. This means that most methods would need to use a set of "Westgard Rules" to properly monitor performance.
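As a rough illustration of how a Sigma-metric translates into a QC strategy, here is a sketch of one common rule-selection heuristic (the exact cutoffs and rule combinations vary by source, so treat these as illustrative assumptions rather than an official standard):

```python
def suggest_qc_strategy(sigma: float) -> str:
    """Map a Sigma-metric to a rough QC strategy. These cutoffs and rule
    sets are one common heuristic, not an official standard."""
    if sigma >= 6:
        return "single rule, 1:3s with N=2"
    if sigma >= 5:
        return "1:3s / 2:2s / R:4s with N=2"
    if sigma >= 4:
        return "1:3s / 2:2s / R:4s / 4:1s with N=4"
    if sigma >= 3:
        return "full multirule ('Westgard Rules') with N=6 or more"
    return "QC alone cannot assure quality; improve the method"

for s in (6.5, 4.5, 2.5):
    print(f"{s} Sigma -> {suggest_qc_strategy(s)}")
```

The lower the Sigma-metric, the more rules and the more control measurements it takes to catch medically important errors.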

Conclusion

With quality requirements, no single source is both perfect and practical. Laboratories need to be careful when selecting quality goals, making sure that they choose the goals they are mandated to meet (in the US, labs must be able to achieve CLIA targets), but also making sure that their goals are practical and in harmony with the actual clinical use of the test in their health system.

It appears here that CLIA is the more realistic target, possible to hit but not easy, providing some discriminatory power among different methods. The Rilibak goal, on the other hand, is probably too easy to achieve. The Ricos goal, finally, appears to be currently out of reach of the performance of today's methods.

Remember, from our perspective, a goal that no one (no method) can hit is not useful, except perhaps to instrument research and design engineers (for them, a stringent target becomes an aspirational goal for designing the next generation of methods). Nor is it helpful to use a goal that every method can hit - that may only indicate we have lowered the bar too much, when clinical use of the test could be tightened and improved by working on a finer scale.

Like so much in laboratory medicine, there is still a lack of standardization and harmonization (we didn't even discuss the difference between the bromcresol green [BCG] and bromcresol purple [BCP] methods), and a healthy debate about what needs to be achieved. In future articles, we hope to make it easier to see the differences in quality requirements, method performance, and the practical choices that labs can make.
