
Potassium Quality Goals

Continuing in our series of performance and goal comparisons, we evaluate the quality requirements and method performance of Potassium.


Potassium Quality: What's the right Goal? What's the actual performance?

Sten Westgard
September 2013

In August, we took a look at sodium performance and quality requirements to make an assessment of which goals were achievable, and which methods were performing well. This month we're turning to another electrolyte: potassium.

What's the Right Goal for Potassium?

Even for laboratories that are committed to assessing and assuring quality, the question of goals, targets and requirements is a challenge. The main question for labs is, "What is the right goal for this test?" While the 1999 Stockholm hierarchy identifies an approach, the ideal quality goals - goals developed from evidence of how the test results are actually used by clinicians - are rare indeed. Even for well-studied assays that have been used in medicine for decades, there is still a wide disparity in the actual goal that should be used for quality.

Acceptance criteria / quality requirements for Potassium

Source                          Quality requirement (TEa)
CLIA                            ± 0.5 mmol/L
Desirable Biologic ("Ricos")    ± 5.8%
RCPA                            ± 0.2 mmol/L at < 4.0 mmol/L; ± 5% at > 4.0 mmol/L
Rilibak                         ± 8%
SEKK                            ± 8%
Spanish Minimum Consensus       ± 8%

This table is adapted from a study by Friedecky, Kratochvila and Budina in a 2011 article in CCLM. You can see that the Desirable Biologic "Ricos goal" is small for potassium, while Rilibak, SEKK, and the Spanish Minimum Consensus set the same wider goal of 8%. The latter three sources (Rilibak, SEKK, and the Spanish Minimum Consensus) establish their goals by a process akin to "waiting for the arrow to land before painting the bulls-eye around it." That is, the Spanish minimum consensus goal is set so that 90% of labs can achieve it. Thus, no matter what the performance of the assay is, the vast majority of labs will pass. The "Ricos goal," in contrast, is specified entirely by an evidence-based methodology, without regard to whether or not there are methods on the market that can hit such a small target. Interestingly, the RCPA goal is tighter than the Ricos goal near the upper end of the reference range.
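To make these differences concrete, here is a minimal sketch (our own illustration, not part of any published tool) that converts each goal in the table into an absolute tolerance at two potassium levels. Note how the RCPA goal undercuts the Ricos goal once the level rises above 4.0 mmol/L:

```python
# Convert each published TEa goal for potassium into an absolute tolerance
# (mmol/L) at a given concentration. Values come from the table above; the
# function itself is just our illustration.

def tea_mmol(source: str, level: float) -> float:
    """Allowable total error (± mmol/L) at a given potassium level (mmol/L)."""
    if source == "CLIA":                      # units-based goal
        return 0.5
    if source == "Ricos":                     # desirable biologic goal, 5.8%
        return level * 0.058
    if source == "RCPA":                      # concentration-dependent goal
        return 0.2 if level < 4.0 else level * 0.05
    if source in ("Rilibak", "SEKK", "Spanish"):
        return level * 0.08
    raise ValueError(f"unknown source: {source}")

for level in (3.0, 5.0):                      # a low and a high-normal level
    for source in ("CLIA", "Ricos", "RCPA", "Rilibak"):
        print(f"K = {level:.1f} mmol/L  {source:8s} ± {tea_mmol(source, level):.2f} mmol/L")
```

At 5.0 mmol/L, for example, the RCPA tolerance (± 0.25 mmol/L) is tighter than the Ricos tolerance (± 0.29 mmol/L), while CLIA allows a full ± 0.50 mmol/L.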

Where can we find data on method performance?

Here is our standard caveat for data comparisons: it's really hard to get a good cross-instrument comparison study. While every lab has data on their own performance, getting data about other instruments is tricky. If we want true apples-to-apples comparisons of performance, moreover, we need to find a study that evaluates different instruments within the same laboratory under similar conditions. Frankly, those studies are few and far between. Not many labs have the time or the space to conduct head-to-head studies of different methods and analyzers. I've seen a lot of studies, but few gold-standard comparisons.

Proficiency Testing surveys and External Quality Assessment Schemes can provide us with a broad view, but the data from those reports is pooled together. Thus, we don't get individual laboratory standard deviations; instead, we get the all-group SD or perhaps the method-specific group SD. As individual laboratory SDs are usually smaller than group SDs, the utility of PT and EQA reports is limited. If we want a better idea of what an individual laboratory will achieve with a particular method, it would be better to work with data from studies of individual laboratories.
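To illustrate why the pooled numbers overstate individual imprecision, here is a small simulation with invented numbers: 100 labs, each running with a tight within-lab SD, but with lab means spread around the target.

```python
# Illustrative simulation only: each lab is precise around its own (biased)
# mean, but the spread of lab means inflates the pooled all-group SD.
import numpy as np

rng = np.random.default_rng(42)
n_labs, n_results = 100, 20
lab_means = rng.normal(4.0, 0.10, n_labs)          # between-lab bias spread
within_sd = 0.05                                   # each lab's own SD
results = rng.normal(lab_means[:, None], within_sd, (n_labs, n_results))

print(f"typical individual-lab SD: {results.std(axis=1, ddof=1).mean():.3f}")
print(f"pooled all-group SD:       {results.std(ddof=1):.3f}")
# Expect roughly sqrt(0.10**2 + 0.05**2) = 0.112 for the group SD,
# about double the within-lab SD of 0.05.
```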

In the imperfect world we live in, we are more likely to find individual studies of instrument and method performance: a lab evaluates one instrument and its methods, comparing it to a "local reference method" (usually the instrument it's going to replace). While we can get Sigma-metrics out of such studies, their comparability is not quite as solid. It's more of a "green-apple-to-red-apple" comparison.

Given those limitations, here are a few studies of major analyzer performance that we've been able to find in recent years:

  • Evaluation des performances analytiques du système UniCel DxC 600 (Beckman Coulter) et étude de la transférabilité des résultats avec l'Integra 800 (Roche Diagnostics) [Evaluation of the analytical performance of the UniCel DxC 600 (Beckman Coulter) system and study of the transferability of results with the Integra 800 (Roche Diagnostics)]. A. Servonnet, H. Thefenne, A. Boukhira, P. Vest, C. Renard. Ann Biol Clin 2007;65(5):555-62.
  • Validation of methods performance for routine biochemistry analytes at Cobas 6000 series module c501. Vesna Supak Smolcic, Lidija Bilic-Zulle, Elizabeta Fisic. Biochemia Medica 2011;21(2):182-90.
  • Analytical performance evaluation of the Cobas 6000 analyzer – special emphasis on trueness verification. Adriaan J. van Gammeren, Nelleke van Gool, Monique J.M. de Groot, Christa M. Cobbaert. Clin Chem Lab Med 2008;46(6):863-71.
  • Analytical Performance Specifications: Relating Laboratory Performance to Quality Required for Intended Clinical Use [cobas 8000 example evaluated]. Daniel A. Dalenberg, Patricia G. Schryver, George G. Klee. Clin Lab Med 2013;33:55-73.
  • The importance of having a flexible scope ISO 15189 accreditation and quality specifications based on biological variation – the case of validation of the biochemistry analyzer Dimension Vista. Pilar Fernandez-Calle, Sandra Pelaz, Paloma Oliver, Maria Jose Alcaide, Ruben Gomez-Rioja, Antonio Buno, Jose Manuel Iturzaeta. Biochemia Medica 2013;23(1):83-9.
  • External Evaluation of the Dimension Vista 1500 Intelligent Lab System. Arnaud Bruneel, Monique Dehoux, Anne Boutten, Anne Barnier. Journal of Clinical Laboratory Analysis 2012;26:384-97.
  • Analytical evaluation of the clinical chemistry analyzer Olympus AU2700 plus. Jasna Juricek, Lovorka Derek, Adriana Unic, Tihana Serdar, Domagoj Marijancevic, Marcela Zivkovic, Zeljko Romic. Biochemia Medica 2010;20(3):334-40.
  • Bias (Trueness) of Six Field Methods Estimated by Comparison to Reference Methods and Comparability of Results within a Family of Analyzers. D. Armbruster, Y. Lemma, G. Marseille. Poster at the 2011 AACC conference.

These studies in turn can give us multiple data points (for instance, CVs for high and low controls) for some instruments, and we can attempt to evaluate each instrument's ability to achieve different quality requirements.
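For readers who want to follow along, the Sigma-metric behind these evaluations is simply (TEa - |bias|) / CV, with all terms expressed in percent. A minimal sketch, using hypothetical bias and CV figures rather than data from the studies above:

```python
# Sigma-metric: how many standard deviations of imprecision fit between the
# method's bias and the allowable total error. Numbers below are placeholders.

def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

# A hypothetical method with 1.0% bias and 1.5% CV, judged against two goals:
for tea in (5.8, 8.0):          # Ricos goal vs. Rilibak/SEKK/Spanish goal
    print(f"TEa {tea}% -> Sigma {sigma_metric(tea, 1.0, 1.5):.1f}")
# TEa 5.8% -> Sigma 3.2;  TEa 8.0% -> Sigma 4.7
```

Notice that the same method can look marginal against the Ricos goal and comfortable against the 8% goals; the choice of quality requirement drives the verdict.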

Can any instrument hit the Desirable Biologic TEa Goal for Potassium?

Here is a Method Decision Chart using the "Ricos Goal" of 5.8%. Please note, we're using the Normalized Method Decision chart to make our evaluation.

[If you're not familiar with Method Decision Charts and Six Sigma, follow the links here to brush up on the subject.]
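Briefly, a normalized chart re-expresses a method's bias and CV as percentages of TEa, so operating points judged against different goals can share one set of Sigma lines. Here is a rough matplotlib sketch of such a chart, with a single hypothetical operating point:

```python
# A minimal Normalized Method Decision Chart: the diagonal lines mark where
# bias + sigma * CV = TEa, with bias and CV scaled as percentages of TEa.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for sigma in (2, 3, 4, 5, 6):
    ax.plot([0, 100 / sigma], [100, 0], label=f"{sigma} Sigma")

# Hypothetical operating point: bias = 20% of TEa, CV = 15% of TEa (~5.3 Sigma)
ax.plot(15, 20, "ko")
ax.set_xlim(0, 50)
ax.set_ylim(0, 100)
ax.set_xlabel("Imprecision (CV, % of TEa)")
ax.set_ylabel("Inaccuracy (bias, % of TEa)")
ax.set_title("Normalized Method Decision Chart")
ax.legend()
plt.show()
```

A point that falls below the 6 Sigma line is world-class; points that land up near the 2 and 3 Sigma lines are poor performers against that goal.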

Potassium Comparison - Ricos Goal

As you can see, this is a quality requirement that provides significant differentiation of method performance. Some methods can hit the bulls-eye all the time or some of the time; others are more marginal on average. The DxC method has good precision but a huge bias; that may be a problem of the comparison methodology, not necessarily the DxC method's fault. We'd probably like to see another DxC study before concluding that the method itself has a problem.

Can any instrument hit the Rilibak interlaboratory comparison TEa goal for Potassium?

Potassium Performance Comparison Rilibak Goal

Here we see that more methods are getting into the bulls-eye, but even this quality requirement is not a home run for all methods.

Can any instrument hit the CLIA TEa Goal for Potassium?

Potassium Comparison CLIA goal

In this final graph, we see that almost all methods can hit the bulls-eye. Because the CLIA quality requirement is a "units-based" goal, the effective goal at lower levels is quite large: not 5% or 8%, but as large as 14%, 24%, and beyond. That certainly makes CLIA an easier goal to achieve. Nevertheless, not every method can make it. There are still methods that can't hit the bulls-eye, or can't hit it consistently.
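A quick back-of-the-envelope calculation shows how the fixed ± 0.5 mmol/L goal translates into percentages across the range:

```python
# The CLIA goal for potassium is ± 0.5 mmol/L at every level, so the effective
# percentage goal grows as the concentration falls.
for level in (6.0, 5.0, 3.5, 2.1):
    print(f"K = {level:.1f} mmol/L -> effective TEa = {0.5 / level:.1%}")
# 6.0 -> 8.3%, 5.0 -> 10.0%, 3.5 -> 14.3%, 2.1 -> 23.8%
```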

Conclusion

With quality requirements, there is no one source that is completely perfect and practical. Laboratories need to be careful when selecting quality goals: they must choose all the goals they are mandated to meet (in the US, labs must be able to achieve CLIA targets), while also making sure their goals are practical and in harmony with the actual clinical use of the test in their health system.

It appears here that CLIA is one of the wider targets, but it still provides some discriminatory power. Rilibak and Ricos goals are not impossible to achieve and can provide further differentiation of method performance. This looks like a case where labs are free to choose whichever quality goal they prefer (except for the US labs, of course).

Remember, from our perspective, a goal that no one (no method) can hit is not useful, except perhaps to instrument researchers and design engineers (for them, it becomes an aspirational target, to design the next generation of methods to achieve that ultimate level of performance). Neither is it helpful to use a goal that every method can hit - that may only indicate we have set the bar too low, and that actual clinical use could be tightened and improved to work on a finer scale.

Like so much in laboratory medicine, there is still a lack of standardization and harmonization, and a healthy debate about what needs to be achieved. In future articles, we hope that we can make it easier to see the differences in quality requirements, method performance and the practical choices that labs can make.
