Dietmar Stöckl, QC Reality Check, Part Five
Dr. Dietmar Stöckl continues his series on quality control. This article discusses the need for stability and its relationship to the quality requirement.
Part V: How variable/stable do I want it?
- The ASAP concept
- Application: serum sodium
- General application
- How stable? ASAP in a nutshell
The title question leads us inevitably to “analytical quality specifications” (also called “analytical goals” or “analytical performance specifications”). Analytical quality specifications should be based on the Stockholm approaches and the Stockholm hierarchy (1): i) clinical outcome; ii) questionnaires to clinicians; iii) biological variation; iv) expert opinion; v) state of the art.
Despite ranking only third in the Stockholm hierarchy, specifications based on biological variation are the most straightforward ones and are easily available (http://westgard.com/biodatabase1.htm). Therefore, they will be used as the basis for discussion here. However, quality specifications based on biological variation are not “monolithic”: several variants exist, they are not final, and some concepts are still evolving (2). In this essay I will develop a concept that I call “ASAP”, meaning As Simple As Possible.
In the biological variation concept, analytical quality specifications for monitoring are typically more demanding than those for diagnosis (http://www.stt-consulting.com/doc/1248959417.ppt and other parts of this series). Typical mainstream assays are used in both situations, diagnosis as well as monitoring, so they should fulfil the quality specifications for monitoring. The simplest specification for monitoring is CVa < ½ CVw, where a = analytical and w = within-subject biological variation (3, 4). Taking this as a starting point, we arrive at a specification for systematic error (SE), or more precisely for a change in systematic error (Delta-SE) during monitoring: Delta-SE < 1/3 CVw (5). Note that these fractions are valid ONLY when the other error component is absent (either no random error, RE, or no SE). In real-life situations, both are present and have to be apportioned according to a non-linear relationship (5, http://www.stt-consulting.com).
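As a quick numerical illustration, the two monitoring relations can be sketched in a few lines of Python. This is my own sketch, not part of the original concept: the function name is hypothetical, and the non-linear apportioning of combined RE and SE from reference 5 is deliberately left out.

```python
def monitoring_specs(cv_w):
    """Simple ASAP monitoring specifications derived from the
    within-subject biological variation CVw (all values in %).
    Each limit is valid only when the other error component is absent."""
    cv_a_max = 0.5 * cv_w       # imprecision: CVa < 1/2 CVw
    delta_se_max = cv_w / 3.0   # change in systematic error: Delta-SE < 1/3 CVw
    return cv_a_max, delta_se_max

# Example: serum sodium, CVw = 0.7% (Westgard biodatabase)
cv_a, d_se = monitoring_specs(0.7)
print(round(cv_a, 2), round(d_se, 2))  # 0.35 0.23
```

Running the same function over the CVw values of the mainstream analytes reproduces the 0.4–15% and 0.2–10% ranges quoted below.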
In any case, the two simple formulas above will do for the purpose of this essay (see references 6–8 for additional reading). Applying the concept to the great majority of mainstream serum, plasma, and blood analytes yields CVa values within boundaries of 0.4–15% and Delta-SE values within boundaries of 0.2–10% (http://www.westgard.com/biodatabase1.htm).
I want to stress here that manufacturers need to supply tests whose stable performance is better than these specifications, because working with tests that just meet the maximum specifications will give out-of-specification results 50% of the time, owing to unavoidable variation in test performance over time. This is where internal quality control comes into play.
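The 50% figure can be made concrete with a small simulation, a sketch under assumptions of my own: the period-to-period bias drift is modelled as Gaussian, and the drift magnitude is purely illustrative.

```python
import random

random.seed(1)
SPEC = 0.23       # Delta-SE limit in %, serum-sodium example
DRIFT_SD = 0.05   # assumed period-to-period variation of the bias, % (illustrative)

# A test whose average bias change sits exactly at the specification limit:
periods = [random.gauss(SPEC, DRIFT_SD) for _ in range(10_000)]
out_of_spec = sum(b > SPEC for b in periods) / len(periods)
print(f"about {100 * out_of_spec:.0f}% of periods are out of specification")
```

Because the drift is symmetric around the limit, roughly half of all periods exceed it, whatever value is chosen for DRIFT_SD.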
Application – Serum sodium
The within-subject biological variation of serum sodium (S-Na) is 0.7%, resulting in Delta-SE < 0.23% (the maximum bias for diagnosis is 0.3%). You will immediately ask: do we really need such tight control of S-Na measurements? To answer that question, we look at the long-term stability of patients' data in a laboratory (9). The 50th percentile of one laboratory varied over a period of several years from 138 mmol/L to 141.5 mmol/L (difference = 2.5%). This corresponds to a difference of about ±1.3% from the long-term average of 140 mmol/L. When we investigate the fraction of results <135 mmol/L (indicating mild hyponatraemia), we observe a tripling of mild hyponatraemia in negatively biased periods compared with positively biased periods (15% versus 5%). This tripling is far too much, and process variation <1% is clearly indicated (note: hyponatraemia is by far the most commonly encountered electrolyte disorder in hospital). Because of the utmost importance of stable S-Na values, some have even proposed an “operational definition of normonatremia” of 138 to 142 mmol/L (note: “every mmol counts!”) (10).
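To get a feel for how sensitive the flagged fraction is to a median shift, one can model the patient distribution as roughly Gaussian. This is a simplifying assumption of mine, as is the spread of 3 mmol/L; real hospital sodium distributions are skewed, so the numbers below are illustrative and do not reproduce those of reference 9.

```python
from math import erf, sqrt

def frac_below(cutoff, mean, sd):
    """Fraction of a Gaussian N(mean, sd) population below a cutoff."""
    return 0.5 * (1.0 + erf((cutoff - mean) / (sd * sqrt(2.0))))

SD = 3.0  # mmol/L, hypothetical population spread for illustration

for median in (138.0, 140.0, 141.5):  # negatively biased, central, positively biased
    pct = 100.0 * frac_below(135.0, median, SD)
    print(f"median {median} mmol/L -> {pct:.1f}% below 135 mmol/L")
```

Even under this crude model, shifting the median by a few mmol/L changes the hyponatraemia fraction severalfold, which is the qualitative point of the argument.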
But are such numbers realistic? Looking at the stability data of another laboratory, we observe stability of the 50th percentile at the mmol/L level over 7 years! Indeed, long-term stability of laboratory tests better than 1% can be achieved and, thus, should be required from manufacturers.
How stable? – ASAP in a nutshell
I have shown that analytical quality specifications derived from biological variation are indeed relevant. They are most stringent for S-Na; however, taking S-Na as the benchmark for what can be achieved in practice, process stability better than 1% is a reality. Naturally, there are practical limits, and it is questionable whether a stability of 0.2% can be achieved (11). A 1% stability, however, would suit nearly all analytes except S-Na and S-Cl (http://westgard.com/biodatabase1.htm).
Having defined a low limit for stability in the laboratory (better than 1%), I propose a high limit of 5% on the manufacturer's side. Remember from above that Delta-SE lies within boundaries of 0.2–10% for the great majority of common analytes. To achieve this kind of stability in the laboratory, however, the laboratory itself must receive a test with better stability from the manufacturer: there must be some room for internal quality control!
References
(1) Hyltoft Petersen P, Fraser CG, Kallner A, Kenny D. Strategies to set global analytical quality specifications in laboratory medicine. Scand J Clin Lab Invest 1999;59:585.
(2) Oosterhuis WP. Gross overestimation of total allowable error based on biological variation. Clin Chem 2011;57:in press.
(3) Cotlove E, Harris EK, Williams GZ. Components of variation in long term studies of serum constituents in normal subjects. III. Physiological and medical implications. Clin Chem 1970;16:1028-32.
(4) Harris EK. Statistical principles underlying analytical goal-setting in clinical chemistry. Am J Clin Pathol 1979;72:374-82.
(5) Hyltoft Petersen P, Fraser CG, Westgard JO, Lytken Larsen M. Analytical goal-setting for monitoring patients when two analytical methods are used. Clin Chem 1992;38:2256-60.
(6) Klee GG. Tolerance limits for short-term analytical bias and analytical imprecision derived from clinical assay specificity. Clin Chem 1993;39:1514-8.
(7) Klee GG, Schryver PG, Kisabeth RM. Analytic bias specifications based on the analysis of effects on performance of medical guidelines. Scand J Clin Lab Invest 1999;59:509-12.
(8) Stöckl D, Sluss PM, Thienpont LM. Specifications for trueness and precision of a reference measurement system for serum/plasma 25-hydroxyvitamin D analysis. Clin Chim Acta 2009;408:8-13.
(9) Stepman HCM, Stöckl D, Stove V, Fiers T, Couck P, Gorus F, Thienpont LM. Long-term stability of clinical laboratory data – Sodium as benchmark. Clin Chem 2011;57:accepted.
(10) Wald R, Jaber BL, Price LL, Upadhyay A, Madias NE. Impact of hospital-associated hyponatremia on selected outcomes. Arch Intern Med 2010;170:294-302.
(11) Stöckl D. Desirable performance criteria for quantitative measurements in medical laboratories based on biological analyte variation - hindrances to reaching some and reason to surpass some. Clin Chem 1993;39:913-4.