Tools, Technologies and Training for Healthcare Laboratories

Specifications too strict? Try lowering your standards

 It's becoming something of a trend in Europe: lowering standards, postponing implementation of standards, and inventing new reasons why the rules don't apply.

An emerging European approach to performance specifications (A Bah, Humbug essay)

February 2024
Sten Westgard, MS

A recent paper from Stavelin and Sandberg raised an eyebrow:

Stavelin A, Sandberg S. Analytical performance specifications and quality assurance of point-of-care testing in primary healthcare. Crit Rev Clin Lab Sci. 2023 Oct 1:1-14. doi: 10.1080/10408363.2023.2262029.

In it the authors laid out a comprehensive strategy to lower the standards of quality at the point-of-care.

The special challenge of POC

POC devices have exploded (in some cases, literally) onto the healthcare marketplace. The promise of rapid results is irresistible. These devices "are typically small and easy to operate, usually requiring little to no laboratory experience. It is tempting to think that users cannot make errors and the device will display an error message in case of failure. Consequently, POCT users, who are often not educated in laboratory medicine, may not see the value of performance quality assurance (QA) on their devices..."

Given that we put POCT devices in increasingly unskilled hands, one might think there is a larger need for QA and QC and associated activities. The authors, however, argue the opposite. Instead, device selection is paramount: make sure the device is fit-for-purpose, and subject it to CLSI EP15 user verification. Even more critically, select devices that have been pre-verified through the unique Scandinavian evaluation of laboratory equipment for point-of-care testing (SKUP). SKUP is demanding, and devices that pass SKUP are therefore worthwhile. Still, those select POCT devices are operated by humans, and those humans will need comprehensive training.

What are performance specifications for POCT?

Performance specifications for core laboratory and "traditional" laboratory testing have existed for decades. It has been an article of faith that goals set for the core laboratory are also the goals that should be set for any other area of testing. If a serum glucose test should be good to within 8% of its true value, why should a glucose meter be given a budget of 20% allowable total analytical error? If the highly skilled staff within the core lab are held to 8%, why are the less skilled, more error-prone staff given more room for error?
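To make those budgets concrete, here is a rough illustration using the familiar allowable total error model; the decision level and the arithmetic below are mine, not the authors'.

```latex
% Allowable total error model (bias plus imprecision at roughly 95% coverage):
\[
  TE \;=\; |\,\mathrm{bias}\,| \;+\; 1.65 \times CV
\]
% Illustration at a glucose decision level of 126 mg/dL (my numbers, not the paper's):
%   TEa =  8%  -> a result may stray up to ~10 mg/dL from the true value
%   TEa = 20%  -> a result may stray up to ~25 mg/dL from the true value
% Same patient, same analyte, but a budget more than twice as wide
% simply because the device sits at the point of care.
```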

In a first, the authors offer a rationalization.

Reasons for Lowering Standards

1. The intended use of the test at the POCT might be different from that of the core laboratory. When you send a sample to the core laboratory, you might want something different than when you test at the POCT. In the core laboratory, of course, the test request does not often come with the clinician's motivation and intent, therefore the core laboratory must impose the strictest possible performance specification. But at the POCT, where the clinician knows what they expect - and doesn't have to communicate it to anyone - they can also allow lower standards. "Thus, for HbA1c POCT for example, different APS can be set depending on whether the test is intended for monitoring or for diagnosing...." The net effect is that for every test, there must be multiple performance specifications, some of which are set only inside the clinicians' heads and are never communicated to the core laboratory. If the clinician included in their test order whether they were monitoring or diagnosing, the core laboratory could also toggle between harder and easier performance specifications on demand.

2. When you need it fast, it doesn't have to be all that correct. "[W]here a glucose measurement is needed to differentiate between hyper- and hypoglycemia in an unconscious diabetic patient...[I]n this situation, it is not necessary to have too strict APS for POC glucose meters." The extended logic is that if you ran the specimen over to the laboratory as a STAT order, they, too, could lower their standards. Depending on the clock, you may have one goal, but a few minutes later, you could have another goal.

3. "[R]epetitive results can be obtained within short time intervals and therefore a trend in the analytical results could be more important to study. Repetitive testing is often not feasible in a hospital lab, which is why POC instruments are commonly used." Now this, actually, has a mathematical basis. By repeating more measurements and interpreting the mean of those results, you can actually reduce the uncertainty of that result, i.e. tighten the confidence interval about that result. At the POCT, as long as you make more measurements, you can tolerate worse and worse quality. Extend that to the core laboratory, you could place in the testing order to run a sample 5-10 times, and they could therefore lower the quality standards there, too. And if we run enough samples, we can lower our quality standards entirely, and put the healthcare system out of business.

4. "Analytical results can be obtained in geographical areas away from the central laboratory to be able to reach out to more people." The performance specification is therefore directly related to how far away you are from care. A test result for a person who is 100 miles from the hospital does not require as good quality as one who is only 50 miles from the hospital. There is a certain logic to it, "[I]t can be accepted that the benefits gained from the use of HIV POCT, with increasted testing rates and reaching out to a larger population, outweigh any undesirable effects of lower sensitivities and specificities." So as you get closer to a core laboratory, we care more about the quality of your results. Geography, again, is destiny. If you're in a hospital, we care more about getting your diagnosis correct. If you're in a rural area, we care less. Callous, but effective.

Is the net effect of any of these rationalizations better quality?

The 4 reasons don't necessarily mandate lower quality. You could conceivably construct a scenario where you need more quality at the POCT when you're about to make a major decision. AMI or not? What does the POCT say? That's a scenario that might take place far away from a core laboratory, but arguably needs increased quality. But the authors conclude the major direction is downward: "In most of these situations, the APS of POC instruments will be less stringent compared to those used in a central laboratory, and the main reason is that the requester is closer to both the analytical and the clinical situation."

The 4 reasons for reduced quality enable a lowering of standards. The worst quality, therefore, can occur when the clinician is right in front of you with POCT.

Why bother with QC for POCT at all?

The authors push the logical conclusions of their thesis even further. "There is only circumstantial and anecdotal evidence that IQC for POCT is useful, no studies have proven that performing IQC on POC instruments will improve patient outcomes, although it has been shown that good analytical quality is associated with the use of IQC. The importance of analyzing liquid control materials is limited for POCT, and therefore, the IQC routines of hospital laboratories cannot be automatically adopted....Performing IQC has no value if there is no response to alarms."

Take a breath, thank the heavens, here is the solution to all of our problems. Worried about unskilled users making critical diagnoses in a short time period? Particularly users who don't understand QC? The solution is to eliminate QC. Again, we could even alleviate the burden of QC in the core laboratory, simply by employing even more unskilled staff who don't understand or respond to QC. Remember, the further away the lab is from a patient, the less quality we need. If we move far enough away, and if we were never trained on how to respond to an out-of-control event, we don't have to run QC anymore.

In a way, this is a great way to reward ignorance. As the skills of the users decline, the rationale for running QC declines. If they don't really know what they're doing - and most importantly, if they don't respond to their QC alarms - we just don't have to do it. Think of the savings! Laboratories simply need to keep reducing the salaries of their staff to reach the desirable level of ignorance.

It's time to stop being silly about something so serious

Perhaps the authors' intent was to shock laboratory scientists into taking the quality of POCT more seriously. But the solution to a quality problem isn't to define quality down. It's to build a better quality system. The increasing lack of skill at the POCT is not something to be enabled, but resisted. Building tortuous excuses to accelerate the degradation of quality is a professional breach.

Quality can be done well at the POCT. It's not a foregone conclusion we have to lower our standards. It may be the easiest approach, but our ethics should guide us higher.