
20 Questions on Hematology QC

At the LabRoots Webinar on Hematology Quality Assurance, we had a lively exchange with participants. Here are 20 questions that we weren't able to answer in the time allotted. It turns out, many of the questions were about more than just hematology...


Sten Westgard, MS
September 2018

Question 1: Are the QC principles similar for molecular assays? Are there additional QC requirements?

A: "Molecular assays" is a broad term, encompassing many types of testing methodologies. In principle, if an assay generates numerical results, and a material can be tested that produces a normal distribution, or a log-transformed normal distribution, then we can apply the traditional statistical quality control techniques. There are definitely two areas where molecular assays need additional QC effort. The first is establishing allowable analytical total errors: laboratories need to understand the context in which the assay results are interpreted, so they can derive an appropriate allowable total error. Second, when these molecular assays have complex pre-analytical steps, they will require much more non-statistical quality control monitoring.

 

Question 2: Can we assess the difference between lots using the vendor-provided values?

A: The gold standard now for comparing the acceptability of between-lot differences of reagents is the testing of patient samples. CLSI EP26 is a specific guideline on how to conduct this comparison. Where EP26 is a little vague is the allowable difference, which it refers to as a critical difference. The guideline doesn't provide any specific numbers for any analytes for this term, but it can be surmised that this is a fraction of the allowable total error. So if you know the allowable total error, you can, as a rule of thumb (a heuristic, not a scientific calculation), accept about 1/3 of the allowable total error as an acceptable difference between reagent lots, as measured by patient sample differences.
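The 1/3-of-TEa rule of thumb can be sketched in a few lines of code. The function name and the hemoglobin numbers below are illustrative assumptions, not part of EP26:

```python
# Sketch of the lot-to-lot acceptance heuristic described above:
# accept the new reagent lot if the patient-sample difference is
# within about 1/3 of the allowable total error (TEa).

def lot_difference_acceptable(old_lot_mean, new_lot_mean, tea_percent):
    """Return True if the mean patient-sample difference between
    reagent lots is within ~1/3 of TEa (both expressed in percent)."""
    observed_diff_pct = abs(new_lot_mean - old_lot_mean) / old_lot_mean * 100
    critical_diff_pct = tea_percent / 3
    return observed_diff_pct <= critical_diff_pct

# Example: hemoglobin with a 7% TEa; patient means of 14.0 vs 14.2 g/dL
print(lot_difference_acceptable(14.0, 14.2, 7.0))  # ~1.43% vs 2.33% limit -> True
```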

 


Question 3: Are there articles in the literature on the need for additional hematology QC guidelines?

A: When it comes to actual practice of QC, there’s usually very little in the literature, but there is at least one paper by Cembrowski et al encouraging labs to adopt “insensitive” rules for QC. This terminology is a bit confusing – he is not advocating for less sensitivity, but instead he is acknowledging that we don’t need to be so over-sensitive (that is, using very tight control limits) on a number of hematology assays.
Cembrowski G, et al. Rationale for using insensitive quality control rules for today's hematology analyzers. Int J Lab Hematol. 2010 Dec;32(6 Pt 2):606-615. doi: 10.1111/j.1751-553X.2010.01229.x. PMID: 20402824; PMCID: PMC3038198.



Question 4: To define a mean and standard deviation for a hematology control, what is the solution you propose?

A: There are a number of options. The first, ideal, option is to perform a considerable overlap of old and new controls, taking up to 5 days and accumulating multiple measurements per day, trying to gather 20 data points so that you can establish a preliminary mean and standard deviation for the new lot before the old lot expires or is retired. In biochemistry, the recommendation is to consider a 20-day overlap, which in hematology feels impractical (given that the lifespan of a control is already only 2-3 months at best). Another approach is to quickly establish a new mean (just 8-10 measurements can confidently establish a new mean, so a short overlap is all that is necessary), then take the OLD CV and multiply that percentage by the new mean: OLD CV * NEW MEAN = TEMPORARY NEW SD. You are essentially assuming that the CV previously observed is stable and will be replicated in the new control lot – if that assumption is not warranted, don't use this approach. This temporary new SD can be used to set control limits while the appropriate number of measurements is accumulated. Once you have 20 or more data points on the new control lot, recalculate the SD and CV and reset the control limits.
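The temporary-SD shortcut can be expressed directly. This is a minimal sketch; the CV and mean values below are illustrative assumptions:

```python
# OLD CV * NEW MEAN = TEMPORARY NEW SD, as described above.
# Valid only while the assumption of a stable CV across lots holds.

def temporary_sd(old_cv_percent, new_mean):
    """Scale the old lot's CV (in percent) by the new lot's mean."""
    return old_cv_percent / 100 * new_mean

# Old control lot ran at CV = 2.5%; new lot mean from ~10 measurements = 7.8
sd = temporary_sd(2.5, 7.8)
print(round(sd, 3))  # 0.195
```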


The use of manufacturer ranges, package insert ranges, and other externally imposed ranges is highly discouraged. No matter how the manufacturer sweet-talks you about using a wider range, the best practice is to use YOUR RANGE and YOUR MEAN. Whenever the manufacturer tries to get you to use a different mean or a different (wider) range, they are not trying to control the method, they are trying to control you (and get you to stop asking for technical support).

 


Question 5: Is it correct to require at least 30 days of running 2 different lots of control at the same time, until I have an accurate estimation of the new mean and SD?

A: This is similar to the previous question. In biochemistry, this practice is more common, and ideally we would like to replicate it in hematology. We would get better means and SDs if we could have a long overlap of the new and current control lots. But this would essentially mean that most of the time we would be overlapping controls, and only rarely would we be running just one set of controls. So yes, this is a great practice, and no, I don't think most labs have the resources and time to implement it.

 

 

Question 6: How do you determine the Lab’s Level of Sigma Performance?

A: Let's take two approaches to this question. The first, simplest, approach is to restate the equation: your Sigma-metric is (TEa – Bias)/CV, where TEa represents the allowable analytical total error in percent, Bias is the absolute percentage of bias, and CV is the percentage of imprecision. (You can do this equation in units, too, but it's most commonly worked in percentages.)
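Restated as code, with illustrative numbers (the TEa, bias, and CV below are assumptions for the example, not recommended goals):

```python
# Sigma = (TEa - |Bias|) / CV, with all three terms in percent.

def sigma_metric(tea_percent, bias_percent, cv_percent):
    """Compute the Sigma-metric from TEa, bias, and CV (all in %)."""
    return (tea_percent - abs(bias_percent)) / cv_percent

# Example: a test with TEa = 7%, bias = 1%, CV = 1.5%
print(sigma_metric(7.0, 1.0, 1.5))  # 4.0
```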

The second, more complicated, approach to this question is to focus on the word “level.” That is, at which level is the Sigma-metric calculated? Given that many hematology controls are run in triplicate, does this mean you have three Sigma-metrics per test? Or do you average them? Or choose the best? Or worst? We advocate that labs choose the most critical decision level in the medical interpretation of the test results, and focus on the performance data and Sigma-metric closest to that.

 

Question 7: Should I reset/restart the Westgard multirules on a 1:3s rejection rule or continue to apply?

A: Again, there are multiple interpretations of this specific question. The first is rule interpretation. If the more modern and optimized versions of the “Westgard Rules” start with the 1:3s rule, then when that rule is violated, you are already rejecting the run – you don’t need to check any other rules to determine if you need to reject the run. Note that under simplified “Westgard Sigma Rules” even a Six Sigma test will still use the 1:3s rule as a rejection rule. And this will also apply for any test that has 5, 4, 3, 2, and even lower Sigma. A 1:3s rule can be used as a rejection rule in nearly every test.

However, if we take the question to mean how do we use the “Westgard Rules” to trouble-shoot, then we CAN keep interpreting all the rest of the Westgard Rules. So while the 1:3s violation has told us that we need to reject the run, we can also check the 2of3:2s, 3:1s, 6:x rules to see if it looks like there is a shift or a drift. And we can check the R:4s to see if there may be a random error. These additional “Westgard Rule” interpretations will help us trouble-shoot and diagnose what kind of error might be occurring.

It may be easier to think of it this way: we want to use as few rejection rules as possible, but once we have to reject a run, we want to use any rule at all that will be helpful to us in trouble-shooting.
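The reject-then-diagnose workflow above can be sketched as follows. This is a simplified illustration operating on z-scores of consecutive control results; in practice the R:4s rule is usually evaluated within a run across control levels, and the exact rule set should match your QC design:

```python
# Stage 1: reject on 1:3s. Stage 2: consult the remaining rules only
# as trouble-shooting hints about the type of error.

def check_rules(z_scores):
    """Return (reject, hints) for a list of consecutive QC z-scores."""
    reject = any(abs(z) >= 3 for z in z_scores)            # 1:3s rejection
    hints = []
    for i in range(len(z_scores) - 1):
        if z_scores[i] >= 2 and z_scores[i + 1] >= 2:      # 2:2s -> shift
            hints.append("2:2s (shift high)")
        if z_scores[i] <= -2 and z_scores[i + 1] <= -2:
            hints.append("2:2s (shift low)")
        if abs(z_scores[i] - z_scores[i + 1]) >= 4:        # R:4s -> random error
            hints.append("R:4s (random error)")
    for i in range(len(z_scores) - 3):                     # 4:1s -> drift
        window = z_scores[i:i + 4]
        if all(z >= 1 for z in window) or all(z <= -1 for z in window):
            hints.append("4:1s (drift)")
    return reject, hints

# A run where a 3SD outlier caps a climbing trend:
reject, hints = check_rules([1.1, 1.4, 1.8, 2.4, 3.2])
print(reject, hints)
```

Here the 1:3s violation alone forces the rejection, while the 2:2s and 4:1s hints point toward a systematic shift or drift rather than random error.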

 

Question 8. What is the Best Practice for the use of hematology controls in the laboratory?

A. CLSI has covered this ground thoroughly with H26-A2.

 

Question 9. What if we judge a run as rejected whenever any rule is violated across the 3 levels of controls?

A. If we assume that this means using the full set of “Westgard Rules” for 3 levels, that practice of using all the rules for rejection is appropriate if you have a 3 Sigma quality test (traditionally thought of as the minimally acceptable level of quality). If you have a 4, 5, or 6 Sigma test, however, you are doing “QC overkill” using more rules than necessary, and possibly rejecting runs that may have small errors present, but not errors that will be medically important.

 

Question 10. We have [Company X] hematology analyzers and we use controls provided by Company X which are manufactured by Company Y. Are these third party controls?

A. If the manufacturer of the control and the manufacturer of the reagent are separate, then the control is independent. The danger of using a control from the manufacturer is when the manufacturer makes the control AND the reagent. Then there is a temptation, too often indulged, to massage the control so that it gives consistently happy results about the reagent. Every impulse in the manufacturer’s financial structure encourages them to manipulate their controls and modify their user control limits – for every silenced customer leads to higher profit margins.

 

Question 11. What are your thoughts on an IQCP for hematology, given the lack of lot-to-lot variation?

A. We are strong critics of the IQCP – particularly when it is used arbitrarily to reduce QC frequency to a lunar-based schedule (once a month, that is, literal lunacy). IQCPs are not really risk management tools, they are rationalizations for reduced QC. We don’t encourage labs to adopt IQCP for chemistry or any other discipline, unless absolutely mandated. So we don’t encourage labs to use it for hematology either. That said, we do obviously have the new QC frequency recommendations included with the latest “Westgard Sigma Rules” – which could be incorporated into an IQCP. In that case, you have a data-driven, evidence-based technique to help you determine QC frequency, and we would support that.

 


Question 12. In a single CBC run, with several parameters, will Sigma-metrics vary? How should we collectively rank Sigma in a single CBC?


A: Each parameter will have its own Sigma-metric, just as each biochemistry analyte has its own Sigma-metric in the chemistry world. Even though sodium, chloride, and potassium may all be electrolytes, they will have separate Sigma-metrics. This means that some instruments may provide a flawed CBC, where the hemoglobin is great but the erythrocyte count is poor. The lab needs to evaluate the quality of all the CBC parameters and choose the best overall instrument. We don't expect pure perfection in any instrument, but we want to get as close as possible.

 

Question 13. Is a 4:1s violation a rejection rule? Should I perform corrective action of this rule violation? What might it suggest?

A. Before we answer this question, we need to ask ourselves another question: what is the Sigma-metric of this test? If we have a 4 Sigma or worse test, then a 4:1s rule is indeed a rejection rule, and its violation would suggest that a systematic error has occurred; your trouble-shooting might examine whether or not a recalibration can fix the problem. However, if the test is 5 or 6 Sigma, a 4:1s violation is not a medically significant error – you don't need to reject the run. You could decide to treat the 4:1s as a warning of some developing problem, but it wouldn't require an immediate rejection of patient samples.
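This sigma-dependent choice of rules can be sketched as a simple lookup. The rule sets below follow the general pattern of the "Westgard Sigma Rules" but are an approximation for illustration, not the authoritative published chart:

```python
# Higher-Sigma tests need fewer rejection rules; lower-Sigma tests need more.

def rules_for_sigma(sigma):
    """Return an illustrative set of rejection rules for a given Sigma-metric."""
    if sigma >= 6:
        return ["1:3s"]
    if sigma >= 5:
        return ["1:3s", "2:2s", "R:4s"]
    if sigma >= 4:
        return ["1:3s", "2:2s", "R:4s", "4:1s"]
    return ["1:3s", "2:2s", "R:4s", "4:1s", "8:x"]

print("4:1s" in rules_for_sigma(5.5))  # False -> treat 4:1s as a warning only
print("4:1s" in rules_for_sigma(3.8))  # True  -> 4:1s is a rejection rule
```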

 

Question 14. For historical CV, can we use PT CV instead if they have the same analytical concentration?

A. We would not consider this a good practice. Using a group CV from the EQA/PT program, even if it is at the similar level as your control, incorporates too much variation from multiple labs. It will give you an artificially wide control range, which will leave you blind to possible errors.

 

Question 15. [Company Z] is using a 6 Sigma to set the SDs on their instruments. Can we assume that their 6 Sigmas are correct or do we still need to calculate our own?

A. I must confess I haven’t seen the internal calculations that [Company Z] is claiming to use to determine Sigma-metrics or control ranges for their customers. From looking at the way customer ranges are being set, however, I am distrustful of this practice. The ranges look too wide – and there is no transparency about where the metrics, SDs, ranges, etc are coming from. Anytime the manufacturer gives you a wider range but doesn’t explain and share the data used to justify this range, it’s a bad sign. It doesn’t matter if they are invoking “6 Sigma” as the reason – it still looks like they are simply pushing your ranges wide so that you stop complaining and they save money on technical support.


Question 16. How do you implement 3rd party controls under a service contract that depends on vendor controls for trouble-shooting?

A. Here we deviate from science into economics and politics. Often the higher-level decision makers, be they in the hospital administration, the health system, or the government funding mechanisms, place an overwhelming emphasis on bringing down costs while assuming that quality is not impacted. In this case, undoubtedly the use of manufacturer controls is "cheaper," because the manufacturer will make more money on the instrument and reagent side and can afford to sell controls below cost. However, the quality is degraded. I can't pretend that I can rewrite policies or change laws or renegotiate contracts. But at the contracting and bidding level, the language needs to be written in from the very beginning that independent controls must be used, and that the quality control of the tests, indeed the Sigma-metric of the tests, must meet a minimum acceptable level of performance, or the manufacturer must pay some sort of penalty to improve the quality. Right now manufacturers get away with low bids that end up in longer, higher costs of ownership. Buying a cheap car that is prone to engine failure and has faulty dashboard monitors ends up costing you a lot more than investing in a reliable vehicle in the first place.

In the immediate short term, you need to get the contract language with the manufacturer amended so that third party controls are included. If the manufacturer refuses, this tells you a lot: they are unwilling to make you, their customer, happy at the expense of their profit margin. Perhaps that shows they don't really care about quality at all.

 

Question 17. What is total allowable error? What data of allowable total error can be used for Six Sigma calculations?

A. There's a vigorous debate, decades old now and rekindled recently, about the best way to estimate errors in laboratory measurements. Whenever you have one measurement, you inherently have a total error in the result that combines both inaccuracy and imprecision. The Total Error theory and approach was introduced by James O. Westgard many decades ago, and this is the dominant paradigm by which laboratories assess their error. Bear in mind, almost every EQA/PT program sets its goals for acceptance in a way that can only be an allowable total error. (Also note, this allowable total error is actually just the analytical allowable total error – there are other errors that occur in the pre-analytical and post-analytical phases of the testing process.)
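As a concrete illustration of how observed performance can be compared against an allowable total error, one common formulation combines bias and imprecision as TE = |bias| + z * CV, with z = 1.65 a frequent choice. The numbers below are illustrative assumptions:

```python
# Combine inaccuracy (bias) and imprecision (CV) into one observed
# total error estimate, then compare it against the allowable total
# error (TEa). All terms in percent.

def observed_total_error(bias_percent, cv_percent, z=1.65):
    """Observed TE = |bias| + z * CV."""
    return abs(bias_percent) + z * cv_percent

te = observed_total_error(1.0, 2.0)
print(te)          # 4.3
print(te <= 7.0)   # meets an assumed 7% TEa -> True
```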

When you are looking for allowable total errors, there are multiple resources available around the world for you. Westgard.com has tried to consolidate those in our own Quality Requirements section.

Ultimately, you as a laboratory director need to assess each allowable total error and decide if it is appropriate for your lab and your patient population. Sometimes the government (CLIA) or the accreditation body (CAP) will define the goals for you, but often you need to use your judgment.

 

Question 18. Is the use of the “patient control” obsolete?

A. Far from it. I think the new tools and techniques for quality control are showing us how to reduce the number of rules, the number of controls, and the frequency of running QC. But that opens the question: what do we do in between these reduced runs of controls? If we have more and more patient samples running in between controls, how will we recover if the next control is out? One answer is to implement "moving averages" – a technique well known in hematology but rarely used in other areas of the laboratory. Known by many terms ("average of patients," "average of normals," "patient data QC"), this technique tries to take advantage of the "free QC" that patient samples provide: by filtering out abnormal values, it creates a real-time average that should be stable. If that average, updated continuously, starts to fluctuate, that is a good indicator that something may be wrong with the method (and now is a good time to run some new controls to figure out what kind of problem may be occurring). So the future technology trends, in our opinion, point toward more use of "patient controls," not less – the main obstacle is having the informatics support and developing the algorithms to generate that moving average.
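A minimal "average of normals" sketch follows. The truncation limits, window size, and patient values below are illustrative assumptions, not validated settings, and real implementations (e.g. Bull's algorithm) use more sophisticated smoothing:

```python
from collections import deque

# Filter out abnormal patient results, then track the running mean of
# the last N accepted results as a real-time stability indicator.

def patient_moving_average(results, low, high, window=20):
    """Yield the running mean of the last `window` in-range patient results."""
    accepted = deque(maxlen=window)
    for r in results:
        if low <= r <= high:          # truncate abnormal values
            accepted.append(r)
            yield sum(accepted) / len(accepted)

# Hemoglobin stream (g/dL); 6.5 and 21.0 fall outside the truncation limits
hgb = [13.1, 14.2, 6.5, 13.8, 14.5, 21.0, 13.9]
averages = list(patient_moving_average(hgb, low=12.0, high=18.0))
print(round(averages[-1], 2))  # 13.9
```

A drift in this running mean between QC events would prompt an extra control run.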

[Note: if the term “patient control” is referring to the practice of building patient pools and using those instead of commercial controls, then yes, that practice is obsolete for many tests. The only time this is a recommended practice is when you can’t find a commercial control at all for a new exotic test.]

 

Question 19. What strategies can we use if we have a bad QC material, especially one that has deteriorated during shipping?

A. This can be particularly frustrating for hematology controls, which have such short shelf lives. However, ultimately these failures are the fault of the vendor, either the control vendor or the manufacturer, and they should immediately expedite replacement controls to your laboratory. I don’t believe it is ethical or legal in the US for vendors to force their customers to use failed products. If you can conclusively demonstrate that the control material is compromised, you are the customer, you pay for this service, and the vendor needs to treat you right. If a vendor refuses to appropriately respond to a shipping failure or control failure, they are showing that they really don’t respect you as a customer – they enjoy taking your money, but they don’t want to provide a real service in exchange. Your next step is to make sure they understand the consequences of that disrespect – changing vendors is a very strong indicator and incentive to vendors to provide higher quality. The longer we tolerate poor quality methods and poor quality service from vendors, the more likely this will become the industry standard.


Question 20. With multiple instruments within 4 labs on 3 sites in our health system, we create a single QC gate using data from all the instruments. It is impossible to use the 10x rule. Each instrument has a slight bias to the mean. What rules would you recommend?

A. This is a question we see popping up more and more frequently as labs get bigger and health systems get more consolidated and standardized. It would be convenient to use the same mean and same SD on all instruments in all labs, particularly if they are all from the same vendor. As many of you already know, however, there is no such thing as an “identical” instrument. They are all different – it’s a question of how different each instrument is. I have seen labs that use rigorous weekly patient specimen testing to keep instruments as close as possible to the same mean and same SD. Under some circumstances (for example, if all instruments were performing at 6 Sigma), there is justification for using a standardized mean and SD and control rules. If we take a 6 Sigma example for instance, then the 10x violations are not only not relevant, they are not necessary for our QC design. However, in order to assure that your unified mean and SD are appropriate, you will still need to maintain the individual performance data on each instrument. You will need to know the individual mean and individual SD of each instrument, so that if one of those instruments starts to separate from the herd, you can segregate it from the rest of the data and use the individual performance characteristics to run the QC, until such time as it returns to the herd in its performance.

Thanks again to all who participated in the LabRoots webinar - and to those who posed such interesting questions.