Tools, Technologies and Training for Healthcare Laboratories

FAQs about OPSpecs Charts

Frequently Asked Questions about OPSpecs charts.
Plus, some questions about Coag PT and PTT testing (scroll down past the first section)


March 1997

Frequently Asked Questions about OPSpecs Charts

Why isn't the x-axis labeled CV since imprecision is presented in percent?

We use smeas to maintain consistency with the terminology in the original publications that describe the quality-planning models used to calculate operating specifications. Because the value is presented in %, it is indeed a coefficient of variation, or CV.

Why isn't the size of the error on the x-axis presented in units of concentration?

If concentration units were used on the x-axis, a different OPSpecs chart would be needed for every test application. By using multiples of the standard deviation and presenting them as percents, the same OPSpecs chart can be used for any test that has the same quality requirement. For example, an OPSpecs chart prepared for a TEa of 10% can be used for potassium (TEa of 0.5 mmol/L at a level of 5 mmol/L), cholesterol (TEa of 10% at 200 mg/dL), and albumin (TEa of 10% at 3.5 g/dL) because they have the same allowable total error, even though they have different units.
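As a rough illustration (a hypothetical snippet, not part of the QC Validator program), converting a quality requirement stated in concentration units to a percent is just a matter of dividing by the decision level concentration:

    # Hypothetical sketch (not the QC Validator code): express an allowable
    # total error stated in concentration units as a percent, relative to the
    # decision level concentration.
    def tea_percent(tea_units, decision_level):
        return 100.0 * tea_units / decision_level

    print(tea_percent(0.5, 5.0))     # potassium: 0.5 mmol/L at 5 mmol/L -> 10.0 (%)
    print(tea_percent(20.0, 200.0))  # cholesterol: 20 mg/dL at 200 mg/dL -> 10.0 (%)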

Why is the y-axis labeled biastotl instead of biasmeas?

Biastotl is the sum of biasmeas and biasmatx, the two individual bias terms that can be entered on the parameters screen of the QC Validator program. When we originally described the quality-planning models, we thought it would be useful to have separate inputs for the estimates of bias obtained from method validation studies and from proficiency testing surveys, so we included two bias terms. This has proven confusing for many analysts, so in the future we will probably drop the biasmatx entry and just use biasmeas for any estimate of bias, regardless of the experimental design or source of the data.

In practice, we usually enter the bias value as biasmatx when using an estimate from a proficiency testing survey and leave biasmeas at zero. We enter the bias value as biasmeas when the estimate comes from a method evaluation study, such as the comparison-of-methods experiment.
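As a minimal sketch of how these entries combine (illustrative code only, not the program's own), biastotl is simply the sum of whichever bias entries are supplied:

    # Illustrative only: the bias plotted on the y-axis is the sum of the
    # two entries, expressed in percent.
    def bias_total(bias_meas=0.0, bias_matx=0.0):
        return bias_meas + bias_matx

    # An estimate from a proficiency testing survey is entered as biasmatx,
    # with biasmeas left at zero, as described above.
    print(bias_total(bias_matx=2.0))  # -> 2.0 (%)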

What do TEa and Dint mean in the headings of OPSpecs charts?

TEa refers to the allowable total error for a test. It represents an analytical quality requirement of the form commonly stated for proficiency testing criteria for acceptability.

Dint refers to a decision interval, which is a clinical quality requirement in the form of a medically important change in test results. One or the other of these terms will appear in the heading of an OPSpecs chart to identify the type of quality requirement for which the chart was calculated.

How do I decide whether to use an analytical or clinical quality requirement?

In part, it depends on what information is available. For many tests in the US, CLIA requires laboratories to participate in proficiency testing and to satisfy certain criteria for acceptable performance, which are analytical quality requirements in the form of allowable total errors. In such situations, it is important to know whether your laboratory can be sure of passing proficiency testing. In situations where PT criteria are not available, it may be easier to assess the clinical quality required by your users. You can discuss medically important changes with your users, whereas you can't really talk to them about precision, accuracy, and total error. In cases where the clinical and analytical quality requirements differ, it is important to evaluate both and then design your testing process to satisfy the more demanding requirement.

What is the meaning of the SE in the %AQA(SE) heading on an OPSpecs chart?

SE stands for systematic error. OPSpecs charts can be prepared for either systematic or random error (RE). In general, we recommend optimizing the QC procedure for detection of systematic error, particularly for tests performed by automated systems. There may be occasions, particularly with manual methods, where it would be appropriate to optimize QC performance for detection of random errors.

What does %AQA mean?

%AQA means % Analytical Quality Assurance, which corresponds to the expected error detection, presented as a percent rather than as a probability. Therefore, 90% AQA means the same as a probability of error detection (Ped) of 0.90. Remember that our general objective in selecting a QC procedure is to achieve 90% detection of medically important systematic errors. That's why we have an OPSpecs chart for 90% AQA(SE). For detection of random error, we recommend 80% detection, thus the OPSpecs chart will show 80% AQA(RE). OPSpecs charts may also be prepared for 50% AQA and 25% AQA for both SE and RE.

Why is 80% used instead of 90% AQA for the random error OPSpecs chart?

The power curves for detection of increases in random error (RE) are flatter than those for detection of systematic error (SE). It takes about the same QC effort to achieve 80% detection of random error as 90% detection of systematic error. It would be much more costly to try to achieve 90% detection of increases in random error.

Aren't 50% and 25% AQA too low to be useful?

It's always best to achieve 90% AQA because that means the QC procedure will generally detect a medically important error in the first run in which it occurs (average run length, ARL = 1/Ped; ARL for 90% AQA = 1/0.9 = 1.1 runs). However, there are situations where it may be too costly to provide 90% and another strategy is necessary. One strategy is to design a multi-stage QC procedure that has a startup design with high error detection, used initially to verify that the method is working properly, followed by a second design with low false rejection (and moderate error detection) to monitor ongoing performance. With very stable methods (i.e., a low frequency of problems or errors), it may also be practical to settle for 50% AQA or even 25% AQA, which would mean medically important errors generally wouldn't be detected until the second or fourth run in which they occur. This initially seems unreasonable until you consider that all QC would be a waste of time and effort for a perfectly stable method that never had any problems. While we may not have any perfectly stable methods, we certainly have methods with very good stability, as evidenced by the sometimes daily, weekly, monthly, or even quarterly calibration of some tests.
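A short sketch of the arithmetic behind these statements (purely illustrative): the average run length follows directly from the probability of error detection, so lower %AQA levels mean more runs, on average, before an error is caught:

    # Illustrative: average run length (ARL) to detection is the reciprocal
    # of the probability of error detection (Ped).
    def average_run_length(ped):
        return 1.0 / ped

    for aqa in (90, 80, 50, 25):   # the %AQA levels discussed above
        ped = aqa / 100.0          # %AQA expressed as a probability
        print(aqa, round(average_run_length(ped), 2))
    # 90 -> 1.11 runs, 80 -> 1.25, 50 -> 2.0, 25 -> 4.0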

What's the meaning of the "maximum limits of a stable process" line?

The maximum limits line corresponds to the condition where the QC term in the quality-planning model goes to zero, which assumes a perfectly sensitive QC procedure that can detect all errors. In practice, no QC procedure has such sensitivity, particularly at the low Ns used in clinical laboratories. The power curves show that 90% error detection generally isn't achieved until systematic shifts are at least 2 to 3 times the standard deviation of the method.

Is there any practical use for the "maximum limits" line?

If the z-value is changed from 1.65 to 2, 3, or 4, this line could be used to represent a total error criterion for acceptable method performance, corresponding to the

  • bias + 2s criterion originally recommended in 1974 by Westgard, Carey, and Wold,
  • bias + 3s criterion for acceptability recommended in 1990 by Ehrmeyer and Laessig, or the
  • bias + 4s criterion recommended in 1990 by Westgard and Burnett.

With QC Validator 2.0, the user can change the maximum limits line to any one of these total error criteria. All three of these criteria are included in a "method evaluation decision" chart (MEDx chart) described by Westgard in a paper in Clinical Laboratory Science. This paper also suggests that the OPSpecs chart could be used directly during method validation studies to judge the acceptability of a method, as well as in QC planning for selecting appropriate control rules and Ns. See the references below for more details and discussion.
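As a rough sketch of what changing the z-value does (illustrative only, assuming the line takes the form biastotl + z*smeas equal to TEa when the QC term is zero), the maximum allowable imprecision for a given bias is (TEa - bias)/z:

    # Illustrative only (not the QC Validator code): under a criterion of the
    # form bias + z*s <= TEa, the maximum allowable imprecision for a given
    # bias is (TEa - bias) / z.
    def max_allowable_s(tea, bias, z):
        return (tea - bias) / z

    tea, bias = 10.0, 2.0            # e.g., a TEa of 10% with a 2% bias
    for z in (1.65, 2.0, 3.0, 4.0):  # stable-process line and the three criteria above
        print(z, round(max_allowable_s(tea, bias, z), 2))
    # z=1.65 -> 4.85, z=2 -> 4.0, z=3 -> 2.67, z=4 -> 2.0 (% CV)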

  1. Westgard JO, Carey RN, Wold S. Criteria for judging precision and accuracy in method development and evaluation. Clin Chem 1974;20:825-33.
  2. Ehrmeyer SS, Laessig RH, Leinweber JJ, Oryall JE. 1990 Medicare/CLIA final rules for proficiency testing: Minimum interlaboratory performance characteristics (CV and bias) needed to pass. Clin Chem 1990;36:1736-40.
  3. Westgard JO, Burnett RW. Precision requirements for cost-effective operation of analytical processes. Clin Chem 1990;36:1629-32.
  4. Westgard JO. A method evaluation decision chart (MEDx Chart) for judging method performance. Clin Lab Science 1995;8:277-83.

Coagulation PT & PTT Testing

This question comes from a hematology supervisor at a laboratory in Plattsburgh, NY:

Question: We use unassayed control materials for coagulation PT and PTT testing, and ranges must be established for each lot. Each time ranges are established, the limits are very tight, giving way too many QC outliers. What are your suggestions to help with this?

Answer: If the ranges for a new lot are established in a few assays, or a few runs, or over a short period of time, the QC ranges may only reflect within-run imprecision rather than the longer term variability of the process. That might account for seeing a larger number of outliers than expected.

One strategy for establishing control limits on new lots of materials is to (a) make at least 9 assays over at least 3 days to estimate the new mean, (b) apply the previous estimate of the SD to calculate the new QC limits, then (c) periodically update the mean and control limits after accumulating more data on the new lot of materials. In general, a month's worth of routine data is needed to get a good estimate of the method SD.
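A minimal sketch of that strategy with made-up numbers (the values and function below are hypothetical, just to illustrate the arithmetic):

    # Illustrative sketch: set limits for a new lot using the mean of the
    # new-lot assays but the SD carried over from the previous lot, then
    # update once enough routine data have accumulated.
    def control_limits(new_lot_results, previous_sd, z=2.0):
        mean = sum(new_lot_results) / len(new_lot_results)
        return mean - z * previous_sd, mean + z * previous_sd

    # e.g., 9 PT results (in seconds) collected over 3 days on the new lot
    results = [12.1, 12.3, 11.9, 12.0, 12.4, 12.2, 11.8, 12.1, 12.2]
    print(control_limits(results, previous_sd=0.4))
    # -> approximately (11.3, 12.9) for 2 SD limits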