Tools, Technologies and Training for Healthcare Laboratories

QP-15: Frequently-Asked-Questions about Quality Planning

Usually, we put the FAQs in our Questions section, but after 14 lessons in Quality Planning, a few questions have come up and it's better to answer them right here. Dr. Westgard clears up some of the common areas of confusion in quality planning. If you still have a question after reading this, please let us know and we'll answer you.

Note: This material is covered in the Basic Planning for Quality manual. Updated coverage of these topics can be found in Assuring the Right Quality Right, as well as the Management and Design of Analytical Quality Systems online course.

Why oh why do I have to worry about quality planning?

Someone needs to worry about the quality of the test results being produced by your laboratory. Don't assume that manufacturers have solved all the problems! Don't assume that methods work perfectly in the real world! Someone in your laboratory has to assume responsibility for managing quality in a quantitative manner. Quality planning is the ingredient that is missing in most laboratories. You can learn how to do it yourself, improve your own skills, and contribute that expertise to your laboratory.

The first step in the quality-planning process is to define the quality required for the test. Where can I find that information?

To get started, it's easiest to use the acceptability criteria from proficiency testing programs or external quality assessment programs. These criteria are generally in the form of an allowable total error, such as Target Value plus or minus a certain percentage or a fixed concentration. For example, the acceptability criterion for a cholesterol test is TV +/- 10%; the criterion for calcium is TV +/- 1.0 mg/dL. For US laboratories, about 80 different tests are covered by the CLIA acceptability criteria.
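To see how such a criterion is applied, here is a minimal sketch that checks a result against a total-error limit of the form TV plus or minus a percentage or a fixed concentration. The target values (200 mg/dL for cholesterol, 9.0 mg/dL for calcium) are hypothetical; the limits are the examples given above.

```python
# Sketch: checking a result against a CLIA-style acceptability criterion
# of the form Target Value +/- a percentage or a fixed concentration.
# Target values below are hypothetical illustrations.

def within_criterion(result, tv, pct=None, abs_limit=None):
    """Return True if result falls within TV +/- the allowable total error."""
    allowable = tv * pct / 100 if pct is not None else abs_limit
    return abs(result - tv) <= allowable

# Cholesterol: criterion TV +/- 10%; with TV = 200 mg/dL the limits are 180-220
print(within_criterion(215, tv=200, pct=10))          # True
# Calcium: criterion TV +/- 1.0 mg/dL; with TV = 9.0 the limits are 8.0-10.0
print(within_criterion(10.2, tv=9.0, abs_limit=1.0))  # False
```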

Aren't the CLIA PT limits too wide to be useful?

You hear that comment from people who are considering only the stable performance of the method, i.e., comparing the imprecision and inaccuracy to the CLIA criterion using a total error equation for stable method performance, such as TE = bias + 2SD. When you want to consider QC performance, another term must be added to account for the "sensitivity" or error detection of the control rules and number of control measurements being used. This term is in the neighborhood of 2 to 3 times the SD for common QC procedures with low numbers of control measurements. Therefore, a quality-planning model that includes QC performance is more demanding and requires that the allowable total error encompass bias plus 4 to 5 times the SD of the method.
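The two models can be compared directly. In this sketch, the bias, SD, and allowable total error are hypothetical figures (all in percent) chosen to show how a method can look acceptable under the stable-performance model yet fail once the QC-performance term is included - which is why the "too wide" complaint misses the point.

```python
# Sketch of the two total-error models described above.
# All figures are in percent and are hypothetical illustrations.

def te_stable(bias, sd):
    """Stable-performance model: TE = bias + 2SD."""
    return bias + 2 * sd

def te_with_qc(bias, sd, factor=5):
    """QC-performance model: adding ~2-3 SD for error detection means the
    allowable total error must cover bias + 4 to 5 times the SD."""
    return bias + factor * sd

bias, sd, tea = 1.0, 2.0, 10.0   # hypothetical method vs. a 10% requirement
print(te_stable(bias, sd) <= tea)      # 5.0 <= 10  -> True (looks fine)
print(te_with_qc(bias, sd) <= tea)     # 11.0 <= 10 -> False (too demanding)
```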

What should I do if CLIA doesn't include acceptability criteria for my test?

There are other sources of information about quality requirements, such as the European recommendations for allowable SDs (or CVs) and allowable bias. You can calculate an allowable total error as the sum of the bias plus 1.65 times the SD (be careful that everything is in the same units). Ricos has recently provided a database that includes 316 different quantities for which total error criteria have been calculated from biologic data. You can also use another type of quality requirement, called the clinical decision interval, which can be thought of as the grey zone for interpreting a test result or the medically important change in a test result. However, you will need access to a computer program, such as the EZ Rules program, which can be used to prepare a chart of operating specifications from a clinical decision interval type of requirement.
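The calculation of an allowable total error from separate bias and imprecision goals is straightforward. The allowable bias and CV figures in this sketch are made-up illustrations, not values from the Ricos database.

```python
# Sketch: combining European-style allowable bias and allowable CV into an
# allowable total error, TEa = bias + 1.65 * CV (both in the same units,
# here percent). The input figures are hypothetical.

def allowable_total_error(allowable_bias_pct, allowable_cv_pct):
    return allowable_bias_pct + 1.65 * allowable_cv_pct

# e.g., an allowable bias of 2.2% and an allowable CV of 2.9%
print(allowable_total_error(2.2, 2.9))   # ~7% allowable total error
```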

How are the method's bias and SD determined?

Initially, the estimates of inaccuracy and imprecision can be made from the method validation studies performed when a test system is being installed. Bias would be estimated from a comparison of methods experiment and the method SD or CV would be determined from a replication experiment. Later on, during the routine operation of a method, bias could be determined from peer comparison programs or from proficiency testing surveys and the method SD or CV could be determined from routine quality control data.
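As a concrete illustration of those first-pass estimates, the sketch below computes a CV from replicate measurements and an average bias from paired comparison-of-methods results. All of the measurement values are made up for the example.

```python
# Sketch: first-pass estimates of imprecision and bias from validation data.
# The replicate and paired-comparison values are hypothetical.
from statistics import mean, stdev

def method_cv(replicates):
    """Imprecision (CV%) from a replication experiment."""
    return 100 * stdev(replicates) / mean(replicates)

def method_bias(test_results, comparison_results):
    """Average bias (%) from a comparison-of-methods experiment,
    taken as the mean difference relative to the comparison method."""
    diffs = [t - c for t, c in zip(test_results, comparison_results)]
    return 100 * mean(diffs) / mean(comparison_results)

reps = [98, 101, 100, 99, 102]            # replicates of one control material
print(round(method_cv(reps), 2))          # 1.58 (%)
test, comp = [102, 99, 104], [100, 98, 101]
print(round(method_bias(test, comp), 2))  # 2.01 (%)
```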

What should I do if there isn't any data on method bias?

Begin by assuming method bias is zero. You should always be able to estimate method imprecision from routine QC data or by performing a replication experiment. That's the essential data. Remember that you can always "replan" the QC procedure when you have better estimates of method performance.

Don't all QC procedures detect errors?

They all detect large errors, but they have different sensitivities for small errors. For many tests, the errors that are medically important fall into the small-error classification; therefore, the choice of control rules and the number of control measurements is very important for assuring the quality of the tests.

What characteristics describe the performance of a QC procedure?

Two terms - the probability of false rejection (Pfr) and the probability of error detection (Ped) - are important for describing the performance of a QC procedure. For practical purposes, we aim for a probability of 0.90 (a 90% chance) of detecting medically important errors and a probability of 0.05 (a 5% chance) or less of a false rejection.

What QC procedures provide high error detection?

Error detection depends on the control rules and number of control measurements. The narrower the control limits, the higher the rejection rate - both for error detection and false rejection. The more rules in a multirule combination, the higher the error detection. The higher the number of control measurements, the higher the error detection. Multirule procedures with Ns of 4 to 6 provide the maximum performance QC procedures that are practical for most laboratories.

What QC procedures have high false rejection rates?

The most serious problem is with the 12s control rule, which applies when a Levey-Jennings control chart has 2 SD control limits. When N=1, Pfr is 0.05 or 5%, which corresponds to the 1 out of 20 points that we expect to see outside of 2 SD control limits. When N=2, Pfr is 0.09 or 9%, which means that approximately 2 out of every 20 runs are expected to be judged out-of-control. When N=3, Pfr is 0.14 or 14%. When N=4, Pfr is 0.18 or 18%, or approximately 1 out of 5 runs observed to be out-of-control when everything is working perfectly. Since CLIA requires a minimum of 2 control materials, or a minimum N of 2, the use of 2 SD control limits leads to nearly 10% false rejections, 10% repeat runs, and 10% waste. The frequent false rejections that result from use of 2 SD control limits are a serious problem, which is why 2 SD control limits should generally be avoided.
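The growth of Pfr with N can be approximated with a simple independence model: each control value has roughly a 4.55% chance of falling outside 2 SD limits under stable operation, so Pfr is about 1 - (1 - 0.0455)^N. This is only a first-order sketch - the figures quoted above come from power-function studies, and the approximation lands close to but not exactly on them at N=3 and N=4.

```python
# Sketch: first-order approximation of Pfr for the 1_2s rule, assuming
# independent control measurements. With stable performance, a single value
# falls outside 2 SD limits about 4.55% of the time, so for N measurements
# Pfr ~= 1 - (1 - 0.0455)**N. (The published 14% at N=3 and 18% at N=4 come
# from more detailed studies; this simple model lands nearby.)

P_OUT = 0.0455  # probability one value exceeds 2 SD limits

def pfr_12s(n):
    return 1 - (1 - P_OUT) ** n

for n in range(1, 5):
    print(f"N={n}: Pfr ~= {pfr_12s(n):.3f}")
```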

Does the OPSpecs chart provide information about probabilities of rejection?

Yes, the probability of error detection is given in the title of the chart in terms of percent, e.g., 90%AQA indicates 90% error detection. The probability for false rejection will be found in the key area of the chart, next to the control rules.

Is the OPSpecs chart the only quality-planning tool available?

No, the OPSpecs chart isn't the only quality-planning tool - just the best tool. You can also use a critical-error graph, which has the size of a medically important error drawn on power curves of QC procedures to show exactly the error detection that will be available. Many analysts find that it is initially easier to understand the critical-error graph and therefore prefer this tool in the beginning. In the long run, however, the OPSpecs chart is a more useful tool and is actually easier to use, even though it is harder to understand because of all the information available on a single chart.

Why are the units on an OPSpecs chart given in percent, rather than concentration?

If concentration units were used, a different OPSpecs chart would be needed for every test application. By presenting the quality requirement, method imprecision, and method inaccuracy all as percentages, a single OPSpecs chart can be used for all tests that have the same quality requirement. This is an advantage for manual applications where a set of OPSpecs charts can be preprinted to cover a wide range of potential applications.

Since the units are already in percent, why is a normalized OPSpecs chart needed?

The original OPSpecs chart must be prepared for a specific quality requirement, e.g., 10%, 15%, or 20%. The chart can only be used for a test having that quality requirement; therefore, you need a whole workbook of preprinted charts - an atlas of maps to provide guidance in the different states of the country. The normalized OPSpecs chart displays the allowable bias and allowable imprecision as a percent of the quality requirement, which allows the chart to be used with any quality requirement. The advantage is that you need only a small number of charts. The disadvantage is that you have to make some calculations to convert the observed bias and observed imprecision into a percent of the quality requirement.
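Those conversion calculations are simple ratios. In this sketch the quality requirement, observed bias, and observed CV are hypothetical figures, all in percent.

```python
# Sketch: converting observed method performance into percent of the quality
# requirement, as needed for a normalized OPSpecs chart. Figures are
# hypothetical and all in percent.

def normalize(observed_pct, quality_requirement_pct):
    return 100 * observed_pct / quality_requirement_pct

tea = 10.0            # quality requirement (allowable total error, %)
bias, cv = 1.5, 2.0   # observed method bias and CV, %

print(normalize(bias, tea))  # 15.0 -> plot bias at 15% of the requirement
print(normalize(cv, tea))    # 20.0 -> plot CV at 20% of the requirement
```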

What's the meaning of the "90%AQA(SE)" in the title of an OPSpecs chart?

The "%AQA" means percent Analytical Quality Assurance, which expresses the expected error detection in percent rather than as a probability. Therefore, 90%AQA means there is a 90% chance of detecting a medically important error. The "(SE)" in the title stands for Systematic Error; therefore, an OPSpecs chart with 90%AQA(SE) describes the conditions necessary to provide 90% detection of medically important systematic errors.

Isn't 50% AQA too low to be useful?

It's always best to achieve 90% AQA whenever possible in order to detect medically important errors in the first run. However, there are situations where it may be too costly to provide 90% AQA because of the high number of control measurements that would be needed. In such cases, a different strategy is needed, such as using a multistage QC procedure that has a startup design that provides 90% AQA (with moderate false rejections if necessary) and a monitoring design that provides moderate error detection with a very low chance of false rejections. Achieving 50% AQA would be okay for a monitoring design.

How does the Total QC strategy compare to the quality-system approach that is being widely touted by NCCLS in its document for QC of unit-use devices?

The quality system approach is to identify all the potential sources of errors or problems in a process, then select an appropriate quality monitor or method of control. NCCLS provides an example "sources of error matrix" that identifies over 100 possible error sources. The resulting quality system makes use of those quality monitors that are necessary to monitor the potential sources of error of an individual method. The Total QC strategy is similar in that it includes other quality monitors or methods of control, but it makes maximum use of statistical QC to monitor as many error sources as possible. In contrast, the NCCLS unit-use guidelines attempt to avoid or minimize the use of statistical QC.

Can I make copies of the worksheet and normalized OPSpecs charts for my own applications?

Yes, you have permission to make copies of the quality-planning worksheet, flowchart, and normalized OPSpecs charts for use in your own applications. You can download them from this website to your computer and then print copies as needed.