Basic QC Practices
QC - The Chances of Rejection
Dr. Westgard explains how an analytical testing process works to reject the bad runs and keep the good runs. False rejection and error detection are explained, and the different kinds of problems (precision, accuracy, etc.) are also described. If you've ever wondered whether there was method to your laboratory madness, this article is for you.
- QC error alarm
- QC rejection characteristics
- Expected behavior of different control rules
- Known rejection characteristics
- What to do?
- How do you select QC procedures with appropriate rejection characteristics?
The room or building you're in most likely has a fire alarm or a whole system of fire detectors. What's the chance that a fire will be detected by your alarm system if the source of the fire is:
- one match?
- a whole matchbook?
- a wastebasket?
- your whole desk?
We all want to believe that the alarm system would do its job and we would get out safely, but that assumes that the installation was carefully planned. The actual chance of detection depends on how many detectors were installed, where those detectors are located, and how sensitive the detectors are. If there were a serious fire (i.e., your desk), you would want to be certain that it would be detected before it got out of control, i.e., you want a 100% chance of detection or true alarms. On the other hand, as long as there isn't a serious fire (i.e., one match), you don't want the alarm to go off and interrupt what you're doing, so you want a 0% chance of false alarms.
The fire we want to detect in an analytical testing process is any analytical error that would burn a physician or patient, i.e., destroy the value of the test result that we are providing to the physician or patient. Like a fire detector, a QC procedure is an analytical error detector that sounds an alarm when something happens to the analytical testing process or "method." The alarm is supposed to detect situations of unstable method performance with 100% certainty (or probability of 1.00) and, ideally, shouldn't give any false alarms (0% chance or probability of 0.00) when performance is stable and the method is working okay. You would expect that the chance of detecting an analytical problem will depend on the size of the error occurring, the number of controls used to monitor method performance, and the sensitivity of the statistical rules being applied. You want a high chance or probability of detecting medically important errors, but you don't want to be interrupted by false alarms when the method is working okay.
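To make these ideas concrete, here is a minimal sketch (not from the article) of how the probability of rejection can be estimated for one simple case: control values assumed Gaussian, and a rule that rejects a run whenever any of N control measurements falls outside ±2 standard deviations. With no error present, any rejection is a false alarm; as a systematic shift grows, the chance of detection grows with it.

```python
import math

def normal_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_rejection(shift_sd, n_controls):
    """Probability that a run is rejected by a +/-2 SD control rule.

    shift_sd   -- systematic error, expressed in SD units (0 = stable method)
    n_controls -- number of control measurements per run

    A single control falls outside the +/-2 SD limits with probability
    p_single; the run is rejected if any of the N controls does so.
    """
    p_single = 1.0 - (normal_cdf(2.0 - shift_sd) - normal_cdf(-2.0 - shift_sd))
    return 1.0 - (1.0 - p_single) ** n_controls

# Stable method (shift = 0): every rejection is a false alarm.
print(round(p_rejection(0.0, 1), 3))  # ~0.046, i.e. about a 5% false alarm rate
# A 2 SD systematic shift is caught only about half the time with N=1.
print(round(p_rejection(2.0, 1), 3))  # ~0.5
```

Note how the false alarm rate is not 0% and the detection rate is not 100%: real QC rules trade one off against the other, which is why the choice of rules and number of controls matters.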