Why Six Sigma Risk Analysis?

Risk Management has been in the news over the past few years. In some cases, Risk Management has failed us, and failed us badly (see the global financial meltdown of 2008). In other fields, Risk Management is being promoted as the solution to our problems (see POC devices and EQC options, 2003-2011). How do we balance the strengths and weaknesses of Risk Management? By tapping the strength of another quality management technique: Six Sigma.

Here’s a story from Dilbert, slightly modified to fit the context of risk analysis in a medical laboratory.

Dilbert:  “As requested, I did a risk management assessment to evaluate our control plan.”
Pointy haired Boss:  “What did you find?”
Dilbert:  “I concluded that there was no risk of any management.”
Pointy haired Boss:  “Sounds like a good plan!”

Risk management, indeed, sounds like a good plan!  In medical laboratories, it is the responsibility of the top managers to approve the results of risk analysis in the form of an Analytic QC Plan.  In the absence of an adequate understanding of risk analysis by laboratory management, there’s a good chance that risk analysis will lead to bad QC Plans, which, in turn, means there is little risk of any good management coming out of the ISO/CLSI guidelines for risk analysis.

Need for scientific management of quality

In principle, risk management makes good sense and everyone intuitively understands risk in qualitative and subjective terms.  However, in practice, it is difficult to estimate risk in quantitative terms, evaluate the effectiveness of risk mitigation strategies, and decide whether the residual risks are acceptable.  There are notable public examples, such as the recent failures of risk management in off-shore oil drilling in the Gulf of Mexico and in the collapse of financial institutions on Wall Street.  In explaining “How Markets Fail,” John Cassidy describes the failure of risk management [1]:

“Risk management, long an imprecise and intuitive discipline, appeared to have been converted into a hard science.  A fog of acronyms and mathematical symbols obscured what should have been obvious – too many financial institutions were lending heavily to an overheated property market.  Like pilots on a modern airliner, the regulators and Wall Street CEOs had come to rely on the computers to tell them if anything was amiss.  However, the risk management systems that had been put in place at firms such as Citigroup, UBS, and Merrill Lynch had a much shorter record than the autopilots installed on Boeings and airbuses, and they turned out to be far less reliable.  When severe turbulence hit, many of them stopped working.”

The initial deployment of risk analysis for development of QC Plans in medical laboratories is likewise being promoted as science, but the ISO/CLSI approach is qualitative and subjective and, most seriously, requires scientific information that is not readily available in laboratories.  Furthermore, there is no history of practice in medical laboratories upon which to evaluate the ISO/CLSI guidance and to assess its practicality and reliability.  In contrast, Statistical QC procedures have a proven track record in analytical quality management, particularly when integrated with Six Sigma Quality Management, which is inherently risk-oriented.  Therefore, the introduction and application of risk analysis for analytical quality management is best done in conjunction with Six Sigma metrics and related tools.

The sigma concept of a tolerance limit provides guidance for defining intended medical use in the form of an allowable Total Error.  A Method Decision Chart provides a tool for evaluating safety characteristics such as precision and bias in terms of the sigma performance of a method, or its sigma-metric.  The selection of SQC procedures to assure detection of medically important errors can be related to sigma performance using Sigma-metric QC Selection tools or charts of operating specifications (OPSpecs charts), the latter having the same format as the Method Decision Chart.  Long-term performance of analytical methods can also be monitored through EQA and PT programs using sigma-metrics to provide measures of test quality.  Thus, Six Sigma tools and metrics are an integral part of the Analytic Quality System.
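As a simple illustration, the sigma-metric mentioned above is calculated as (TEa – |bias|)/CV, with all quantities expressed in percent at the same medical decision level.  The short Python sketch below shows the arithmetic; the function name and the numeric values are hypothetical, chosen only for illustration and not as recommendations for any particular test.

    # Sketch of the sigma-metric calculation: sigma = (TEa - |bias|) / CV,
    # with TEa, bias, and CV all in percent at the same decision level.
    # The example values are hypothetical, for illustration only.

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Return the sigma-metric for an analytical method."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Example: TEa = 10%, bias = 1.0%, CV = 1.5%  ->  sigma = 6.0
    print(sigma_metric(10.0, 1.0, 1.5))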

Need for objective and quantitative estimates of risk

For risk analysis to become an objective and quantitative tool, it should be integrated into the existing quality management framework, particularly with the Six Sigma concept of defects and the estimation of the defect rate, or the number of defective results, as the practical measures of quality.  Defects and defect rates are fundamental concepts at the heart of Six Sigma Quality Management.  They provide the basic construct for assessing risk on the basis of the quality required for intended use, the observed precision and bias (safety characteristics) of the testing process, and the effectiveness of control mechanisms (detection) for risk mitigation.  For analytic quality and laboratory QC Plans, risk should be estimated as the residual defect rate of the analytical testing process and the number of potentially harmful test results that may be produced by the laboratory.
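One way to make the defect-rate idea concrete is sketched below: for a stable testing process with a given bias and CV, the expected fraction of results whose analytical error exceeds the allowable total error can be computed from the normal distribution and expressed as defects per million.  This is only a sketch with hypothetical values, and no 1.5 SD shift (as used in some industrial Six Sigma tables) is applied.

    # Sketch: expected defects per million results for a stable process,
    # where a "defect" is a result whose analytical error exceeds the
    # allowable total error (TEa).  Hypothetical values; no 1.5 SD shift.
    import math

    def defects_per_million(tea_pct, bias_pct, cv_pct):
        # Two-sided probability of exceeding the tolerance limits,
        # assuming normally distributed analytical error.
        z_hi = (tea_pct - bias_pct) / cv_pct
        z_lo = (tea_pct + bias_pct) / cv_pct
        p_defect = 0.5 * math.erfc(z_hi / math.sqrt(2)) \
                 + 0.5 * math.erfc(z_lo / math.sqrt(2))
        return p_defect * 1e6

    # Example: TEa = 10%, bias = 2.0%, CV = 2.0% (a 4-sigma process)
    print(round(defects_per_million(10.0, 2.0, 2.0)))   # about 32 per million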

A critical weakness of the current ISO/CLSI methodology is the use of scoring or ranking scales that mainly reflect subjective opinions and qualitative judgments, or what is sometimes called “preferences” in the risk analysis literature.  The problems of such scoring methods have been identified by Hubbard [2] as one of the main reasons for the failure of risk analysis:

“Since risk analysis cannot be just the application of preferences independent of real-world measurements, the application to risk analysis of methods meant specifically for assessment of preferences can be in error.  Yet, some users have applied such methods as the primary method for risk analysis itself.  Whatever the value of the methods may be, the most important thing to keep in mind is that risk analysis must in some verifiable sense be a forecasting or predictive method.  This does not mean that each event must be predicted exactly, but it does mean that, on the average, the results of risk analysis must outperform unaided human judges at predicting the likelihood of various events.”

For Analytic QC Plans in a medical laboratory, risk analysis should help us predict the number of harmful or defective test results that might be produced by our analytic methods.  To do this, the risk model should consider probability of occurrence (OCC) as a defect rate.  Severity of harm becomes a multiplier that converts the probability of occurrence to a probability for harmful test results (OCC*SEV).   For no harm, SEV is 0.0; for serious harm, SEV is 1.0. Some judgment is required, but when in doubt, safety concerns should guide us to err on the side of high severity.  The potential harm of those test results can be mitigated by the QC procedure’s probability of error detection (Ped), which becomes another multiplier in the risk calculation (OCC*SEV*DET, where DET is 1-Ped and a perfect QC detector would have a Ped of 1.0).   Therefore, effective control mechanisms could identify medically important error conditions, trigger corrective actions, and prevent those errors from reaching the patient.  The end result of the risk calculation is the residual defect rate, which can be translated into the expected number of potentially harmful test results by accounting for the laboratory workload, i.e., multiplying by the number of tests requested or performed.
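A minimal sketch of that calculation is shown below.  The input values are hypothetical, for illustration only; in practice, OCC, SEV, and Ped would come from the laboratory’s own performance data and QC design.

    # Sketch of the residual-risk calculation described above:
    #   residual defect rate = OCC * SEV * DET,  where DET = 1 - Ped
    # All input values are hypothetical, for illustration only.

    occ = 0.001          # probability of occurrence: defect rate of the process
    sev = 1.0            # severity multiplier: 1.0 = serious harm, 0.0 = no harm
    ped = 0.90           # probability the QC procedure detects the error condition
    workload = 100000    # number of patient results reported per year

    det = 1.0 - ped                         # probability the error goes undetected
    residual_defect_rate = occ * sev * det
    expected_harmful_results = residual_defect_rate * workload

    print(residual_defect_rate)             # ~1e-4, i.e., about 100 defects per million
    print(expected_harmful_results)         # ~10 potentially harmful results per year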

With this approach, the evaluation of acceptable risk is straightforward.  Ideally, there should be no harmful test results, i.e., zero.  Any number above zero represents patients who may be affected by the poor quality of a laboratory test.  The measure of risk is now more quantitative,  more relevant to patient safety, and more understandable in the laboratory.  The decision on the acceptability of residual risks can now become more objective.  How many potentially harmful test results is the laboratory willing to report?  Six Sigma provides some guidance in relation to defects per million, but this decision must ultimately be made by the laboratory, in accordance with its own goals for quality.

In managing the analytical quality of a testing process, the ultimate issues are: (1) how good must a test be to be useful for its intended medical application (i.e., what is the tolerance interval or quality requirement) and (2) what number of defective or potentially harmful test results are acceptable to the laboratory, or what is the acceptable residual risk.  The answers are not simple, but these are the fundamental questions that must be dealt with in the application of risk analysis in analytical quality management.

Need for uniform standards for quality in laboratory testing

Finally, we must recognize that the motivating force for employing risk analysis to change QC practices is what the financial industry has called “regulatory arbitrage.”  This has been described by Nouriel Roubini and Stephen Mihm [3] as the “purposeful movement of financial activity from more regulated to less regulated venues.”

“In the wake of the [financial] crisis, the consensus holds that these nonbank financial institutions – the shadow banks – have to be regulated like ordinary banks…   A selective application of regulations would be a profound mistake: it would simply open the door to more regulatory arbitrage…  For this reason, any new regulations must be applied across the board to all institutions…”

The analogy in healthcare is the shadow testing that goes on in the form of Point-of-Care and “waived” applications.  Both manufacturers and medical laboratories are interested in performing such testing in a less regulated environment, even though that may not be in the best interest of our patients.  If there are different standards when tests are performed in different locations, the testing will migrate towards the areas of lower standards.  If different regulators or accrediting organizations have different standards, medical laboratories will migrate towards the accreditor with the lower standards, as described by Roubini and Mihm [3]:

“There is considerable evidence that U.S. commercial banks, for example, change regulatory jurisdictions to take on more risk.  This should come as no surprise: banks are looking to maximize returns, and they would have no reason to voluntarily submit to rules that put them at a competitive disadvantage.  As a consequence, there’s a ‘race to the bottom,’ as banks and other financial firms search for the regulator that will regulate them the least…  So, regulators have every incentive to be lenient in order to attract more financial institutions into their regulatory nets.  Here too we have a race to the bottom.  Such is the paradox of ‘regulatory shopping.’”

Such is the paradox of risk analysis and the shopping for the lowest level of QC that is in compliance with regulatory and accreditation requirements!  Risk analysis, when properly applied, should reduce risks and improve patient safety.  If casually applied, without adequate study and understanding, without proper regulatory or accreditation guidance, or without serious intent to perform the necessary QC, risk analysis will likely lead to higher risks in the shadow areas of laboratory testing, where testing personnel have the lowest levels of training, experience, and competence, and where the regulatory requirements and accreditation standards are the lowest.

To learn a quantitative approach for Risk Analysis and Management, check out the new book Six Sigma Risk Analysis.

References:

  1. Cassidy J. How Markets Fail. New York: Farrar, Straus and Giroux, 2009.
  2. Hubbard DW. The Failure of Risk Management: Why It’s Broken and How to Fix It. Hoboken, NJ: John Wiley & Sons, 2009.
  3. Roubini N, Mihm S. Crisis Economics. New York: The Penguin Press, 2010.
  4. ISO 15189. Medical laboratories – Particular requirements for quality and competence. Geneva: ISO, 2007.