
Failure Modes of Risk Assessment

What are the Risks of Risk Management? In this essay, we discuss known weaknesses (failure modes) of the Risk Assessment process, as outlined in recent (2009-2010) reports from the International Risk Governance Council, and relate those failure modes to the medical laboratory.


November 2010
Sten Westgard, MS

  1. Missing, ignoring or (conversely) exaggerating the warning signs of emerging Risk
  2. Lack of adequate knowledge about hazards, probabilities of events, and associated consequences
  3. Inaccurate perceptions of Risk, determinants of Risk, and consequences of Risk
  4. Failure to identify, consult or involve stakeholders in Risk Assessment
  5. Failure to gauge the acceptability of Risk to stakeholders
  6. Failure to correctly present Risk information to stakeholders, whether through biased or selective data
  7. Failure to understand the consequences of complexity
  8. Failure to recognize a fundamental change in the process
  9. Failure of models to account for all aspects of reality
  10. Failure to imagine and plan for surprises and unusual possibilities

Risk Management, while new to US laboratories, is not new elsewhere. Outside healthcare, Risk Management has a long history. While the use of Risk Management in fields like finance and disaster preparation has been marked by recent failures, in other areas, the track record is better.

But Risk Management isn't foolproof. It's been around long enough that its weaknesses have become evident. It shouldn't come as a surprise that Risk Management isn't a silver bullet against all adversity. The recent past has made it abundantly clear, too, that it is possible to perform Risk Management badly - and expose yourself to more risk even as you believe you are reducing that risk.

In recent years, a systematic study of these weak points was completed. The International Risk Governance Council (IRGC http://www.irgc.org) compiled several reports which are freely available. The original report (92 pages), titled Risk Governance Deficits: An analysis and illustration of the most common deficits in risk governance, came out in December 2009. A more recent follow-up report (24 pages), titled Risk Governance Deficits: Analysis, illustration and recommendations, tried to both summarize the original report and make policy suggestions on how to reduce and mitigate the risks associated with Risk Management.

Both reports are worth reading in their entirety, particularly for their wide range of examples of Risk Management failures - the US subprime crisis, the US financial meltdown, the dot-com bubble, Hurricane Katrina, EMF dangers, genetically modified foods, the health implications of tobacco products, etc.

Our attempt here is to summarize these different failure modes and relate them as directly as possible to issues relevant to medical laboratories.

The "Top Ten" Failure Modes of Risk Analysis:

1. Missing, ignoring or (conversely) exaggerating the warning signs of emerging risk.

Do we have an early warning system? Is it able to detect emerging threats? Are we able to determine that these emerging problems are significant? Is there any detection system at all?

The parallel to the medical laboratory is easy: our quality control procedures.  For waived tests and devices, robust QC procedures may not be required by the manufacturer or laboratory regulations.  For non-waived devices, QC procedures are mandated, but we may have the problem of either false negatives (i.e. our error detection capability is so low that we miss errors that occur) or false positives ("false rejects" that occur when there is actually no problem at all).
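
To make that trade-off concrete, here is a minimal sketch (assuming Gaussian control results and simple 1-2s or 1-3s rejection limits; the control counts are illustrative) of how the per-run false rejection rate grows as more controls are run:

```python
# Sketch: per-run false rejection probability for simple QC rules,
# assuming Gaussian control results and independent controls.
# The rule limits (2s, 3s) and control counts are illustrative.
import math

def p_outside(z_limit: float) -> float:
    """Probability that one in-control result falls outside +/- z_limit SD."""
    return math.erfc(z_limit / math.sqrt(2.0))

def false_reject_rate(z_limit: float, n_controls: int) -> float:
    """Probability that at least one of n controls exceeds the limit
    when nothing is actually wrong (a 'false reject')."""
    return 1.0 - (1.0 - p_outside(z_limit)) ** n_controls

for n in (1, 2, 4):
    print(f"N={n}:  2s limits -> {false_reject_rate(2.0, n):5.1%}   "
          f"3s limits -> {false_reject_rate(3.0, n):5.2%}")
# N=1:  2s limits ->  4.6%   3s limits -> 0.27%
# N=2:  2s limits ->  8.9%   3s limits -> 0.54%
# N=4:  2s limits -> 17.0%   3s limits -> 1.08%
```

The same kind of calculation, turned around, estimates error detection - which is where the false-negative problem shows up.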

2. Lack of adequate knowledge about hazards, probabilities of events, and associated consequences.

Do we know what might go wrong with the process? Do we know how often a particular hazard might occur? Do we know what might happen if a particular hazard did occur?

It's not surprising to learn that we don't always know everything about our processes.  In the laboratory there are gaps in our knowledge: we don't know exactly how our analysts operate (we might think we do, but they don't always follow the procedure manual, do they?), we frequently don't know what might go wrong with our methods and instrumentation, and we often have little idea of how our test results are used by clinicians. All of these blind spots mean that when we perform Risk Management, we might not identify all the possible risks, or correctly judge the dangers of some risks.

The IRGC report notes that scientific fields are particularly susceptible to this risk:

"[I]nadequate knowledge can also result when well-funded scientists cling to outmoded theories, apply the wrong or one-sided methods when investigating a new risk or fail to investigate.... Additionally, scientists or decision-makers may simply fail to ask the important questions, or they may even ask the wrong questions." [original report, p15]

3. Inaccurate perceptions of risk, determinants of risk and consequences of risk.

Do we accurately understand the risk?  Do our customers (patients and clinicians) accurately understand the risk?

Even if we know about a risk and can agree on its relative importance, there is probably a difference between our risk perception and that of other parties. Indeed, it might be that the public risk perception is not in agreement with the factual evidence - clinicians may think it's perfectly safe to perform tight glycemic control on the basis of test results from waived blood glucose meters, for example, even though the evidence-based literature is built on results from more accurate analytical systems.

Obviously, if we have different perceptions of the risk, we will take different actions (or inactions) to mitigate the risk.

These first three Risks of Risk Assessment are all related to the gathering and interpretation of knowledge. And while it's easy to point out deficits in our ability to identify, collect and understand risks, it's also important to admit that perfect knowledge is not possible:

"[T]here will never be sufficient capacity to assess all the information relevant to a systemic risk. " [p.11]

4. Failure to identify, consult or involve stakeholders in Risk Assessment.

Do we know all the people and organizations that are involved upstream and downstream of this risk? Have we solicited their expertise and opinions?

A process might take place within the laboratory, but it has more stakeholders involved with it than just the laboratory technicians and the laboratory manager. Whoever designed and delivered the device to the laboratory, as well as those people who are impacted by the results, should be involved in the Risk Assessment process. Trapped in your laboratory silo, you may not be able to objectively assess every hazard, but the outside views of customers and vendors may help.

Upstream, it's important for laboratories to ask for - no, demand - risk information from their vendors. Labs should be able to ask for and receive a document of Risk Information for every diagnostic device they purchase. While CLSI considered making this a standard, it now looks like this will be left up to the market. That is, the burden is on the laboratory to ask the manufacturer for this information (manufacturers seem to have adopted the attitude of "Don't Tell, unless Asked").

Downstream, point-of-care testing often involves nurses, doctors, and other clinicians. While those clinicians may not have as much knowledge about testing processes as laboratory professionals, their viewpoints on the impacts of a process risk are vital. Their actual use of test results may not match your perception.

5. Failure to gauge the acceptability of Risk to stakeholders.

Do we know the "Risk appetite" of our vendors and customers? Do we understand their "Risk aversion" or "Risk Attitude"? Does our "Risk Attitude" match those of our other stakeholders?

Whether or not we consider a risk to be acceptable depends on our willingness to tolerate a poor outcome. 'First, do no harm' describes an attitude of Risk Aversion that is supposed to guide medical decisions. However, patients are usually willing to try therapies that have a small chance of failure – and the follow-up question is, what is considered a small chance of failure? 10%? 1%? 0.1%? Risk Attitude tends to vary from person to person - and from situation to situation.

Patients certainly have a unique perspective on how much Risk they are willing to tolerate in their care. For many clinicians, "occasional" errors, false negatives, and false positives are part of the job. They are unpleasant, but unavoidable. For patients, however, the trauma of an incorrect test result and/or diagnosis is major - and we would do well to incorporate that into our Risk Assessments.

6. Failure to correctly present Risk information to stakeholders, whether through biased or selective data.

Do we accurately explain the magnitude of the risk to stakeholders? Or are our assessments corrupted by biases, a selective presentation of data, or conscious or unconscious omissions? Are we even aware of how accurately - or inaccurately - we estimate the risk?

When we talk about the risks of our processes to certain audiences, do we understate or overstate the risk? It's a frequent problem that the laboratory and the clinicians speak different languages to each other. Labs often hedge the certainty of a test result by invoking confidence intervals, while clinicians tend to accept the latest test result as a "true value."
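
As a small illustration of those two languages, the sketch below (using a hypothetical glucose result and a hypothetical 3% analytical CV) shows the interval a laboratory might attach to the single number a clinician reads as the "true value":

```python
# Sketch: the interval a laboratory might attach to a single result,
# versus the single number a clinician reads. Values are hypothetical.
result = 100.0      # reported glucose, mg/dL (hypothetical)
cv = 0.03           # analytical CV of 3% (hypothetical)

sd = result * cv
low, high = result - 1.96 * sd, result + 1.96 * sd
print(f"Reported: {result:.0f} mg/dL; ~95% analytical interval: "
      f"{low:.0f} - {high:.0f} mg/dL")
# Reported: 100 mg/dL; ~95% analytical interval: 94 - 106 mg/dL
```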

When laboratories talk with manufacturers, we are often hard-pressed to separate marketing gloss from performance data. Manufacturers have many incentives to minimize the risks inherent in their instruments. Laboratories have the imperative to discover and demand as much risk information as possible from their vendors.

7. Failure to understand the consequences of complexity.

Are we able to appreciate the interconnected systems and processes - and the multiple dimensions of risk that they may produce? Does the complexity of our system produce tight coupling, and possibly different or heightened risks?

Increasingly, the laboratory is highly automated. Not only have the boxes gotten bigger, the number of steps included in the box has increased. "Total automation" systems involve pre-analytical, analytical and post-analytical processes. Unforeseeable problems in any step of these systems can magnify along the automation path. Again, the laboratory needs to get a great deal of risk information from the vendor or manufacturer. However, that is no guarantee that all bases have been covered - the manufacturers themselves probably haven't figured out all the possible failure modes that could happen, particularly when multiple processes interact.
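
A rough sketch of why complexity matters: even small, seemingly acceptable per-step failure rates accumulate across a serial automation path. The step names and rates below are hypothetical:

```python
# Sketch: how small per-step failure probabilities compound along a
# serial "total automation" path. Step names and rates are hypothetical.
steps = {
    "pre-analytical (sorting, centrifugation)": 0.002,
    "analytical (measurement)":                 0.001,
    "post-analytical (result transmission)":    0.001,
}

p_ok = 1.0
for name, p_fail in steps.items():
    p_ok *= (1.0 - p_fail)   # probability every step succeeds

print(f"Chance at least one step fails per specimen: {1.0 - p_ok:.2%}")
# Chance at least one step fails per specimen: 0.40%
```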

On the clinical side, laboratories often don't know the extent of the clinical pathways in use. A small change in a test result may in fact trigger a whole host of therapies and diagnoses - just think what would happen if your HbA1c result crossed the 6.5% threshold. Sometimes small amounts of error that seem acceptable from the analytical laboratory perspective are much more significant when viewed from the clinician perspective (rightly or wrongly).
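
A minimal sketch of that threshold effect, using a hypothetical true value and a hypothetical method bias against the 6.5% cutoff mentioned above:

```python
# Sketch: a small, "analytically acceptable" bias pushing a result across
# the 6.5% HbA1c decision threshold. Bias and true value are hypothetical.
DIAGNOSTIC_CUTOFF = 6.5   # % HbA1c, the decision threshold cited in the text

true_value = 6.4          # patient's true HbA1c (%), hypothetical
method_bias = 0.2         # systematic bias of the method (% HbA1c), hypothetical

reported = true_value + method_bias
print(f"True {true_value}% -> reported {reported:.1f}% "
      f"({'above' if reported >= DIAGNOSTIC_CUTOFF else 'below'} the cutoff)")
# True 6.4% -> reported 6.6% (above the cutoff)
```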

8. Failure to recognize a fundamental change in the process.

When a process fails, are we able to detect it? How big does the error have to get before we will notice it, understand that it really is a failure, and react to it?

In the laboratory, we have the perennial problem of 2s false rejections and repeated controls. In many labs, trust in the quality control process has degraded over the years, to the point that technicians will repeat controls multiple times in order to get results back "in." This degradation of trust in QC means that real errors will persist in the system longer before the laboratory recognizes them. In labs that redefine the ranges, widening limits so that fewer "false rejects" occur, errors can not only persist, they can get very large before QC notices.
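
The trade-off behind widened limits can be sketched with the same Gaussian assumptions used for QC design; the single control per run and the 2 SD shift used here are illustrative:

```python
# Sketch: the trade-off when control limits are widened from 2s to 3s.
# Assumes Gaussian control results and one control per run; the size
# of the systematic shift (2 SD) is illustrative.
import math

def phi_upper(z: float) -> float:
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def p_reject(limit_sd: float, shift_sd: float) -> float:
    """Probability one control falls outside +/- limit_sd when the
    method has shifted by shift_sd standard deviations."""
    return phi_upper(limit_sd - shift_sd) + phi_upper(limit_sd + shift_sd)

for limit in (2.0, 3.0):
    fr = p_reject(limit, 0.0)    # false rejection (no real error)
    det = p_reject(limit, 2.0)   # detection of a 2 SD systematic shift
    print(f"{limit:.0f}s limits: false reject {fr:.2%}, "
          f"detect 2 SD shift {det:.1%} per control")
# 2s limits: false reject 4.55%, detect 2 SD shift 50.0% per control
# 3s limits: false reject 0.27%, detect 2 SD shift 15.9% per control
```

Fewer false rejects, yes - but also a much smaller chance of noticing a real shift on any given run.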

Clinicians may not understand that different methods and different instruments produce different results. They would like to read all test results on the same chart, but in the absence of standardization, differences in results may be due solely to methodological biases.

9. Failure of models to account for all aspects of reality.

Do our models provide us sufficient power to approximate what happens in the "real world"? Or do our models over-simplify reality and give us an unjustifiably optimistic image of a risk?

Most manufacturing and laboratory processes are based on models. Quality Control depends on the assumption of a normal (Gaussian) distribution of results. Total Error, QC Design, and Sigma-metrics depend upon a model of analytical error that incorporates bias and imprecision. Clinicians rely on models of disease progression, which, they freely admit, vary by individual.
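
For readers who want the models spelled out, here is a minimal sketch of the total error and Sigma-metric calculations, using hypothetical performance figures and one common convention (a 1.65 multiplier on imprecision):

```python
# Sketch: two of the simple analytical models mentioned above, using
# hypothetical performance figures (all expressed in percent).
tea  = 6.0    # allowable total error (TEa), %
bias = 1.0    # observed bias, %
cv   = 2.0    # observed imprecision (CV), %

# Total error estimate (one common convention uses a 1.65 multiplier)
total_error = abs(bias) + 1.65 * cv

# Sigma-metric: how many SDs of "room" remain inside the allowable error
sigma = (tea - abs(bias)) / cv

print(f"Estimated total error: {total_error:.2f}%  (allowable: {tea}%)")
print(f"Sigma-metric: {sigma:.1f}")
# Estimated total error: 4.30%  (allowable: 6.0%)
# Sigma-metric: 2.5
```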

Again, with the complexity and automation and interaction of different processes, it's not possible to account for all possible variables. Inevitably, our models make simplifications. If we become overly reliant on the models - or worse still, forget that our models are not all-encompassing - we expose ourselves to more risk.

Not only are the models we use to build, operate and interpret our processes at risk of error; the models of Risk Management itself (acceptability matrices, RPNs, FMEA) are vulnerable to over-simplification as well. For example, industrial practices utilize a 3-factor risk model that includes probability of occurrence (OCC), severity of harm (SEV), and detection (DET), whereas in the ISO and CLSI guidelines, detection drops out of the model and the focus shifts to prevention of errors in the patient testing process.
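
A small sketch of the difference between the two models, using hypothetical failure modes and scores, shows how dropping detection can re-order priorities:

```python
# Sketch: the 3-factor FMEA risk priority number (RPN = SEV x OCC x DET)
# versus a 2-factor criticality (SEV x OCC) of the kind used in risk
# acceptability matrices. Failure modes and scores are hypothetical.
failure_modes = [
    # (name, severity 1-10, occurrence 1-10, detectability 1-10; 10 = hardest to detect)
    ("short sample not detected",    7, 4, 8),
    ("reagent lot shift",            5, 6, 3),
    ("result sent to wrong patient", 9, 2, 9),
]

for name, sev, occ, det in failure_modes:
    rpn = sev * occ * det       # 3-factor model (industrial FMEA)
    criticality = sev * occ     # 2-factor model (detection dropped)
    print(f"{name:32s} RPN={rpn:4d}  SEVxOCC={criticality:3d}")
# A hazard that is hard to detect (high DET) loses its extra weight in
# the 2-factor ranking, so the priority order can change.
```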

10. Failure to imagine and plan for surprises and unusual possibilities.

Are we expecting only the expected? Preparing for risks only by thinking "in the box"? Are we vulnerable to "unknown unknowns" and "Black Swan" events?

"Past experience has taught us to expect surprises. No one can reliably predict the future. No matter how good an early warning system is, or how thoroughly risk assessments are conducted, it is important to acknowledge that risk assessment relies on decisions about what, conceivably, could go wrong. In setting the boundaries for the formal risk assessment process, decision-makers need to remain conscious of the fact that surprises or events outside expected paradigms (so called "Black Swans") are always possible and that it is necessary to break through embedded cognitive barriers in order to imagine events outside the boundaries of accepted paradigms." [policy brief, 10]

Not only do our models simplify the reality of what we know, they probably don't address extremely rare events, strange confluences of events that are hard to imagine. The "100-year storms," those surprises that are supposed to be extremely rare, seem to be happening more frequently than we expected (see the Wall Street subprime bubble, Hurricane Katrina, the implosion of Long Term Capital Management, etc.).
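
A back-of-the-envelope sketch shows why "rare" events keep showing up: even a genuinely 1-in-100-per-year event has a substantial chance of occurring at least once over a few decades (the 30-year horizon below is an arbitrary illustration):

```python
# Sketch: even a genuinely "1-in-100-years" event is not that rare over a
# working lifetime. The 30-year horizon is an arbitrary illustration.
annual_probability = 1.0 / 100.0
years = 30

p_at_least_once = 1.0 - (1.0 - annual_probability) ** years
print(f"Chance of at least one '100-year' event in {years} years: "
      f"{p_at_least_once:.0%}")
# Chance of at least one '100-year' event in 30 years: 26%
```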

Is Risk Assessment Too Risky?

The IRGC does not intend to scare people and organizations away from performing Risk Assessment, nor is that our intention here. But these days, if you will allow a bit of hyperbole for a moment, the current fad for Risk Management treats it as a panacea that will transform our unreliable processes into reliable ones. The reports suggest that Risk Management has its limits - it can't make everything better. Knowing these limitations will help organizations perform better Risk Management, as well as alert them to dangerous processes that really can't be controlled.