Tools, Technologies and Training for Healthcare Laboratories

Total Allowable Error and the Brain to Brain Loop

Total Error. Total Analytical Error. Total Allowable Error. A number of terms about errors are in use, and a recent editorial from Clinical Chemistry and Laboratory Medicine appears to have misconstrued their meanings. Another popular concept right now is the "Brain-to-Brain" loop. How does Total Analytical Error relate to the Brain-to-Brain Loop? Are the two approaches in conflict or complementary?

Total Analytical Error and the Brain-to-Brain Loop

By James O. Westgard, PhD
December 2011

In addressing some of the current issues in laboratory quality management, it seems that I’m caught up in history, or mired in the past.  I recently reviewed the history of laboratory QC in order to describe the evolution of QC practices and what might be expected for “QC for the Future”.  Here’s another historical perspective for understanding quality today, particularly the concept of Total Analytic Error and the evolution of quality-planning models to consider the intended clinical use of a laboratory test.

In a recent editorial in Clinical Chemistry and Laboratory Medicine [1], it was urged that the total analytical error model be expanded to include sources of variation in the pre-analytic and post-analytic phases of the total testing process, as well as the pre-pre and post-post phases of the clinical testing process. The authors describe the patient testing process as a “brain-to-brain loop” that starts when the physician decides to order a test and ends when the physician acts on the test result. This loop is a 9-step process (following the model originally presented by Lundberg [2]) that includes ordering, collection, identification, transportation, separation or preparation, analysis, reporting, interpretation, and clinical action.  They argue that quality goals or requirements need to be defined in the context of this brain-to-brain model, rather than just focusing on analytic quality.  While that may be the ultimate goal, it is necessary and practical today to set narrower goals for individual steps in the process, such as the analysis step, or multiple steps, such as pre-analytic plus analysis steps.

Although the editorial discusses many challenging issues related to quality goals, including their derivation, format, and application, it’s important to respond to a few specific points that relate to  “total error.” The comments reveal a misunderstanding of the ideas in the original paper that defined the concept of Total Analytic Error, as well as later papers that expanded the Total Analytic Error model to encompass additional sources of errors, variability, and uncertainty and led to the formulation of both analytical and clinical quality-planning models.

  • “The definition and use of the term ‘total error’ refers only to the analytical phase, and should be better defined as ‘total analytical error’ to avoid any confusion and misinterpretation.” [1, p 1131]
  • Short response – it was originally defined as total analytic error.  Common usage has shortened the term to “total error.”
  • “The current concept of ‘total error’ (or ‘total allowable error’, as depicted by James Westgard [3]) is simply misleading.  In fact, it only refers to the analytical performance of laboratory testing, thus missing the final goal that is the improvement of patient outcomes because it overlooks all the remaining processes involved in the brain-to-brain loop.” [1, p 1132]
  • Short response – Total analytical error was and is intended to apply only to analytic error. Later quality-planning models have expanded the coverage. So there are models that address the additional steps in the loop. Either the editorial has missed these publications or is purposely ignoring them.
  • “If more than 50% of urgent laboratory tests were never consulted in hospital settings, and if the extent of failure to follow-up diagnostic tests in ED ranged from 1.0% to 75%, would clinical laboratories really need to perform costly and time-consuming techniques for setting and monitoring highly stringent analytical quality specifications?” [1, p 1132]
  • Short response – if a test is ordered, the laboratory’s goal must be to provide a correct test result because we don’t know which 50% of the tests don’t matter.

If we follow the logic of the editorial, should we decide by the flip of a coin which test results deserve quality? Heads, we'll deliver quality. Tails, we'll assume the test result doesn't matter. But that can't really be what the editorial intends.

For more detailed explanations, please read on!

The Original Total Analytical Error Model

In 1974, we published a paper titled “Criteria for judging precision and accuracy in method development and evaluation” [4].  To my knowledge, that was the first paper in the clinical chemistry literature to introduce total analytical error as a way of estimating the analytical performance of a measurement procedure.  At that time, precision (imprecision) and accuracy (inaccuracy, bias) were considered separate sources of errors and were judged individually.  In purely analytical laboratories, that made sense because replicate measurements were made on almost all samples.  However, in medical laboratories, only a single measurement was being made on each specimen, thus the overall or total effect of precision and accuracy was not being considered.  It was in that context that the concept of total analytical error was introduced:

“To the analyst, precision means random analytic error.  Accuracy, on the other hand, is commonly thought to mean systematic analytic error.  Analysts sometimes find it useful to divide systematic error into constant and proportional components, hence, they may speak of constant error or proportional error.  None of this terminology is familiar to the physician who uses the test values, therefore, he is seldom able to communicate with the analyst in these terms.  The physician thinks rather in terms of the total analytic error, which includes both random and systematic components.  From his point of view, all types of analytic error are acceptable as long as the total analytic error is less than a specified amount.  This total analytic error…is more useful; after all, it makes little difference to the patient whether a laboratory value is in error because of random and systematic analytic error, and ultimately, [the patient] is the one who must live with the error.”

The total analytic error model was intended to estimate the combined effects of random and systematic errors and to expand the earlier error model that considered precision and bias separately.  The paper focused on evaluation of analytical performance and recommended that the judgment on acceptability be based on the sizes of the errors relative to a defined allowable total error.  A specific recommendation was made to estimate TE by combining the estimate of bias from a method comparison study with a multiple of the SD or CV from a replication study.  The original multiplier was 2, but later papers recommended this be increased to 4 [5], and ultimately, with adoption of Six Sigma concepts [6], a Method Decision Chart was recommended that includes 5s and 6s performance criteria [7].
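
The recommendation above can be made concrete with a small numeric sketch (the values below are illustrative, not drawn from the paper): total analytic error combines bias with a multiple of the SD, and the later Six Sigma view compares method performance against an allowable total error (TEa).

```python
def total_error(bias: float, sd: float, k: float = 4.0) -> float:
    """Estimate total analytical error as |bias| + k*SD.
    The original 1974 recommendation used k = 2; later papers
    recommended increasing the multiplier to k = 4."""
    return abs(bias) + k * sd

def sigma_metric(tea: float, bias: float, sd: float) -> float:
    """Sigma metric: SD units of margin between the observed bias
    and the allowable total error (TEa), all in the same units."""
    return (tea - abs(bias)) / sd

# Illustrative method: bias 1.0 units, SD 2.0 units, TEa 15.0 units
print(total_error(1.0, 2.0))         # 9.0
print(sigma_metric(15.0, 1.0, 2.0))  # 7.0 -> exceeds a 6-sigma criterion
```

On a Method Decision Chart, a point plotted at this bias and SD would fall in the region bounded by the 5s and 6s performance criteria or better.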

Expanded Total Analytical Error Quality-Planning Model

While the initial focus of the TE model was to evaluate the performance of a measurement procedure and judge its acceptability for routine use, the next step was to expand that model to support the selection and design of SQC procedures.  This meant adding parameters that represented the sensitivity (or uncertainty) of the particular control rules and number of control measurements being used during the routine operation of the measurement process.  The performance of QC procedures could be characterized by power curves that showed the probability of rejecting an analytic run in relation to the size of errors occurring in that run [8].  In effect, the performance of the QC procedure itself had to be integrated into the Total Error model to provide an analytical quality-planning model [9] that described the relationship between the precision and bias that were allowable and the QC that was necessary.  A graphical tool called the Chart of Operating Specifications [10] was developed to provide a “picture” of that relationship between a quality requirement and precision, bias, and QC.  The term “operating specifications” was introduced to represent the precision and bias allowable for a method and the control rules and number of control measurements required to verify the achievement of a defined quality requirement.
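
A minimal sketch of the quantity that links the quality requirement to the QC design is the critical systematic error: the size of shift (in SD units) that the SQC procedure must detect, here using the common formulation in which a maximum 5% defect rate corresponds to z = 1.65. The numbers are illustrative; the full OPSpecs chart also requires power-function data for the candidate control rules.

```python
def critical_systematic_error(tea: float, bias: float, sd: float,
                              z: float = 1.65) -> float:
    """Critical shift (in SD units) that QC must detect so that no more
    than ~5% of results exceed the allowable total error TEa."""
    return (tea - abs(bias)) / sd - z

# Illustrative method: TEa 10 units, bias 1 unit, SD 2 units
print(f"{critical_systematic_error(10.0, 1.0, 2.0):.2f}")  # 2.85
```

The larger this critical error, the easier it is for simple control rules with few control measurements to achieve a high probability of error detection.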

Expanded Clinical Decision Interval Quality-Planning Model

A further expansion was necessary to describe the relationship between a “clinical” type of quality requirement, the operating specifications for an analytical method, plus the pre-analytic and biologic components of variation [11].  The general form of this clinical quality-planning model can accommodate additional components of bias and variability.  As such, it can consider pre-analytic sampling variation and matrix biases, account for within-subject biologic variability, and address the effects of replicate analyses of a sample as well as serial analysis of multiple samples.  This is a more complicated model and practical applications require computer support to conveniently perform the necessary calculations and provide a graphical representation of the relationship between the many variables. For that reason, we developed a program [12] to support applications of both clinical and analytical quality-planning models.
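
The mechanism by which the model accommodates additional components is the standard rule that independent variance components add, so SDs combine as a root sum of squares. A minimal sketch with illustrative component values (not from the cited papers):

```python
import math

def combined_sd(*sds: float) -> float:
    """Combine independent SD components (same units) as the
    root sum of squares of the individual components."""
    return math.sqrt(sum(s * s for s in sds))

# Illustrative components: analytic SD 2.0, pre-analytic sampling SD 1.0,
# within-subject biologic SD 2.0 (all in the same units)
print(combined_sd(2.0, 1.0, 2.0))  # 3.0
```

Note how the combined SD (3.0) is dominated by the largest components, which is why within-subject biologic variability can matter as much as analytic precision in the clinical model.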

Our specific purpose was to derive specifications for precision, bias, and QC from a defined requirement for quality that is stated in the form of a clinical decision interval, i.e., the difference between two test results that would provide different clinical interpretations.  For example, the NCEP recommendations were that a test result of 200 mg/dL required no treatment, but a value of 240 mg/dL required clinical follow-up.  That difference of 40 mg/dL at a decision level of 200 mg/dL represents a 20% decision interval. Based on that requirement for the intended clinical use and taking into account certain pre-analytic variables, particularly within-subject biologic variability, this clinical quality-planning model allows the user to assess the precision and bias allowable for the method and the SQC rules and number of control measurements required to monitor routine performance.
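
The arithmetic of the NCEP example can be made explicit (a sketch of the decision-interval calculation only; the full model in ref [11] goes further and allocates this interval across analytic, pre-analytic, and biologic components):

```python
def decision_interval_pct(decision_level: float,
                          follow_up_level: float) -> float:
    """Clinical decision interval as a percentage of the decision level:
    the gap between two results with different clinical interpretations."""
    return 100.0 * (follow_up_level - decision_level) / decision_level

# NCEP cholesterol example from the text: no treatment at 200 mg/dL,
# clinical follow-up at 240 mg/dL
print(decision_interval_pct(200.0, 240.0))  # 20.0
```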

This clinical decision interval type of quality requirement clearly fits at the top of the hierarchy of quality specifications recommended by the Stockholm consensus conference [13]: “assessment of the effect of analytical performance on specific clinical decision making.” It represents a clinical decision criterion that describes the intended clinical use of a test in the form of a “gray zone” between test results that lead to different clinical interpretations and different clinical actions.  The associated quality-planning model shows how analytic performance characteristics interact with pre-analytic variables and biologic within-subject variability to affect the achievement of the desired clinical quality.  Finally, the inclusion of QC performance in the model allows the laboratory to satisfy the ISO 15189 requirement to “design quality control procedures that verify the attainment of the intended quality of results” [14].  In this context, the Clinical Decision Interval quality-planning model is both practical and necessary for laboratory quality management today!

Current Quality-Planning Models vs the Brain-to-Brain Model

The most important outcome of the Stockholm conference on strategies to set analytical quality specifications [13] was the agreement that there are different ways to define quality requirements.  While there is a recommended hierarchy that places clinical quality at the top, it was recognized and accepted that there are other types of quality criteria that are useful and sometimes necessary, depending on the evolution of a test and availability of  biologic data, clinical information, and regulatory and accreditation requirements.

The danger in declaring the “brain-to-brain” loop as the necessary perspective for defining quality goals is that little or no information is available about the requirements for pre-pre and post-post analytic quality.  Therefore, it is still necessary to keep the brain-to-brain loop open in order to define quality goals on the basis of the information that is currently available. Regulatory criteria still must be satisfied, even though they usually define only the allowable total analytical error. Nonetheless, such recommendations for allowable Total Analytical Error can be translated (by an analytical quality-planning model) into operating specifications for precision, bias, and QC to optimize the performance and reliability of laboratory test results.  For broader quality goals, clinical treatment guidelines can often be interpreted as decision interval criteria that can be translated into operating specifications by a related clinical quality-planning model.  Pre-analytic factors can be considered and it is especially important to account for within-subject biologic variability in such clinical quality-planning models.  These analytic and clinical quality-planning models can be expanded to include other sources of errors and variability found in additional steps in the brain-to-brain loop.

Thus, there exists a framework for building more comprehensive error models based on the concept of Total Analytic Error and documented expansions to form analytical and clinical quality-planning models.  The Total Error model is not misleading, but it is limited to considering the analytic step in the patient testing process.  The analytical and clinical quality-planning models are a natural and scientific expansion to include pre-analytic error components, as well as within-subject biologic variability.  Additional parameters can be added to further expand the reach of these models and to provide broader coverage of the brain-to-brain loop.

In developing more complete models of the brain-to-brain loop, it will be critical to define the intended purpose of each model.  For example, an outcome model that is intended to show how laboratory test results affect the cost of care would be quite different from a model whose purpose is to optimize the laboratory testing performance to improve the quality of care.  Furthermore, different dimensions of quality (turnaround time, optimal test selection, cost of care, correctness of test results) will require different models.  From the perspective of the medical laboratory whose purpose is to provide reliable analytical quality, current quality-planning models can facilitate improvements in analytical quality, even though they are focused on the analytic process and provide limited coverage of pre-analytic variables.

The Chicken vs the Egg – which comes first?

By implying that high quality test results aren’t needed because physicians often don’t review or follow up on laboratory tests, the CCLM editorial is intentionally provocative!  Even so, such a comment is unfortunate because, once in print, it can be taken out-of-context and misinterpreted.  For example, we already have a situation where laboratories believe that pre-analytic and post-analytic errors are more important than analytic errors because of their frequency of occurrence.  Yet Carraro and Plebani made clear in the same study that analytic errors are the biggest cause of patient mistreatment [15].

Consider which must come first – correct use of test results by physicians or production of correct test results by a laboratory?  We must recognize that the core competency of a medical laboratory is to provide correct test results.  If a laboratory cannot deliver correct test results, physicians have reason to ignore them and not to order laboratory tests.  If physicians do order laboratory tests, our goal must be to provide correct results!  That may indeed require complex and extensive quality systems, but we have many tools today that can help us provide cost-effective QC.  Cost-effectiveness has long been one of our goals in quality management, as described in our 1986 book that adopted and adapted principles of industrial quality management for application with laboratory testing processes [16].

Any suggestion that efforts to achieve high quality analytical test results are mis-directed because of shortcomings in defining quality goals misses the point that most laboratories today don’t apply quality goals in any form!  If we are to argue for expanding the role of the laboratory to improve evidence-based medicine, it must begin with evidence that we perform tests correctly and provide reliable testing services.  In the words of ISO 15189, “the laboratory shall design internal quality control systems that verify the attainment of the intended quality of results”. As Plebani stated in a 2007 article on laboratory medicine and patient safety, “analytical quality, in particular, is still a major concern and, first and foremost, clinical laboratories are expected to assure and continuously improve upon it" [17]. Our weakness today is the lack of application of goals for “intended quality of results” in any form, not the overuse of currently available guidelines, recommendations, and practices for defining quality goals!


  1. Plebani M, Lippi G.  Closing the brain-to-brain loop in laboratory testing.  Clin Chem Lab Med 2011;49:1131-3.
  2. Lundberg GD. Acting on significant laboratory results. J Am Med Assoc 1981;245:1762-3.
  3. Westgard JO. Managing quality vs. measuring uncertainty in the medical laboratory.  Clin Chem Lab Med 2010;48:31-40.
  4. Westgard JO, Carey RN, Wold S. Criteria for judging precision and accuracy in method development and evaluation.  Clin Chem 1974;20:825-33.
  5. Westgard JO, Burnett RW.  Precision requirements for cost-effective operation of analytical processes.  Clin Chem 1990;36:1629-32.
  6. Westgard JO. Six Sigma Quality Design and Control: Desirable precision and requisite QC for laboratory measurement processes. Madison, WI: Westgard QC, 2001.
  7. Westgard JO. Basic Method Validation, 3rd ed.  Madison, WI:Westgard QC, 2008.
  8. Westgard JO, Groth T. Power function graphs for statistical control rules. Clin Chem 1979;25:863-9.
  9. Westgard JO. Assuring analytical quality through process planning and quality control.  Arch Pathol Lab Med 1992;116:765-9.
  10. Westgard JO. Charts of Operational Process Specifications (‘OPSpecs Charts’) for assessing the precision, accuracy, and quality control needed to satisfy proficiency testing performance criteria.  Clin Chem 1992;38:1226-33.
  11. Westgard JO, Hyltoft-Petersen P, Wiebe DA.  Laboratory process specifications for assuring quality in the U.S. National Cholesterol Education Program. Clin Chem 1991;37:656-61.
  12. Westgard JO, Stein B, Westgard SA, Kennedy R. QC Validator 2.0: a computer program for automatic selection of statistical QC procedures for applications in healthcare laboratories. Comput Methods Programs Biomed 1997;53:175-86.
  13. Hyltoft Petersen P, Fraser CG, Kallner A, Kenny D, eds. Strategies to set global analytical quality specifications in laboratory medicine.  Scand J Clin Lab Invest 1999;59:475-585.
  14. ISO 15189:2007.  Medical laboratories – particular requirements for quality and competence.
  15. Carraro P, Plebani M. Errors in a stat laboratory: Changes in type and frequency since 1996. Clin Chem 2007;53:1338-42.
  16. Westgard JO, Barry PL.  Cost-Effective Quality Control: Managing the quality and productivity of analytical processes.  Washington DC:AACC Press, 1986.
  17. Plebani M. Errors in laboratory medicine and patient safety: the road ahead. Clin Chem Lab Med 2007;45:700-7.