Tools, Technologies and Training for Healthcare Laboratories

An Architect c16000

Our first look at Abbott's newest, biggest chemistry instrument, using a method validation study that was shown at the 2007 AACC conference.

July 2007

[Note: This QC application is an extension of the lesson From Method Validation to Six Sigma: Translating Method Performance Claims into Sigma Metrics. This article assumes that you have read that lesson first, and that you are also familiar with the concepts of QC Design, Method Validation, and Six Sigma. If you aren't, follow the link provided.]

The AACC/ASCLS annual meeting is always a good opportunity to see posters with data on instrument performance. While most of these instrument evaluations will never grace the pages of a journal, they provide critical information for laboratory users.

In this example, we're going to take a look at a method validation study performed for the Abbott Architect c16000 Clinical Chemistry System (D-50: C. Kasal, J. Lucio, P. Bathory, K. Mucker, S. Hall, D. Bozimowski, Evaluation of General Chemistry Assays on the Abbott Architect c16000 Clinical Chemistry System).

The CVs, Bias and Sigma metrics

According to the abstract, this evaluation study adhered closely to CLSI (formerly NCCLS) protocols, calculating total precision for two levels instead of the more common (and more attractively lower) within-run precision. Also, they provided specific Bias figures at medical decision levels of interest. All that we need to do is supply the quality requirements and calculate the Sigma metrics.

| Assay               | Worst-case CV% | Bias% | TEa   | Sigma metric |
|---------------------|----------------|-------|-------|--------------|
| Albumin BGG         | 0.6%           | 0.7%  | 10%   | 15.5         |
| Albumin BCP         | 0.4%           | 0.3%  | 10%   | 24.25        |
| Bilirubin (total)   | 1.6%           | 0.9%  | 20%   | 11.94        |
| Calcium             | 0.8%           | 1.4%  | 9.0%* | 10.13        |
| Cholesterol         | 1.0%           | 0.9%  | 10%   | 9.1          |
| Creatinine          | 2.3%           | 1.6%  | 15%   | 5.83         |
| Glucose             | 1.2%           | 2.9%  | 10%   | 5.92         |
| Total Protein       | 0.5%           | 2.6%  | 10%   | 14.8         |
| Triglyceride        | 1.7%           | 0.8%  | 25%   | 14.23        |
| Urea Nitrogen (BUN) | 1.9%           | 2.9%  | 9%    | 3.21         |

*Calcium, as we've noted before, is a special case for the CLIA quality requirement. CLIA requires calcium testing to meet a +/- 1.0 mg/dL allowance at every level. In this case, the requirement was evaluated at a decision level of 11 mg/dL, which works out to a quality requirement of approximately 9%.
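For readers who want to check the arithmetic, here is a minimal Python sketch (our own, not part of the poster) that reproduces the Sigma metrics in the table from the formula Sigma = (TEa - bias) / CV. Calcium is omitted because its CLIA allowance is concentration-dependent, as the footnote explains.

```python
# Sketch: recompute the Sigma metrics in the table above.
# Sigma = (TEa - bias) / CV, with all terms expressed in percent.
# Values are simply those quoted from the poster data.

assays = {
    # assay: (worst-case CV%, bias%, TEa%)
    "Albumin BGG":         (0.6, 0.7, 10.0),
    "Albumin BCP":         (0.4, 0.3, 10.0),
    "Bilirubin (total)":   (1.6, 0.9, 20.0),
    "Cholesterol":         (1.0, 0.9, 10.0),
    "Creatinine":          (2.3, 1.6, 15.0),
    "Glucose":             (1.2, 2.9, 10.0),
    "Total Protein":       (0.5, 2.6, 10.0),
    "Triglyceride":        (1.7, 0.8, 25.0),
    "Urea Nitrogen (BUN)": (1.9, 2.9,  9.0),
    # Calcium omitted: its +/- 1.0 mg/dL requirement depends on the decision level
}

for assay, (cv, bias, tea) in assays.items():
    sigma = (tea - bias) / cv
    print(f"{assay:22s} Sigma = {sigma:5.2f}")
```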

Not much needs to be said here. Most of the methods are world class. Creatinine and Glucose fall just short of Six Sigma, and only Urea Nitrogen, at 3.21, falls well below it.

Here's a graphic depiction of these analytes, normalized so they can be presented together on a Normalized OPSpecs chart:

As you can see, there are a lot of operating points on "dry ground." For many of these analytes, there is plenty of wiggle room for variation while still maintaining world class performance.

Note also that the Normalized OPSpecs chart displays some unusual rules (see the key at right). For seven of the analytes on the c16000, a single control with control limits set at three times the standard deviation would provide more than sufficient detection of medically important errors. This is one of those (somewhat rare) areas where the CLIA minimums are over-controlling the method. Since CLIA requires at least 2 controls per run, the other solutions would include control limits set at 3.5 or 4 times the standard deviation.

All of these QC procedures would essentially eliminate false rejection problems. If you set your limits that wide, you will only get a flag when there is a real problem. But remember, once you get that flag, you must do something about it (trouble-shoot the method), not just repeat the control.
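As a rough illustration of why such wide limits all but eliminate false rejections, here is a sketch of the expected false-rejection rate for a single k-SD limit rule with N controls per run, assuming independent, Gaussian, in-control results. This is a simplification of our own, not the full power-function calculations behind the OPSpecs chart:

```python
from math import erf, sqrt

def false_rejection_rate(k, n):
    """Probability that at least one of n independent, in-control
    control results falls outside +/- k SD (Gaussian assumption)."""
    p_within = erf(k / sqrt(2))      # P(|z| <= k) for a single control
    return 1 - p_within ** n

for k in (3.0, 3.5, 4.0):
    for n in (1, 2):
        print(f"{k}s limits, N={n}: Pfr ~ {false_rejection_rate(k, n):.3%}")
```

Even with two controls, 3s limits give a false-rejection rate of roughly half a percent per run, and 3.5s or 4s limits push it well below that.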

How do you balance the good with the bad?

So on an instrument with a majority of great methods, how do you handle the "outlier" method? Here's where a more detailed analysis of the method performance can help.

Using EZ Rules 3 (in form mode), we entered the parameters for Urea Nitrogen at both the High (1.7% CV) and Low (1.9% CV) control levels. When Automatic QC Selection was initiated, the same recommendation appeared for both: a "Westgard Rules" 13s/22s/R4s/41s multirule procedure. But note two additional details about this recommendation:

1. It requires four control measurements per run, not two.

2. It provides only 50% Analytical Quality Assurance (AQA), instead of the preferred 90% AQA. All the other methods would provide Six Sigma quality at the 90% AQA level.

For the Low Control:

For the High Control:

The Sigma-metrics graph reveals the specific error detection of each candidate QC procedure. If we add some of the other rules we are considering into the mix, we can see the implications of applying the QC procedures chosen for the world class methods to this less-than-world-class method:

If you look at the Ped (probability of error detection) column in the key, you'll note that the 14s, 13.5s and 13s QC procedures provide very low error detection for Urea Nitrogen. If there were a problem with the method, the Average Run Length (ARL) to detect that problem would be very high: for the 14s QC procedure, it would take an average of 100 runs to detect the problem.

With the "Westgard Rules" the Ped is actually at 0.49, which would give you an ARL of approximately 2 runs. If a problem occurred, you would detect it on average on the second run. Perhaps that would be acceptable, but the problem still remains that this QC procedure requires 4 controls, not 2. How could we get that number down?

What if we could double the number of samples? That is, run 2 channels of Urea Nitrogen and get the benefit of replicate analyses. Here is the outcome of that scenario:

Note that at the bottom left of the screen, the Replicate samples analyzed entry has been changed from 1 to 2. This allows you to determine the proper QC procedure for a scenario of duplicate analyses. In this case, doubling the sample means that the "Westgard Rules" with an N of 4 will provide more than 90% error detection. However, if you use a "Westgard Rules" procedure with only two controls, you still fall just shy of the 90% AQA we would like to achieve. Is doubling the sample worth it if the payoff is 83% error detection? That scenario would bring the ARL to about 1.2 (that is, most of the time you would catch an error within the first run, but sometimes you wouldn't catch it until the second run).
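The statistical intuition behind the duplicate-sample scenario is that reporting the mean of two measurements shrinks the random error by a factor of about the square root of 2 (assuming independent replicates). Here is a rough sketch of what that does to the BUN Sigma metric under that assumption; this is a simplification of our own, not necessarily the exact model EZ Rules 3 applies:

```python
from math import sqrt

# BUN figures from the table above
cv_single = 1.9   # worst-case CV%
bias = 2.9        # bias%
tea = 9.0         # CLIA allowable total error, %

# Reporting the mean of two replicates reduces random error by sqrt(2)
cv_duplicate = cv_single / sqrt(2)

sigma_single = (tea - bias) / cv_single        # about 3.2
sigma_duplicate = (tea - bias) / cv_duplicate  # about 4.5

print(f"Single measurement:    Sigma = {sigma_single:.2f}")
print(f"Duplicate measurement: Sigma = {sigma_duplicate:.2f}")
```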

What would you do if you couldn't make replicate measurements of the method and couldn't increase the number of controls to 4? How about reducing bias? If you could reduce bias to zero, here's the performance you could achieve:

If you can eliminate the bias on this method, you can control the quality with just 2 controls and fairly wide (3s) limits, which will also reduce false rejection significantly.
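A quick back-of-the-envelope check shows why eliminating bias helps so much: with the bias term removed, the BUN Sigma metric rises from about 3.2 to roughly 4.7. A minimal sketch, using the same figures as above:

```python
cv, bias, tea = 1.9, 2.9, 9.0          # BUN figures from the table above

sigma_with_bias = (tea - bias) / cv    # about 3.2
sigma_zero_bias = tea / cv             # about 4.7 if bias were eliminated

print(f"Sigma: {sigma_with_bias:.2f} -> {sigma_zero_bias:.2f}")
```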

Thus, if you had to adopt just one QC procedure for the entire set of methods listed here, it would be the 13s rule with two controls. That would provide appropriate error detection for these methods, while reducing false rejection to essentially zero and avoiding the expense of additional controls. Extra care must be taken to reduce the one method's bias, but that care is an investment that pays off in fewer problems down the line.

Conclusions

This is another instance where there is a lot of good news. The methods we studied from the Abbott Architect c16000 show a lot of class - world class performance. While the methods weren't 100% perfect, if you look at comparative methods, you will see that this data represents some pretty stellar performance by an instrument.

This application also illustrates the utility of different QC Design tools. First we used a Sigma-metric calculation to sort out which methods were world class and which were problematic. Then we used a Normalized OPSpecs chart to assess those methods visually. Finally, for the problem method, we used the more sophisticated analysis in EZ Rules 3 to examine it in detail and explore various improvement options. The first two tools are essentially free; the last requires a purchase, but it gives you much more power over your QC Design.