QC Validation in Veterinary Laboratories
Kathleen P. Freeman DVM, Ph.D., offers a testimonial on how she used the Westgard QC Planning Process to improve the quality of her laboratory. Hey, even pets need QC! (And if your dog's blood tests are performed with better QC than your own lab tests, shouldn't you do something about it?)
Head, Clinical Pathology and Diagnostic Services
Animal Health Trust, Lanwades Park,
Kentford, Newmarket, Suffolk CB8 7UU
- QC Training in Veterinary Laboratories
- Getting started with QC validation
- Applying the QC Validation Process
I've had the opportunity to apply the QC validation process to evaluate hematologic and clinical chemistry QC data at 2 different veterinary laboratories. My longstanding interest in and enthusiasm for QA/QC stems from my days as a resident (17 years ago!) and has largely been self-taught. The jobs I have held and the challenges I've faced in meeting clients' questions and concerns have 'forced' me to learn about QC/QA as a way of overcoming existing problems, preventing future problems, providing superior service and staying competitive. One of my Lab Managers (an accomplished MT) once told me she thought I was one of the few individuals she knew who could break into song about the beauties of a Levey-Jennings chart.
QC training in veterinary laboratories
Although I am currently working in the U.K., my education has been in the United States. In both locations, veterinary laboratories may utilize technicians from human-based training programs, but also may have a variety of veterinary nurses or technicians with on-the-job training since there are no stringent legal requirements. In the laboratories where I have been, MTs or MLTs have usually been in the minority. Therefore, there is great variety in the level of training and understanding of QC/QA among veterinary laboratory personnel. Veterinary clinical pathologists also vary greatly in their training. A questionnaire conducted by the American Society of Veterinary Clinical Pathology (Education Committee Report, 1998) indicated a perceived need for additional training in QC/QA and the fact that many residency training programs considered this a 'weak area' in their curricula.
Getting started with QC Validation
I first saw information about QC validation in a flyer that listed a variety of books and manuals, including those by Westgard on a variety of QC topics. I ordered two manuals ('OPSpecs Manual: Operating Specifications for Precision, Accuracy and Quality Control' and 'Planning and Validating QC Procedures: Workshop Manual,' 2nd edition) and set about analyzing data according to the QC validation process recommended in the directions. I was determined to master this material and learn all I could about the fascinating subject of QC validation!
Applying the QC validation process
It was not easy to understand at first, and I had to think about how the concepts applied to veterinary medicine. Determination of total allowable error and bias took the most time. However, it made me think critically about the level of performance and assurance of quality that we desired in the laboratory. This was a useful exercise by itself!
Here's how I worked through the process:
- Definition of the total allowable error is the starting point for the QC validation process. The total allowable error is supposed to represent the largest amount of error that can be tolerated without invalidating the usefulness of the test. Use of the USA CLIA Requirements for Proficiency Testing provided a baseline, but they were more stringent than required for several analytes in the veterinary diagnostic laboratory setting. It should be noted that the total allowable error can be expected to vary among veterinary laboratories, depending on the experience of the pathologist, the species analyzed, the population analyzed, and reference intervals used.
- Determination of bias also represented a challenge, since the veterinary proficiency testing programs do not use assayed material, but report only the mean and standard deviation of participating laboratories. I looked at our QC data relative to the manufacturers' means for the control materials and relative to the means obtained in proficiency testing by other laboratories using the same equipment and methodology. This included data over about 6 months, with several lot numbers of control materials and several intervals of proficiency testing. Significant bias was found in several hematologic analytes. I tried to identify any possible bias and, if unsure whether bias existed, used the estimate of bias that represented the highest level I suspected might be present. Since the laboratories had a limited budget, additional analyses for determination of bias were not conducted. Determination of bias was complicated for some analytes by shifting means of QC data over time as the control material aged and deteriorated. This was handled by periodically adjusting the means over the life of the control material, since patterns in the shifts were apparent.
- Determination of the imprecision of the methods was easier. Statistical data regarding means, standard deviation and coefficient of variation was obtained from statistical programs within the instruments or calculated based on 2-3 months of QC data.
- Plotting the operating points (observed bias as y, observed imprecision as x) on the OPSpecs Charts allowed estimation of allowable inaccuracy (systematic error) and allowable imprecision (random error) and indicated that we were not able to meet our goals for total allowable error for several analytes. Usually this was based on a problem with a single level of control material, but sometimes was present with all levels of control material. Since I was doing this manually (not with a computer program) it took some time to go through each analyte for each level of control (3 levels of control for haematology, 2 levels of control for chemistry).
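The arithmetic behind these steps can be sketched in a few lines. All numbers below are hypothetical (illustrative QC values, peer-group mean, and total allowable error goal, not the laboratory's actual data); the critical-systematic-error formula is the standard Westgard one, with the 1.65 term corresponding to a 5% maximum defect rate.

```python
import statistics

# Hypothetical QC results for one control level of one analyte.
qc_values = [4.1, 4.0, 4.2, 3.9, 4.1, 4.0, 4.1, 4.2]
peer_mean = 4.00   # peer-group mean from proficiency testing (hypothetical)
tea_pct = 10.0     # total allowable error goal, in percent (hypothetical)

# Imprecision: CV% = SD / mean * 100 (sample SD, n - 1 denominator).
mean = statistics.mean(qc_values)
cv_pct = statistics.stdev(qc_values) / mean * 100.0

# Bias relative to the peer-group mean, as a percent.
bias_pct = (mean - peer_mean) / peer_mean * 100.0

# Critical systematic error, in SD units: the smallest medically
# important shift that the QC procedure must be able to detect.
dse_crit = (tea_pct - abs(bias_pct)) / cv_pct - 1.65

print(f"mean = {mean:.3f}, CV = {cv_pct:.2f}%, bias = {bias_pct:+.2f}%")
print(f"critical systematic error = {dse_crit:.2f} SD")
# A large dSEcrit (roughly > 3 SD) means a simple rule gives good error
# detection; a small value means statistical QC alone is weak, which is
# the situation an OPSpecs chart makes visible graphically.
```

This is only the numerical shorthand for the decision the OPSpecs chart encodes; the chart itself additionally accounts for the error detection and false rejection of specific QC rules and numbers of control measurements.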
The QC validation approach made a lot of sense to me. Previously, I did not have a 'goal' in the form of total allowable error to help me determine what levels of variation (CV) and bias were important for a particular analyte. As a result of QC validation, we were able to identify several important factors that needed to be addressed.
- A change in some clinical chemistry reagents was needed in order to meet our goals for total allowable error and reduce bias and/or CV to within acceptable limits.
- Training in the proper handling of haematology QC material was needed in order to promote its long-term stability and reduce variation in QC data. As a result of this exercise, the technicians were shown the results and graphs and were encouraged to understand the basis for these. Introducing this topic led them to ask additional questions about QC and helped them appreciate the reason for additional training, careful handling of QC materials and their participation in QC/QA activities within the laboratory. By explaining clinically significant levels of error detection and relating the QC parameters to actual patients and critical levels of medically important error detection, many of the technicians became more aware of the significance of the numbers they were producing. Follow-up questionnaires administered to the technicians indicated that the majority appreciated the effort, additional discussion, and training.
- Different QC materials were evaluated to determine if QC performance could be further improved after technical training.
- For a few analytes (where optimal handling of QC material was already in place, no further improvements in analyte precision and accuracy were possible, and the QC validation process showed that statistical QC alone could not be relied upon for a high level of assurance of quality), additional non-statistical QC methods were instituted, including repeat criteria for certain abnormal results, review of normal and abnormal patient data, and correlation with results of other types of tests. Some of these improvements had previously been considered, but the data provided extra support and emphasized the need for their implementation. Improved definition of these parameters also helped prioritize the duties of technicians for specific analyses and increased their awareness of the reasons for correlative testing, patient data review and repeat criteria.
- For many tests in both laboratories, we were able to simplify our QC rule application (from 1-2s in one laboratory and a complicated multi-rule in the other) by using a 1-3s rule, which could be programmed into the analyzer to flag abnormal results as a reminder to the technical staff. We were able to decrease the number of false rejections and to save time on QC analysis and documentation for these analytes. Additional time could then be spent on statistical or non-statistical QC/QA activities in areas that warranted it.
- Subsequent QC audits were simplified, since defined levels of variation (CV) and bias had been obtained for each analyte at a particular level of assurance of quality and total allowable error. QC printouts could be more easily evaluated to determine that CV and bias were at or below the defined levels, ensuring continued good performance.
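The 1-3s flagging described above is simple to express in code; the control mean and SD used here are hypothetical, not from the laboratory's actual materials.

```python
def violates_1_3s(value: float, mean: float, sd: float) -> bool:
    """Flag a QC result falling outside mean +/- 3 SD (the 1-3s rule)."""
    return abs(value - mean) > 3.0 * sd

# Hypothetical control: mean 100, SD 2, so the limits are 94 to 106.
print(violates_1_3s(103.0, 100.0, 2.0))  # within limits -> False
print(violates_1_3s(107.0, 100.0, 2.0))  # beyond +3 SD -> True
```

A single-rule check like this is what an analyzer can typically be programmed to apply automatically, which is what made the simplification from a multi-rule procedure practical.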
QC Validation was a very useful exercise! As a result of going through the QC validation exercise, I have a better understanding of the potential performance of the various control materials and of the variation present in patient sample analyses. Many personnel in the laboratory benefited from the structure provided by this type of analysis and have an increased understanding of the importance of QC, pride in their roles in QC/QA and confidence in their ability to analyze QC data. I've recommended this exercise to other veterinary laboratories in several presentations regarding our experiences with QC validation.
I'd love to hear from other veterinary technicians and pathologists who have done this in their laboratories. I appreciate the invitation from Dr. Westgard to present my experiences in this essay.