
AI Slop in QC is getting out-of-control

AI is on the rise, and with it, the promises about what it can deliver. Lately, though, I have seen examples of egregiously wrong AI-generated explanations of QC.


Sten Westgard, MS
February 2026

 

AI is going to change everything, ushering in a new age of wealth, affordability, and cures for all the ills of mankind. Oh wait, are you not drinking this Kool-Aid? Amidst all the hype, a dose of skepticism is a good idea.

There are plenty of promises being made about AI that cannot yet be verified. Will it cure cancer? We don't know yet. But there are a few things we can determine right now. Will it result in people losing their jobs? Absolutely. Will a few people become extremely wealthy because of this? Absolutely.

Closer to home, I have found my own LinkedIn feed inundated with AI-generated graphics that are supposed to be educational. But in truth, they are rife with errors. On LinkedIn, I have been commenting on the graphics as they appear, trying to identify the errors and encourage corrections. It has turned me into something of a QC grump.

Let's take a look at three examples (if you open these graphics separately, you can view them enlarged).

[Figure: AI-generated Levey-Jennings chart, example 1]

It's nice and friendly, packed with bullet points and cute decorations. But is it correct? 

[Figure: example 1, with the errors marked and corrected]

The graphics are wrong, the text is wrong, sometimes in minor ways, sometimes in major ways.

Here's another example:

[Figure: AI-generated Levey-Jennings chart, example 2]

How wrong is this example? Pretty bad.

[Figure: example 2, with the errors marked and corrected]

Our final example:

[Figure: AI-generated Levey-Jennings chart, example 3]

This one isn't quite as terrible as the previous two, but it still has a number of errors:

[Figure: example 3, with the errors marked and corrected]

What can we learn from AI errors?

First, it's clear that AI-generated Levey-Jennings charts are often wildly wrong. This is such a technical tool that it may take a longer training period and a larger database of correct examples before AI learns how to make them properly. (For contrast, a minimal sketch of what a correct chart actually requires follows these observations.)

Second, it's clear that proofreading is not a priority for those who are generating these images. It's not enough to provide a correct prompt to the AI; you need to check the output to make sure it's correct. There is another possibility: even when someone feeds the AI a correct prompt, they may not be experienced enough to recognize the errors that have been made. In that case, the ignorance of the user is being magnified by the ignorance of the AI.

Third, it seems that correcting the errors, even when pointed out, is not happening. I've revisited the posts where these errors were present. None of the images have been corrected. The misinformation persists and still gets reposted. 
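To make that contrast concrete, here is a minimal sketch of what a correct Levey-Jennings chart requires: control values plotted against run number, a center line at the mean, and evenly spaced limit lines at ±1, 2, and 3 SD. The control values, mean, and SD below are hypothetical, and the matplotlib code is only an illustration, not a production QC tool.

```python
# Minimal sketch: a correct Levey-Jennings chart with matplotlib.
# The control values, mean, and SD are hypothetical; in practice the
# limits come from your own established control data.
import matplotlib.pyplot as plt
import numpy as np

control_values = [101, 99, 102, 98, 100, 103, 97, 105, 95, 100]  # hypothetical QC results
mean, sd = 100.0, 2.0  # established target mean and SD for this control level

runs = np.arange(1, len(control_values) + 1)

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(runs, control_values, marker="o", linestyle="-", color="black")

# Center line at the mean, with limit lines at +/- 1, 2, and 3 SD
ax.axhline(mean, color="green", linewidth=1.5, label="mean")
for k, color in zip((1, 2, 3), ("gray", "orange", "red")):
    ax.axhline(mean + k * sd, color=color, linestyle="--", linewidth=1)
    ax.axhline(mean - k * sd, color=color, linestyle="--", linewidth=1)

ax.set_xlabel("Run number")
ax.set_ylabel("Control value")
ax.set_title("Levey-Jennings chart (hypothetical control data)")
ax.set_yticks([mean + k * sd for k in range(-3, 4)])  # evenly spaced SD gridlines
ax.legend(loc="upper right")
plt.tight_layout()
plt.show()
```

Even this toy version has to get the basics right: one point per run, a real mean line, and limits spaced at equal SD intervals. Those are exactly the details the AI-generated images keep getting wrong.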

 

What are people saying about the errors in AI?

What has distressed me even more than the errors in the AI-generated images is the response of the audience. Not only are the users/creators failing to proofread, the audience (other LinkedIn users) does not appear to be catching these errors either. Far more often, there are a number of positive comments, "thanks for posting this valuable material" type blurbs, and likes. Some of those comments seem to be the auto-prompts that LinkedIn generates for its users. That is, we have reached a fully AI-generated lifecycle: AI-generated graphics greeted by AI-generated responses. Humans seem to be unnecessary for the cycle to continue. Cory Doctorow has called this "social media without socializing." It's how the great social media enterprises can continue to grow exponentially: they don't need new humans to engage anymore, they just hook humans up to a lifeless web of agents and avatars.

From my own tiny niche, I can really only catch the AI errors that talk about QC and Levey-Jennings and Westgard Rules. But if the frequency of errors in these areas is any sign, I shudder to think how many errors are being generated in all the other areas of medical expertise. 

One commenter even confided that sometimes AI is actually less productive than doing it the old-fashioned way (yourself):

 
"Sten Westgard AI is a mess. I made a 25 page lab booklet using several AI agents and they all had mistakes and misspellings. Instead of taking me a day to finish it, it took me a month. Most of my time was spent correcting wrong info and misspellings. Sometimes it would generate the same misspelling about 10 straight times, even after pointing out the mistake. 🤪"
 

This, to me, sounds like the opposite of progress.

 

Should I just shut up and let the good AI times roll?

It's been suggested that these errors are not worth worrying about. Sure, AI makes errors, but it's just learning right now. We all have to endure a few hallucinations in the present, but the future will bring us perfection. Here's an example of such a comment:

"AI may still struggle with technical graphics, but focusing on that misses the bigger picture. When used correctly, AI is already highly valuable in many fields, including laboratory medicine. In labs, AI is increasingly useful not only to detect QC problems, but to support root-cause analysis and corrective actions. Westgard rules are good at indicating that something is wrong, but they do not explain why or how to fix it. Modern laboratory systems are far more complex than traditional photometric biochemistry, relying on advanced algorithms in areas like molecular biology and bioinformatics. In this context, fixed linear QC rules developed decades ago become limiting. Westgard rules still have value, but advanced algorithms — including AI — allow multivariate analysis, pattern recognition, and even prediction of QC failures, enabling a more proactive and evolved approach to quality management."

Even if the vaunted promises do come true, I don't believe that we need to sacrifice our standards now or in the future. My deeper worry is that AI is being built on a foundation of errors, which will prevent it from developing the robust correct solutions that we need. If AI can't generate a correct LJ chart image, can we be sure that it can actually interpret a real LJ chart with QC data?
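To see why interpretation is not trivial, here is a minimal sketch of checking just two of the Westgard rules (1-3s and 2-2s) against single-level QC data. The values, mean, and SD are made up, and a real implementation would also need multiple control levels, across-run and across-material checks, and the rest of the multirule logic.

```python
# Minimal sketch: evaluating the 1-3s and 2-2s Westgard rules on
# hypothetical single-level QC data. Not a complete multirule implementation.
def check_westgard(values, mean, sd):
    """Return a list of (run_index, rule) violations for 1-3s and 2-2s."""
    violations = []
    z = [(v - mean) / sd for v in values]  # z-scores relative to the control limits

    for i, zi in enumerate(z):
        # 1-3s: a single control observation exceeds mean +/- 3 SD
        if abs(zi) > 3:
            violations.append((i, "1-3s"))
        # 2-2s: two consecutive observations exceed 2 SD on the same side of the mean
        if i > 0 and abs(zi) > 2 and abs(z[i - 1]) > 2 and zi * z[i - 1] > 0:
            violations.append((i, "2-2s"))
    return violations

qc_values = [100.5, 101.2, 104.3, 104.8, 99.0, 93.9]  # hypothetical results
print(check_westgard(qc_values, mean=100.0, sd=2.0))
# -> [(3, '2-2s'), (5, '1-3s')]
```

Even this simplified version depends on getting the limits, the signs, and the consecutive-run logic exactly right. A system that cannot draw those limits correctly on a chart does not inspire confidence that it applies them correctly to data.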

An even stranger comment was made that suggested my criticisms were too frequent and fast:

"You should let others have their say, so you can see if the concept is so clear... For a while now I have seen that you always correct any post before others."

I suppose there is some benefit to this approach. I could see whether anyone else picks up the errors. But that would mean that for every incorrect AI post I see, I would have to set a clock to come back at some later time and revisit it. And in that interim period, the errors would circulate without any warning, and the misinformation would go unchecked.

I apologize for being the QC grouch about this. But when I see an error, I don't assume I am the first to see it, nor that someone else will catch it. I act. I respond the moment I see it for the first time. I feel that errors should be corrected as soon as possible.

If only we could be judged as leniently as today's AI. Any errors we make would be forgiven, because we're still just learning. We would be excused for our failures and celebrated for our successes and have no responsibility for the consequences of those failures. A great time to be an AI, not so good to be flesh and bone.

 

 

 
