You can't throw a rock, or scroll a feed, without hearing about the promise and peril of AI. What's it going to do to the laboratory?

[Please note: in the tradition of the Westgard website, we usually offer an end-of-year curmudgeonly essay. This one could be subtitled: AI, Bah Humbug!]
In 2025, AI has been everywhere, and the hype about it has been roller-coasting between great promise and deadly peril. It's the savior of humankind. It's the devil that will render all of us obsolete. Great fortunes are being made on these speculations. Millions of jobs may disappear, replaced by cheaper algorithmic equivalents. There will be new and better jobs. There will be no entry-level jobs anymore.
As some of you know, I recently read a book called "Enshittification" by Cory Doctorow. It explains why our experience of using the big monopoly internet platforms has degraded and is likely to get even worse. Well, Doctorow's more recent work - and his next book - addresses this new AI bubble. He gave a preview of this in a recent speech.
"In automation theory, a 'centaur' is a person who is assisted by a machine. You're a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete....And obviously, a *reverse* centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine....
"Obviously, it's nice to be a centaur, and it's jorrible to be a reverse centaur, and it's horrible to be a reverse centaur. There are lots of AI tools that are potentially very centaur-like, but my thesis is that these tools are created and funded for the express purpose of creating reverse-centaurs, which is something none of us want to be."
If you think about how laboratory professionals work today, we're already well on the way toward becoming reverse centaurs. We feed the big box, we service the line, we load and unload. We do all the things that aren't easily automated. Our remaining value comes from the ability to understand and interpret the results of these automated systems.
And now AI is coming for that. AI is being promoted as better than humans at interpreting tests, making diagnoses, and the like. The AI-optimists offer this as an enhancement, but the real pitch, the money proposition, only works out if the deployment of AI ends up slashing the workforce.
"The promise of AI - the promise of AI companies make to investors - is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half your salary for himself, and give the other half to the AI company."
That's all AI is. It's not so much a revolution in capability as a more palatable-looking chainsaw taken to staffing. Wholesale firing for profit is typically considered immoral. AI puts a shiny, futuristic gloss on it.
What I find distressing is the naivete of AI promotion, the assumption that by using LLMs and big data, [insert new techy words here], etc., etc., we'll reliably be able to extract all the correct decisions.
Current healthcare is full of mistakes, distortions, and biases. AI will simply use an algorithm to apply the most popular mistake to a medical decision; there's no inherent ability to avoid the mistakes present in the learning set. AI doesn't think. It eats history and regurgitates the most probable response to a situation.
Let's take QC. Right now 90% of labs repeat the control as one of their troubleshooting steps. About 80% use 2 standard deviations as their control limits, generating ruinously high false rejection rates. We know those repeated controls are, in the overwhelming majority of cases, a complete waste of time. But the AI that teaches itself QC won't know that. AI QC will perpetuate bad QC habits because they are the most popular (most probable) ones. I'm sure the control vendors are happy to hear this, because if AI QC systems repeat, repeat, run new controls, recalibrate, and repeat again, their profit margins will be preserved.
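For anyone who wants the arithmetic behind that "ruinously high false rejection" remark, here is a minimal sketch (mine, not from any Westgard tool), assuming a stable Gaussian process and independent control measurements per run. It prints the chance of rejecting a perfectly good run simply because at least one control strayed past 2 SD by chance.

```python
# A minimal sketch of why 2 SD control limits are so noisy, assuming a
# stable Gaussian process and independent control measurements per run.
import math

# Probability that a single control falls outside +/-2 SD purely by chance
p_single = math.erfc(2 / math.sqrt(2))  # ~0.0455

for n_controls in (1, 2, 3, 4):
    # Probability that at least one of the n controls exceeds +/-2 SD
    p_false_reject = 1 - (1 - p_single) ** n_controls
    print(f"{n_controls} control(s) per run: false rejection ~{p_false_reject:.1%}")
```

With two controls per run the false rejection rate is already close to 9%, and with four it approaches 17% - exactly the kind of noise that popular practice "troubleshoots" by repeating the control, and exactly the habit an AI trained on popular practice would learn.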
Now one of the proposed solutions to this problem is called "human in the loop." That is, every AI decision ultimately has to be reviewed and approved by a human. But that's still a reverse centaur approach, one that is more specifically called an 'accountability sink.' The human is not actually there to oversee the AI; the human is there to be the fall guy when the AI makes a mistake. The AI's failures will be deflected onto the human who didn't catch them. So we'll be able to fire yet one more human, even as the AI fails.
For those humans who do remain in the loop, life as a reverse centaur is not guaranteed to be better. We've already seen one study showing that reliance on AI tools leads to a degradation of skills in pathologists. And the pressure to approve a relentless stream of AI decisions will be overwhelming, making errors even more difficult to spot. The technical term for this risk is automation blindness. Labs are well experienced in the impact of automation on staff. Did life suddenly become easier after automation? No, automation changed the workload of the laboratory professional, but it didn't lower the burden. Expectations just rose higher. Fewer people could do more testing, and when it became possible to fire staff through automation, the workload on the remaining survivors only got heavier.
I don't have a specific prescription or solution to this. Stronger professional standards and organization can help: licensure, unions, and legal requirements that prevent AI from making medical decisions are some of the tools. Doctorow argues convincingly that AI is a massive bubble and that it will collapse, unfortunately causing significant collateral damage. The myth that AI can destroy everyone's job will eventually die a lurid death, but our professional responsibility is to protect as many of our test results as possible. Don't let your patients become guinea pigs in an AI diagnosis experiment. You don't want your patient's cancer "diagnosis" to be caused by an AI hallucination.