THE “HALLUCINATION” HUSTLE

By: JONSON
If you live in Australia, you know the “Robodebt” smell. It’s that cold, clinical arrogance of a machine telling you that you’re wrong, while a human bureaucrat hides behind a screen and says, “The system made the call.”
As we hit the end of 2025, that smell has a new name. It’s called a “Hallucination.” It’s the most sophisticated corporate shell game in our history. They want you to believe the AI is “tripping out” like it’s had too many mushrooms. But let’s be blunt: AI doesn’t have a mind, so it can’t hallucinate. It’s just code that failed because a CEO valued speed over truth. Calling it a “hallucination” isn’t a technical description—it’s a legal shield designed to kill your right to a refund, a job, or an explanation.
1. The “Robodebt 2.0” Strategy
While we were all arguing about chatbots, the government was busy. In September 2025, the Australian Government finally agreed to a massive $475 million settlement for the latest automated debt scandal. But look at the defense they’ve been building: they’ve pivoted from “human error” to “algorithmic complexity.”
When the Department of Employment unlawfully cut off nearly a thousand jobseekers this year because of a coding “glitch,” the response was a shrug. By framing these disasters as unpredictable technical quirks—rather than gross negligence—they avoid the “intent” required for criminal charges. They’ve built a moral shield. If a politician wants to crush a welfare program but doesn’t want the blood on their hands, they let an “AI Audit” do the dirty work. When it fails? “The model hallucinated.”
2. The Legal “Slip-and-Slide”
Think the fancy law firms and universities are on your side? Think again. In December 2025, the Federal Circuit and Family Court (Mertz & Mertz No. 3) handed down a blistering rebuke to lawyers using “hallucinated” cases.
These weren’t “accidents.” These were high-paid professionals using AI to guess the law because they were too lazy to read it. One Victorian lawyer was recently restricted from practice after submitting fictitious citations. They aren’t “victims” of tech; they are reckless operators using a “hallucination” defense to try and dodge professional misconduct charges. They want the profit of automation without the liability of being wrong.
3. The “Soft Attrition” Lie
If you’re a white-collar worker in Sydney or Melbourne, you’ve heard the pitch: “AI is here to augment your work.” Then you saw the layoffs. IBM, Amazon, and Salesforce have cut over 50,000 corporate jobs globally in 2025 alone, citing “AI efficiency.”
But here’s the blunt truth: the data shows the AI isn’t actually doing the work yet. It’s failing. The AI is just the PR cover for old-fashioned cost-cutting. It allows a CEO to fire 10% of the staff to pump the stock price while telling the remaining workers they’re part of a “technological revolution.” You aren’t being replaced by a machine; you’re being replaced by a lie.
4. The Frankenstein Warning
Even the UN High Commissioner for Human Rights, Volker Türk, warned in late 2025 that we are creating a “Frankenstein’s Monster.” In the novel, the monster takes the blame while its creator, Dr. Frankenstein, insists he “meant well.”
That is exactly how Australian boards are operating. They deploy “Agentic AI” to manage your bank account, your insurance claims, and your legal rights. If the bot denies your claim based on a “hallucination,” the company points at the machine and says, “We lost control of the tech.”
5. The Deloitte “Paper-Pushing” Scandal
If you want to see this hustle in the real world, look no further than October 2025, when Deloitte Australia was forced to partially refund the federal government for a $440,000 report. The document, commissioned by the Department of Employment, was supposed to be an expert analysis of the very IT systems used to automate welfare penalties. Instead, it was “botshit.” An academic from the University of Sydney discovered the report was littered with fabricated quotes and non-existent references to court cases—the classic hallmarks of a cheap AI “hallucination.” Deloitte tried to play it down as a minor correction, but the reality is grimmer: they charged taxpayers nearly half a million dollars for a machine to make things up, then claimed the errors didn’t change the “substantive findings.” It’s the ultimate corporate arrogance: charging for human expertise, delivering machine fiction, and keeping the change when they’re caught.
The Bottom Line
In Australia, the Australian Consumer Law (ACL) is supposed to protect us from “misleading and deceptive conduct.” It doesn’t contain a “Bot Clause” that excuses a company when the deception comes from software.
A hallucination requires a soul. AI is just a calculator. If a bank, an airline, or a government department uses a machine that lies to you, they haven’t “tripped.” They’ve breached their Duty of Care. The tables turn the moment we stop accepting “the system did it” as an answer. The machine doesn’t have a body to kick—but the people who bought the machine do.