Most people have encountered “AI slop,” the deluge of low-quality content produced by generative artificial intelligence (AI) tools that has inundated the Internet, but is this computer-made hogwash taking over work as well?
News that Deloitte Australia would partially refund the government for a report sprinkled with apparent AI-generated errors has caused a local furor and spurred international headlines.
Australian Senator Barbara Pocock said in a radio interview that the A$440,000 (US$287,001) taxpayer-funded document misquoted a judge and cited nonexistent references.
The alleged AI mistakes are “the kinds of things that a first-year university student would be in deep trouble for,” she said.
Deloitte Australia did not immediately respond to my request for comment, but has said the corrections did not affect the report’s substance or recommendations, and told other outlets: “The matter has been resolved directly with the client.”
Besides being a bad look for the Big Four firm at a time when Australians were already wary of the government’s reliance on private consulting firms, there is a reason the episode has struck such a nerve.
It has reopened a global debate on the limitations — and high cost — of the technology backfiring in the workplace. It is not the first case of AI hallucinations, or chatbots making things up, to surface in viral ways. It likely will not be the last.
The tech industry’s promises that AI would make us all more productive are part of what is propping up its hundreds of billions of dollars in spending, but the jury is still out on how much of a difference the technology is actually making in the office.
Markets were rattled in August after researchers at the Massachusetts Institute of Technology said that 95 percent of firms surveyed had not seen returns on their investments in generative AI. A separate study from McKinsey found that while nearly eight in 10 companies are using the technology, just as many report “no significant bottom-line impact.”
Some of it can be attributed to growing pains as business leaders work out the kinks in the early days of deploying AI in their organizations. Technology companies have responded by putting out their own findings suggesting AI is helping with repetitive office tasks and highlighting its economic value.
However, fresh research suggests some of the tension might be due to the proliferation of “workslop,” which the Harvard Business Review defines as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” It encapsulates the experience of trying to use AI to help with your job, only to find it has created more work for you or your colleagues.
About 40 percent of US desk workers received workslop over the past month, according to a survey published last month by researchers at BetterUp and the Stanford Social Media Lab. Each incident takes an average of about two hours to resolve, and the phenomenon can cost a 10,000-person company about US$9 million annually.
It also risks eroding trust at the office, something that is harder to rebuild once it is gone. About one-third of people (34 percent) who receive workslop notify their teammates or managers, and about the same share (32 percent) say they are less likely to want to work with the sender in the future, the Harvard Business Review reported.
There are ways to smooth out the transition. Implementing clear policies is essential. Disclosures of when and how AI was used during workflows can also help restore trust. Managers must make sure that employees are trained in the technology’s limitations, and understand that they are ultimately responsible for the quality of their work regardless of whether they used a machine’s assistance. Blaming AI for mistakes just does not cut it.
The growing cases of workslop should also be a broader wake-up call. At this nascent stage of the technology, there are serious hindrances to the “intelligence” part of AI. The tools might seem good at writing because they recognize patterns in language and mimic them in their outputs, but that should not be equated with a true understanding of materials. In addition, they are sycophantic — they are designed to engage and please users — even if that means getting important things wrong.
As mesmerizing as it can be to see chatbots instantaneously create polished slides or savvy-sounding reports, they are not reliable shortcuts. They still require fact-checking and human oversight.
Despite big assurances that AI will improve productivity, and is thus worth businesses paying big bucks for, people seem to be using it more for lower-stakes tasks. Data suggest that consumers are increasingly turning to these tools outside of the office. A majority of ChatGPT queries (73 percent) in June were non-work related, up from 53 percent a year earlier, according to a study published last month by OpenAI’s own economic research team and a Harvard economist.
An irony is that all this might end up being good news for some staff at consulting giants such as the one caught up in the Australia backlash. It turns out AI might not be so good at their jobs just yet.
The more workslop piles up in the office, the more valuable human intelligence becomes.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.