Most people have encountered “AI slop,” the deluge of low-quality content produced by generative artificial intelligence (AI) tools that has inundated the Internet, but is this computer-made hogwash taking over work as well?
News that Deloitte Australia would partially refund the government for a report sprinkled with apparent AI-generated errors has caused a local furor and spurred international headlines.
Australian Senator Barbara Pocock said in a radio interview that the A$440,000 (US$287,001) taxpayer-funded document misquoted a judge and cited nonexistent references.
The alleged AI mistakes are “the kinds of things that a first-year university student would be in deep trouble for,” she said.
Deloitte Australia did not immediately respond to my request for comment, but has said the corrections did not impact the report’s substance or recommendations, and told other outlets that: “The matter has been resolved directly with the client.”
Besides being a bad look for the Big Four firm at a time when Australians' trust in government use of private consulting firms was already fraught, there is a deeper reason the episode has struck such a nerve.
It has reopened a global debate on the limitations, and the high cost, of the technology backfiring in the workplace. It is not the first case of AI "hallucinations," or chatbots making things up, to surface in viral ways, and it likely will not be the last.
The tech industry's promise that AI would make us all more productive is part of what is propping up its hundreds of billions of dollars in spending, but the jury is still out on how much difference it is actually making in the office.
Markets were rattled in August after researchers at the Massachusetts Institute of Technology said that 95 percent of firms surveyed have not seen returns on investments into generative AI. A separate study from McKinsey found that while nearly eight in 10 companies are using the technology, just as many report “no significant bottom-line impact.”
Some of it can be attributed to growing pains as business leaders work out the kinks in the early days of deploying AI in their organizations. Technology companies have responded by putting out their own findings suggesting AI is helping with repetitive office tasks and highlighting its economic value.
However, fresh research suggests some of the tension might be due to the proliferation of “workslop,” which the Harvard Business Review defines as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” It encapsulates the experience of trying to use AI to help with your job, only to find it has created more work for you or your colleagues.
About 40 percent of US desk workers received workslop over the past month, according to a survey published last month by researchers at BetterUp and the Stanford Social Media Lab. Each incident takes an average of two hours to resolve, and the researchers estimate the phenomenon can cost a 10,000-person company US$9 million annually.
It can also risk eroding trust at the office, something that is harder to rebuild once it is gone. About one-third of people (34 percent) who receive workslop notify their teammates or managers, and about the same share (32 percent) say they are less likely to want to work with the sender in the future, the Harvard Business Review reported.
There are ways to smooth out the transition. Implementing clear policies is essential. Disclosures of when and how AI was used during workflows can also help restore trust. Managers must make sure that employees are trained in the technology’s limitations, and understand that they are ultimately responsible for the quality of their work regardless of whether they used a machine’s assistance. Blaming AI for mistakes just does not cut it.
The growing cases of workslop should also be a broader wake-up call. At this nascent stage of the technology, there are serious hindrances to the "intelligence" part of AI. The tools might seem good at writing because they recognize patterns in language and mimic them in their outputs, but that should not be equated with a true understanding of the material. In addition, they are sycophantic: They are designed to engage and please users, even if that means getting important things wrong.
As mesmerizing as it can be to see chatbots instantaneously create polished slides or savvy-sounding reports, they are not reliable shortcuts. They still require fact-checking and human oversight.
Despite the big assurances that AI will improve productivity, and is thus worth businesses paying big bucks for, people seem to be using it more for lower-stakes tasks. Data suggest that consumers are increasingly turning to these tools outside of the office. A majority of ChatGPT queries (73 percent) in June were non-work related, according to a study published last month from OpenAI’s own economic research team and a Harvard economist. That is up from 53 percent last year.
An irony is that all this might end up being good news for some staff at consulting giants such as the one caught up in the Australia backlash. It turns out AI might not be so good at their jobs just yet.
The more workslop piles up in the office, the more valuable human intelligence will become.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.