Most people have encountered “AI slop,” the deluge of low-quality content produced by generative artificial intelligence (AI) tools that has inundated the Internet, but is this computer-made hogwash taking over work as well?
News that Deloitte Australia would partially refund the government for a report sprinkled with apparent AI-generated errors has caused a local furor and spurred international headlines.
Australian Senator Barbara Pocock said in a radio interview that the A$440,000 (US$287,001) taxpayer-funded document misquoted a judge and cited nonexistent references.
The alleged AI mistakes are “the kinds of things that a first-year university student would be in deep trouble for,” she said.
Deloitte Australia did not immediately respond to my request for comment, but it has said the corrections did not affect the report’s substance or recommendations, and it told other outlets: “The matter has been resolved directly with the client.”
It is a bad look for the Big Four firm at a time when Australians’ trust in the government’s use of private consulting firms was already fraught, but there is a deeper reason the episode has struck such a nerve.
It has reopened a global debate about the technology’s limitations, and the high cost when it backfires in the workplace. It is not the first case of AI “hallucinations,” chatbots making things up, to surface in a viral way, and it likely will not be the last.
The tech industry’s promise that AI would make us all more productive is part of what is propping up its hundreds of billions of dollars in spending, but the jury is still out on how much of a difference the technology is actually making in the office.
Markets were rattled in August after researchers at the Massachusetts Institute of Technology reported that 95 percent of the firms they surveyed had seen no returns on their investments in generative AI. A separate McKinsey study found that while nearly eight in 10 companies are using the technology, just as many report “no significant bottom-line impact.”
Some of it can be attributed to growing pains as business leaders work out the kinks in the early days of deploying AI in their organizations. Technology companies have responded by putting out their own findings suggesting AI is helping with repetitive office tasks and highlighting its economic value.
However, fresh research suggests some of the tension might be due to the proliferation of “workslop,” which the Harvard Business Review defines as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” It encapsulates the experience of trying to use AI to help with your job, only to find it has created more work for you or your colleagues.
About 40 percent of US desk workers received workslop in the past month, according to a survey published last month by researchers at BetterUp and the Stanford Social Media Lab. Each incident takes an average of two hours to resolve, and the researchers estimate the phenomenon can cost a 10,000-person company US$9 million annually.
It also risks eroding trust at the office, something that is harder to rebuild once it is gone. About one-third of people (34 percent) who receive workslop notify their teammates or managers, and about the same share (32 percent) say they are less likely to want to work with the sender in the future, the Harvard Business Review reported.
There are ways to smooth out the transition. Implementing clear policies is essential. Disclosures of when and how AI was used during workflows can also help restore trust. Managers must make sure that employees are trained in the technology’s limitations, and understand that they are ultimately responsible for the quality of their work regardless of whether they used a machine’s assistance. Blaming AI for mistakes just does not cut it.
The growing cases of workslop should also be a broader wake-up call. At this nascent stage of the technology, there are serious hindrances to the “intelligence” part of AI. The tools might seem good at writing because they recognize patterns in language and mimic them in their outputs, but that should not be equated with a true understanding of materials. In addition, they are sycophantic — they are designed to engage and please users — even if that means getting important things wrong.
As mesmerizing as it can be to see chatbots instantaneously create polished slides or savvy-sounding reports, they are not reliable shortcuts. They still require fact-checking and human oversight.
Despite big assurances that AI will improve productivity, and is thus worth businesses paying big bucks for, people seem to be using it more for lower-stakes tasks. Data suggest that consumers are increasingly turning to these tools outside the office. A majority of ChatGPT queries (73 percent) in June were not work-related, according to a study published last month by OpenAI’s own economic research team and a Harvard economist, up from 53 percent last year.
An irony is that all this might end up being good news for some staff at consulting giants such as the one caught up in the Australia backlash. It turns out AI might not be so good at their jobs just yet.
The more workslop piles up in the office, the more valuable human intelligence becomes.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.