Most people have encountered “AI slop,” the deluge of low-quality content produced by generative artificial intelligence (AI) tools that has inundated the Internet, but is this computer-made hogwash taking over work as well?
News that Deloitte Australia would partially refund the government for a report sprinkled with apparent AI-generated errors has caused a local furor and spurred international headlines.
Australian Senator Barbara Pocock said in a radio interview that the A$440,000 (US$287,001) taxpayer-funded document misquoted a judge and cited nonexistent references.
The alleged AI mistakes are “the kinds of things that a first-year university student would be in deep trouble for,” she said.
Deloitte Australia did not immediately respond to my request for comment, but it has said the corrections did not affect the report’s substance or recommendations, and it told other outlets that “the matter has been resolved directly with the client.”
It is a bad look for the Big Four firm at a time when Australians’ trust in the government’s use of private consulting firms was already fraught, but there is a deeper reason the episode has struck such a nerve: It has reopened a global debate about the limitations, and the high cost, of the technology backfiring in the workplace. It is not the first case of AI hallucinations (chatbots making things up) to surface in a viral way, and it likely will not be the last.
The tech industry’s promises that AI would make us all more productive are part of what is propping up its hundreds of billions of dollars in spending, but the jury is still out on how much of a difference the technology is actually making in the office.
Markets were rattled in August after researchers at the Massachusetts Institute of Technology said that 95 percent of firms surveyed have not seen returns on investments into generative AI. A separate study from McKinsey found that while nearly eight in 10 companies are using the technology, just as many report “no significant bottom-line impact.”
Some of it can be attributed to growing pains as business leaders work out the kinks in the early days of deploying AI in their organizations. Technology companies have responded by putting out their own findings suggesting AI is helping with repetitive office tasks and highlighting its economic value.
However, fresh research suggests some of the tension might be due to the proliferation of “workslop,” which the Harvard Business Review defines as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” It encapsulates the experience of trying to use AI to help with your job, only to find it has created more work for you or your colleagues.
About 40 percent of US desk workers received workslop over the past month, according to a survey published last month by researchers at BetterUp and the Stanford Social Media Lab. Each incident takes an average of two hours to resolve, and the phenomenon can cost a 10,000-person company about US$9 million a year.
It also risks eroding trust at the office, something that is harder to rebuild once it is gone. About one-third of people (34 percent) who receive workslop notify their teammates or managers, and about the same share (32 percent) say they are less likely to want to work with the sender in the future, the Harvard Business Review reported.
There are ways to smooth out the transition. Implementing clear policies is essential. Disclosures of when and how AI was used during workflows can also help restore trust. Managers must make sure that employees are trained in the technology’s limitations, and understand that they are ultimately responsible for the quality of their work regardless of whether they used a machine’s assistance. Blaming AI for mistakes just does not cut it.
The growing cases of workslop should also be a broader wake-up call. At this nascent stage of the technology, there are serious hindrances to the “intelligence” part of AI. The tools might seem good at writing because they recognize patterns in language and mimic them in their outputs, but that should not be equated with a true understanding of materials. In addition, they are sycophantic — they are designed to engage and please users — even if that means getting important things wrong.
As mesmerizing as it can be to see chatbots instantaneously create polished slides or savvy-sounding reports, they are not reliable shortcuts. They still require fact-checking and human oversight.
Despite the big assurances that AI would improve productivity, and thus be worth businesses paying big bucks for, people seem to be using it more for lower-stakes tasks. Data suggest that consumers are increasingly turning to these tools outside of the office. A majority of ChatGPT queries (73 percent) in June were non-work related, according to a study published last month by OpenAI’s own economic research team and a Harvard economist, up from 53 percent last year.
An irony is that all this might end up being good news for some staff at consulting giants such as the one caught up in the Australia backlash. It turns out AI might not be so good at their jobs just yet.
The more workslop piles up in the office, the more valuable human intelligence will become.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.