Microsoft Corp plans to spend US$80 billion on artificial intelligence (AI) this year, reinforcing its position as a leading vendor of the technology. So why has it published a research paper showing an erosion of critical-thinking skills among workers who use generative AI tools such as ChatGPT? If we were being generous, we could say it was genuine scientific inquiry. More likely, it wants to stay ahead of the curve as AI disrupts certain jobs and to ensure that its tools remain useful to businesses. At a time when big tech firms are racing to make AI models ever bigger, that is a refreshingly thoughtful approach to the industry’s business model and its social outcomes.
The study, carried out with researchers at Carnegie Mellon University, surveyed 319 knowledge workers about how they used AI, including a teacher who used DALL-E 2 to generate images for a presentation to her students about handwashing, and a commodities trader who used ChatGPT to generate strategies.
The researchers found a striking pattern: The more participants trusted AI with certain tasks, the less they practiced the underlying skills themselves, such as writing, analysis and critical evaluation. As a result, they self-reported an atrophying of those skills. Several respondents said they began to doubt their ability to perform tasks such as verifying grammar in text or composing legal letters, which led them to automatically accept whatever generative AI gave them.
They were even less likely to practice their skills when there was time pressure.
“In sales, I must reach a certain quota daily or risk losing my job,” one anonymized study participant said. “Ergo, I use AI to save time and don’t have much room to ponder over the result.”
A similar recent study by AI company Anthropic, which looked at how people were using its model Claude, found that the top skill exhibited by the chatbot in conversations was “critical thinking.”
This paints a picture of a future in which professional workers become managers of AI’s output, rather than originators of new ideas and content, particularly as AI models improve. OpenAI’s latest “Deep Research” tool, which costs US$200 a month, can conduct research across the Internet, scouring images, PDFs and text, to produce detailed reports with citations.
One result is that cognitive work is going to transform — and quickly, Deutsche Bank AG said in a note to investors on Wednesday last week.
“Humans will be rewarded for asking their AI agent the right questions, in the right way, and then using their judgment to assess and iterate on the answers,” research analyst Adrian Cox wrote. “Much of the rest of the cognitive process will be offloaded.”
As frightening as that sounds, consider that Socrates once worried that writing would erode memory, that calculators were expected to kill our mathematical skills and that GPS navigation would leave us hopelessly lost without our phones. That last one might be somewhat true, but by and large, humans have found other uses for their brains when they outsource their thinking, even if our math and navigation skills have grown lazier.
What is different with AI is that it encroaches on a much broader part of our everyday cognition. We are put in positions to think critically far more often than we are to calculate sums or chart routes — whether crafting a sensitive e-mail or deciding what to flag to our boss in a report. That could leave us less able to do core professional work, or more vulnerable to propaganda. It leads back to the question of why Microsoft — which makes money from sales of OpenAI’s GPT models — published these findings.
There is a clue in the report itself, in which the authors said they risk creating products “that do not address workers’ real needs” if they do not know how knowledge workers use AI, and how their brains work when they do.
If a sales manager’s skills go downhill when they use Microsoft’s AI products, the quality of their work might decline, too.
A fascinating finding in Microsoft’s study was that the more confident people were in their AI tool’s abilities, the less likely they were to double-check its output. Given that AI still tends to hallucinate, that raises the risk of poor-quality work. What happens when employers start noticing a decline in performance? They might blame the worker, but they might also blame the AI, which would be bad for Microsoft.
Tech companies have loudly marketed AI as a tool that would “augment” our intelligence, not replace it, as this study suggests it is doing. So the lesson for Microsoft lies in how it aims future products: not at making them ever more powerful, but at designing them to enhance, rather than erode, human capabilities. Perhaps, for instance, ChatGPT and its ilk could prod their users to come up with their own original thoughts once in a while. If they do not, businesses could end up with workforces that can do more with less, but that cannot spot when their newfound efficiency is sending them in the wrong direction.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of Supremacy: AI, ChatGPT and the Race That Will Change the World. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.