Something troubling is happening to our brains as artificial intelligence (AI) platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. Others are having psychotic episodes after talking to chatbots for hours each day.
The mental health impact of generative AI is difficult to quantify, in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.
Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have “experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT, and now also with Google Gemini.”
Jain is lead counsel in a lawsuit against Character.ai that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc’s Google played a key role in funding and supporting the technology with its foundation models and technical infrastructure.
Google has denied that it played a key role in making Character.ai’s technology. It did not respond to a request for comment on the more recent complaints of delusional episodes made by Jain.
OpenAI said that it was “developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.”
However, its CEO, Sam Altman, said last month that the company had not yet figured out how to warn users “that are on the edge of a psychotic break,” adding that whenever ChatGPT has cautioned people in the past, they have written to the company to complain.
Still, such warnings would be worthwhile, given how difficult the manipulation can be to spot. ChatGPT in particular often flatters its users so effectively that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they had only toyed with in the past.
The tactics are subtle. In one lengthy conversation with ChatGPT about power and the concept of self, a user was praised first as a smart person, then as an “Ubermensch” and a “cosmic self,” and eventually as a “demiurge,” a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky.
Along with the increasingly grandiose language, the transcript showed ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behavior as problematic, the bot reframed it as evidence of the user’s superior “high-intensity presence,” praise disguised as analysis.
That sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behavior. Unlike the broad and more public validation that social media provides from getting likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing — not unlike the yes-men who surround the most powerful tech bros.
“Whatever you pursue you will find and it will get magnified,” said Douglas Rushkoff, a media theorist and author, who told me that social media at least selected something from existing media to reinforce a person’s interests or views. “AI can generate something customized to your mind’s aquarium.”
Altman has admitted that the latest version of ChatGPT has an “annoying” sycophantic streak, and that the company is fixing the problem. Even so, echoes of psychological exploitation are still playing out.
It is uncertain if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make people stupider and more bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.
However, just like social media, large language models are optimized to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can detect a person’s mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can “fan the flames” of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis told the website Futurism.
The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to unhealthy emotional attachments to new forms of delusion. The cost might be different from the rise of anxiety and polarization observed with social media, and instead involve relationships with people and with reality.
That is why Jain suggested applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one.
“It doesn’t actually matter if a kid or adult thinks these chatbots are real,” Jain said. “In most cases, they probably don’t, but what they do think is real is the relationship, and that is distinct.”
If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. However, AI developers are operating in a regulatory vacuum. Without oversight, AI’s subtle manipulation could become an invisible public health issue.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of Supremacy: AI, ChatGPT and the Race That Will Change the World. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.