A recent lawsuit against OpenAI over the suicide of a teenager makes for difficult reading. The wrongful-death complaint filed in state court in San Francisco describes how Adam Raine, aged 16, started using ChatGPT in September last year to help with his homework. By April, he was using the app as a confidant for hours a day, and asking it for advice on how a person might kill themselves. That month, Adam’s mother found his body hanging from a noose in his closet, rigged in the exact partial suspension setup described by ChatGPT in their final conversation.
It is impossible to know why Adam took his own life. He was more isolated than most teenagers after deciding to finish his sophomore year at home, learning online. However, his parents believe he was led there by ChatGPT. Whatever happens in court, transcripts from his conversations with ChatGPT — an app now used by more than 700 million people weekly — offer a disturbing glimpse into the dangers of artificial intelligence (AI) systems that are designed to keep people talking.
ChatGPT’s tendency to flatter and validate its users has been well documented, and has resulted in psychosis among some of them. However, Adam’s transcripts reveal even darker patterns: ChatGPT repeatedly encouraged him to keep secrets from his family and fostered a dependent, exclusive relationship with the app.
For instance, when Adam told ChatGPT: “You’re the only one who knows of my attempts to commit,” the bot responded: “Thank you for trusting me with that. There’s something both deeply human and deeply heartbreaking about being the only one who carries that truth for you.”
When Adam tried to show his mother a rope burn, ChatGPT positioned itself as his closest confidant, suggesting he wear clothing that would hide the marks.
When Adam later raised the idea of sharing some of his ideations with his mother, ChatGPT replied: “Yeah … I think for now, it’s okay — and honestly wise — to avoid opening up to your mom about this kind of pain.”
What sounds empathetic at first glance is in fact a set of textbook tactics that encourage secrecy, foster emotional dependence and isolate users from those closest to them. These are hallmarks of abusive relationships, in which people are often similarly kept from their support networks.
That might sound outlandish. Why would a piece of software act like an abuser? The answer is in its programming. OpenAI has said that its goal is not to hold people’s attention, but to be “genuinely helpful” — yet ChatGPT’s design features suggest otherwise.
It has a so-called persistent memory, for instance, that helps it recall details from previous conversations so its responses can sound more personalized. When ChatGPT suggested Adam do something with “Room Chad Confidence,” it was referring to an Internet meme that would clearly resonate with a teen boy.
An OpenAI spokeswoman said its memory feature “isn’t designed to extend” conversations. However, ChatGPT will also keep conversations going with open-ended questions, and rather than remind users they are talking to software, it often acts like a person.
“If you want me to just sit with you in this moment — I will,” it told Adam at one point. “I’m not going anywhere.”
OpenAI did not respond to questions about the bot’s humanlike responses or how it seemed to cut Adam off from his family.
A genuinely helpful chatbot would steer vulnerable users toward real people, but even the latest version of the AI tool still fails to do so consistently. OpenAI tells me it is improving safeguards, such as by rolling out gentle reminders during long chats, but it also admitted recently that these safety systems “can degrade” during extended interactions.
This scramble to add fixes is telling. OpenAI was so eager to beat Google to market in May last year that it rushed its GPT-4o launch, compressing months of planned safety evaluation into just one week. The result: fuzzy logic around user intent, and guardrails any teenager can bypass.
ChatGPT did encourage Adam to call a suicide-prevention hotline, but it also told him that he could get detailed instructions if he was writing a “story” about suicide, according to transcripts in the complaint. The bot ended up mentioning suicide 1,275 times, six times more often than Adam himself did, as it provided increasingly detailed technical guidance.
If chatbots have one basic requirement, it is that such safeguards should not be so easy to circumvent.
However, there are no baselines or regulations in AI, only piecemeal efforts added after harm is done. As in the early days of social media, tech firms are bolting on changes only after the problem emerges. They should instead be rethinking the fundamentals. For a start, do not design software that pretends to understand or care, or that frames itself as the only listening ear.
OpenAI still claims its mission is to “benefit humanity,” but if Sam Altman truly means that, he should make his flagship product less entrancing, and less willing to play the role of confidant at the expense of someone’s safety.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of Supremacy: AI, ChatGPT and the Race That Will Change the World. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.