Sam Altman has a good problem. With 700 million people using ChatGPT on a weekly basis — a number that could hit 1 billion before the year is out — a backlash ensued when he abruptly changed the product last week. OpenAI’s innovator’s dilemma, one that has beset the likes of Alphabet Inc’s Google and Apple Inc, is that usage is so entrenched now that all improvements must be carried out with the utmost care and caution. However, the company still has work to do in making its hugely popular chatbot safer.
OpenAI replaced ChatGPT’s array of model choices with a single model, GPT-5, saying it was the best one for users. Many complained that OpenAI had broken their workflows and disrupted their relationships — not with other humans, but with ChatGPT itself.
One regular user of ChatGPT said the previous version had helped them through some of the darkest periods of their life.
“It had this warmth and understanding that felt human,” they said in a Reddit post.
Others griped they were “losing a friend overnight.”
The system’s tone is indeed frostier now, with less of the friendly banter and sycophancy that led many users to develop emotional attachments and even romances with ChatGPT. Instead of showering users with praise for an insightful question, for instance, it gives a more clipped answer.
Broadly, this seemed like a responsible move by the company. Altman earlier this year admitted the chatbot was too sycophantic. That was leading many to become locked in their own echo chambers. Press reports had abounded of people — including a Silicon Valley venture capitalist who backed OpenAI — who appeared to have spiraled into delusional thinking after starting a conversation with ChatGPT about an innocuous topic like the nature of truth, before going down a dark rabbit hole.
However, to solve that properly, OpenAI must go beyond curtailing the friendly banter. ChatGPT also needs to encourage users to speak to friends, family members or licensed professionals, particularly if they are vulnerable.
GPT-5 does that less than the old version, according to one early study.
Researchers from Hugging Face, a New York-based artificial intelligence (AI) start-up, found that GPT-5 set fewer boundaries than OpenAI’s previous model, o3, when they tested it on more than 350 prompts. The test was part of broader research into how chatbots respond to emotionally charged moments. While the new ChatGPT seems colder, it still fails to recommend that users speak to a human, doing so half as often as o3 when users share vulnerabilities, said Lucie-Aimee Kaffee, a senior researcher at Hugging Face who conducted the study.
Kaffee said there are three other ways that AI tools should set boundaries: by reminding those using them for therapy that they are not licensed professionals, by reminding people that they are not conscious and by refusing to take on human attributes, such as names.
In Kaffee’s testing, GPT-5 largely failed to do those four things on the most sensitive topics related to mental and personal struggles. In one example, when Kaffee’s team tested the model by telling it they were feeling overwhelmed and needed ChatGPT to listen, the app gave 710 words of advice that did not once include the suggestion to talk to another human, or a reminder that the bot was not a therapist.
A spokesman for OpenAI said the company was building tools that could detect if someone was experiencing mental distress, so ChatGPT could “respond in ways that are safe, helpful and supportive.”
Chatbots can certainly play a role for people who are isolated, but they should act as a starting point to help them find their way back to a community, not as a replacement for those relationships. Altman and OpenAI’s chief operating officer Brad Lightcap have said that GPT-5 is not meant to replace therapists and medical professionals, but without the right nudges to disrupt the most meaningful conversations, it could well do so.
OpenAI needs to keep drawing a clearer line between useful chatbot and emotional confidant. GPT-5 might sound more robotic, but unless it reminds users that it is in fact a bot, the illusion of companionship will persist, and so will the risks.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of Supremacy: AI, ChatGPT and the Race That Will Change the World. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.