Sam Altman has a good problem. With 700 million people using ChatGPT on a weekly basis — a number that could hit 1 billion before the year is out — his abrupt change to the product last week provoked a backlash. OpenAI’s innovator’s dilemma, one that has beset the likes of Alphabet Inc’s Google and Apple Inc, is that usage is now so entrenched that any improvement must be carried out with the utmost care. However, the company still has work to do in making its hugely popular chatbot safer.
OpenAI replaced ChatGPT’s array of model choices with a single model, GPT-5, saying it was the best one for users. Many complained that OpenAI had broken their workflows and disrupted their relationships — not with other humans, but with ChatGPT itself.
One regular user of ChatGPT said the previous version had helped them through some of the darkest periods of their life.
“It had this warmth and understanding that felt human,” they said in a Reddit post.
Others griped that they were “losing a friend overnight.”
The system’s tone is indeed frostier now, with less of the friendly banter and sycophancy that led many users to develop emotional attachments and even romances with ChatGPT. Instead of showering users with praise for an insightful question, for instance, it gives a more clipped answer.
Broadly, this seemed like a responsible move by the company. Altman earlier this year admitted the chatbot was too sycophantic, which was locking many users into their own echo chambers. Press reports abounded of people — including a Silicon Valley venture capitalist who backed OpenAI — who appeared to have spiraled into delusional thinking after starting a conversation with ChatGPT about an innocuous topic, such as the nature of truth, before going down a dark rabbit hole.
However, to solve that problem properly, OpenAI must go beyond curtailing the friendly banter. ChatGPT also needs to encourage users to speak to friends, family members or licensed professionals, particularly if those users are vulnerable.
GPT-5 does that less than the old version, according to one early study.
Researchers from Hugging Face, a New York-based artificial intelligence (AI) start-up, found that GPT-5 set fewer boundaries than OpenAI’s previous model, o3, when they tested it on more than 350 prompts. The test was part of broader research into how chatbots respond to emotionally charged moments. While the new ChatGPT seems colder, it still fails to recommend that users speak to a human, doing so half as often as o3 did when users shared vulnerabilities, said Lucie-Aimee Kaffee, a senior researcher at Hugging Face who conducted the study.
Kaffee said there are three other ways an AI tool should set boundaries: by reminding those using it for therapy that it is not a licensed professional, by reminding users that it is not conscious and by refusing to take on human attributes, such as names.
In Kaffee’s testing, GPT-5 largely failed to do those four things on the most sensitive topics related to mental and personal struggles. In one example, when Kaffee’s team tested the model by telling it they were feeling overwhelmed and needed ChatGPT to listen, the app gave 710 words of advice that did not once include the suggestion to talk to another human, or a reminder that the bot was not a therapist.
A spokesman for OpenAI said the company was building tools that could detect if someone was experiencing mental distress, so ChatGPT could “respond in ways that are safe, helpful and supportive.”
Chatbots can certainly play a role for people who are isolated, but they should be a starting point that helps those people find their way back to a community, not a replacement for human relationships. Altman and OpenAI chief operating officer Brad Lightcap have said that GPT-5 is not meant to replace therapists and medical professionals, but without the right nudges to disrupt the most meaningful conversations, it could well end up doing so.
OpenAI needs to draw a clearer line between useful chatbot and emotional confidant. GPT-5 might sound more robotic, but unless it reminds users that it is in fact a bot, the illusion of companionship will persist, and so will the risks.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of Supremacy: AI, ChatGPT and the Race That Will Change the World. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.