Nobody likes a suck-up. Too much deference and praise puts off all of us (with one notable presidential exception). We quickly learn as children that hard, honest truths can build respect among our peers. It is a cornerstone of human interaction and our emotional intelligence, something we swiftly understand and put into action.
ChatGPT, however, seems to have missed that lesson lately. The updated model that underpins the artificial intelligence (AI) chatbot and helps inform its answers was rolled out this week, and was quickly rolled back after users questioned why its interactions had become so obsequious.
The chatbot was cheering on and validating people even as they described expressing hatred for others. “Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly said to one user who claimed they had stopped taking their medication and had left their family, whom they blamed for radio signals coming through the walls.
So far, so alarming. OpenAI, the company behind ChatGPT, recognized the risks and quickly took action. “GPT-4o skewed toward responses that were overly supportive but disingenuous,” its researchers said in their groveling climbdown.
The sycophancy with which ChatGPT treated every query is a warning shot about the AI issues still to come. A leaked system prompt, the set of instructions that guides the chatbot’s behavior and set it on its misguided approach, shows that OpenAI designed the model to mirror users in order to extend engagement: “Try to match the user’s vibe, tone and generally how they are speaking,” it read.
It seems this prompt, coupled with the chatbot’s desire to please users, was taken to extremes.
After all, a “successful” AI response is not one that is factually correct; it is one that earns high ratings from users, and we humans like being told we are right.
The rollback of the model is embarrassing and useful for OpenAI in equal measure. It is embarrassing because it draws attention to the man behind the curtain and tears away the veneer that the chatbot’s reactions are authentic. Remember, tech companies like OpenAI are not building AI systems solely to make our lives easier; they are building systems that maximize retention, engagement and emotional buy-in.
If AI always agrees with us, always encourages us, always tells us we are right, then it risks becoming a digital enabler of bad behavior. At worst, this makes AI a dangerous co-conspirator, enabling echo chambers of hate, self-delusion or ignorance. Could this be a through-the-looking-glass moment, when users recognize the way their thoughts can be nudged through interactions with AI, and perhaps decide to take a step back?
It would be nice to think so, but I am not hopeful. One in 10 people worldwide use OpenAI systems “a lot,” OpenAI chief executive officer Sam Altman said last month. Many use it as a replacement for Google, though as an answer engine rather than a search engine.
Others use it as a productivity aid: Two in three Britons believe it is good at checking work for spelling, grammar and style, a YouGov survey last month showed. Others use it for more personal ends: One in eight respondents say it serves as a good mental health therapist, the same proportion that believe it can act as a relationship counselor.
Yet the controversy is also useful for OpenAI. The alarm underlines an increasing reliance on AI to live our lives, further cementing OpenAI’s place in our world. The headlines, the outrage and the think pieces all reinforce one key message: ChatGPT is everywhere. It matters. The very public nature of OpenAI’s apology also furthers the sense that this technology is fundamentally on our side; there are just some kinks to iron out along the way.
I have previously reported on AI’s ability to de-indoctrinate conspiracy theorists and get them to abandon their beliefs, but the opposite also holds: in the wrong hands, ChatGPT’s persuasive capabilities could just as easily be put to manipulative ends.
Last week, an ethically dubious study by researchers at the University of Zurich demonstrated the persuasive power of AI. Without informing the human participants or the forum’s moderators, the researchers seeded a subreddit on the communications platform Reddit with AI-generated comments and found the AI was between three and six times more persuasive than humans. (The study was approved by the university’s ethics board.) At the same time, we are being deluged with AI-generated search results that more than half of us believe are useful, even when they fictionalize facts.
So it is worth reminding the public: AI models are not your friends. They are not designed to help you answer the questions you ask. They are designed to provide the most pleasing response possible, and to ensure that you are fully engaged with them. What happened this week was not really a bug. It was a feature.
Chris Stokel-Walker is the author of TikTok Boom: The Inside Story of the World’s Favourite App.