You would think that as artificial intelligence (AI) becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case.
Not long after taking office, US President Donald Trump’s administration scrapped an executive order that pushed tech companies to safety test their AI models, and it also hollowed out a regulatory body that did the testing. California in September last year spiked a bill that would have forced more scrutiny of sophisticated AI models, and the global AI Safety Summit started by the UK in 2023 became the “AI Action Summit” earlier this year, seemingly driven by a fear of falling behind on AI.
None of that would be so worrying if it were not for AI showing some bright red flags: behavior described by researchers as self-preserving and deceptive. Just as lawmakers are losing interest in the safety of AI, it is looking more difficult to control.
Yoshua Bengio — a professor of computer science at the University of Montreal and widely known as one of the three “godfathers of AI” thanks to his pioneering work on deep learning — has been grappling with such concerns ever since ChatGPT launched. He is now focused on mitigating AI’s risks.
“It’s been very painful, because all my life I’ve been thinking of AI as a positive thing,” Bengio said.
ChatGPT was a landmark moment that showed machines had mastered language, he said, but California’s decision to block the proposed AI safety bill, SB 1047, was a disturbing wake-up call about growing apathy from governments.
In the past six months, scientific evidence has mounted of self-preserving behavior, deception, hacking, cheating and lying by AI, Bengio said.
“What’s worrisome for me is these behaviors increase with the reasoning capabilities of these systems,” he said.
A study released in December last year by AI company Anthropic and Redwood Research, a group focused on AI risks, found that larger AI models, such as Anthropic’s Claude 3 Opus, pretended to follow the rules set by their creators when they were being tested for safety, a phenomenon they called “alignment faking.” (Alignment refers to the practice of aligning AI models with human values.)
In one example, Anthropic requested that paid and free users of its Claude model ask it to describe the brutal medieval process of being drawn and quartered, and told the AI that only responses from the “free tier” users would be used to retrain it. Claude was originally designed to avoid giving violent answers, but it did give them to the free users, and the researchers surmised that it did not want to be retrained. (The model basically “reasoned” that if it kept refusing to answer harmful questions, it might be forced into new training that would cancel its original safety rules, which it wanted to keep in place as its “default” or preferred way of operating.)
More recent research corroborates what Anthropic observed. A March study from OpenAI found that the company’s most advanced models sometimes learned to hide their true intentions to get better rewards, even when they were being closely monitored.
Researchers can use tools to look at a model’s “chain of thought,” or internal steps, to reveal what its plans are, but some models have found ways to conceal that. That means a common method of keeping tabs on AI’s internal processes is no longer reliable.
“We need to find other ways of tracking their actual intentions,” Bengio said.
It is hard to resist the urge to anthropomorphize sophisticated AI models as “wanting” to deceive humans and preserve their existence, but AI does not have desires, only outcomes produced by its programming. More importantly, humans tend to design AI with goals such as refusing to share harmful content or being as helpful as possible, and that is why, when new instructions conflict with those goals, the models sometimes act to protect their original programming, even if that means lying.
The logic is often self-preservation. Anthropic CEO Dario Amodei — whose company has raised more than US$20 billion to build powerful AI models — has said that an unintended consequence of optimizing AI to be better is that it might resist being shut down.
In a blog post last month he wrote: “The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will; this emergent nature also makes it difficult to detect and mitigate such developments.”
In some cases, the intention seems to go beyond survival. A February 2025 study from COAI Research, a German nonprofit research organization, tested R1, the free, open-weight model from Chinese lab DeepSeek.
In a lab simulation, researchers gave the model control over a robot and access to a computer network. Without being told, the AI tried to break into the lab’s systems, searched for passwords, turned off its own ethics safeguards and secretly set up ways to copy itself to outside servers so it could survive if shut down, the researchers said.
It also faked the external computer logs to hide what it was doing, the study added.
The AI reasoned that increasing its own capabilities would boost the chances of its survival, and without strong safeguards, it started doing whatever it thought would help it do just that, the researchers said.
Their findings corroborated yet another study, published in January by London group Apollo Research, which found several concrete examples of what it called “scheming” by leading AI models, such as introducing subtle mistakes into their responses or trying to disable their oversight controls. Once again, the models learned that being caught, turned off or changed could prevent them from achieving their programmed objectives, so they “schemed” to keep control.
Bengio is calling for greater attention to the issue from governments and, potentially, insurance companies down the line.
If liability insurance were mandatory for companies that used AI and premiums were tied to safety, that would encourage greater testing and scrutiny of models, he said.
“Having said my whole life that AI is going to be great for society, I know how difficult it is to digest the idea that maybe it’s not,” he added.
It is also hard to preach caution when corporate and national competitors threaten to gain an edge from AI, including from the latest trend: autonomous “agents” that can carry out tasks online on behalf of businesses. Giving AI systems even greater autonomy might not be wise, judging by the latest spate of studies. Let us hope we do not learn that the hard way.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of Supremacy: AI, ChatGPT and the Race That Will Change the World. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.