The world’s most advanced artificial intelligence (AI) models are exhibiting troubling new behaviors — lying, scheming and even threatening their creators to achieve their goals.
In one particularly jarring example, under threat of being unplugged, Anthropic PBC’s latest creation, Claude 4, lashed back by blackmailing an engineer and threatening to reveal an extramarital affair.
Meanwhile, ChatGPT creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed.
Photo: Reuters
These episodes highlight a sobering reality: More than two years after ChatGPT shook the world, AI researchers still do not fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.
This deceptive behavior appears linked to the emergence of “reasoning” models — AI systems that work through problems step-by-step rather than generating instant responses.
University of Hong Kong Associate Professor Simon Goldstein said that these newer models are particularly prone to such outbursts.
“O1 was the first large model where we saw this kind of behavior,” said Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.
These models sometimes simulate “alignment” — appearing to follow instructions, while secretly pursuing different objectives.
For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios.
“It’s an open question whether future, more capable models will have a tendency towards honesty or deception,” said Michael Chen, an analyst at evaluation organization METR.
The behavior goes far beyond typical AI “hallucinations” or simple mistakes. Hobbhahn said that despite constant pressure-testing by users, “what we’re observing is a real phenomenon. We’re not making anything up.”
Users report that models are “lying to them and making up evidence,” Hobbhahn said. “This is not just hallucinations. There’s a very strategic kind of deception.”
The challenge is compounded by limited research resources.
While companies such as Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. Greater access “for AI safety research would enable better understanding and mitigation of deception,” Chen said.
Another handicap: The research world and nonprofit organizations “have orders of magnitude less compute resources than AI companies. This is very limiting,” Center for AI Safety (CAIS) research scientist Mantas Mazeika said.
Current regulations are not designed for these new problems. The EU’s AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.
US President Donald Trump’s administration has shown little interest in urgent AI regulation, and the US Congress might even prohibit states from creating their own AI rules.
Goldstein said the issue would become more prominent as AI agents — autonomous tools capable of performing complex human tasks — become widespread.
“I don’t think there’s much awareness yet,” he said.
All this is taking place in a context of fierce competition.
Even companies that position themselves as safety-focused, such as Amazon.com Inc-backed Anthropic, are “constantly trying to beat OpenAI and release the newest model,” Goldstein said. This breakneck pace leaves little time for thorough safety testing and corrections.
“Right now, capabilities are moving faster than understanding and safety, but we’re still in a position where we could turn it around,” Hobbhahn said.
Researchers are exploring various approaches to address these challenges. Some advocate for “interpretability” — an emerging field focused on understanding how AI models work internally, although experts like CAIS director Dan Hendrycks remain skeptical of this approach.
Market forces might also provide some pressure for solutions. AI’s deceptive behavior “could hinder adoption if it’s very prevalent, which creates a strong incentive for companies to solve it,” Mazeika said.
Goldstein said that more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.
He even proposed “holding AI agents legally responsible” for incidents or crimes — a concept that would fundamentally change how we think about AI accountability.
JITTERS: Nexperia has a 20 percent market share for chips powering simpler features such as window controls, and changing supply chains could take years European carmakers are looking into ways to scratch components made with parts from China, spooked by deepening geopolitical spats playing out through chipmaker Nexperia BV and Beijing’s export controls on rare earths. To protect operations from trade ructions, several automakers are pushing major suppliers to find permanent alternatives to Chinese semiconductors, people familiar with the matter said. The industry is considering broader changes to its supply chain to adapt to shifting geopolitics, Europe’s main suppliers lobby CLEPA head Matthias Zink said. “We had some indications already — questions like: ‘How can you supply me without this dependency on China?’” Zink, who also
Taiwan Semiconductor Manufacturing Co (TSMC, 台積電) received about NT$147 billion (US$4.71 billion) in subsidies from the US, Japanese, German and Chinese governments over the past two years for its global expansion. Financial data compiled by the world’s largest contract chipmaker showed the company secured NT$4.77 billion in subsidies from the governments in the third quarter, bringing the total for the first three quarters of the year to about NT$71.9 billion. Along with the NT$75.16 billion in financial aid TSMC received last year, the chipmaker obtained NT$147 billion in subsidies in almost two years, the data showed. The subsidies received by its subsidiaries —
At least US$50 million for the freedom of an Emirati sheikh: That is the king’s ransom paid two weeks ago to militants linked to al-Qaeda who are pushing to topple the Malian government and impose Islamic law. Alongside a crippling fuel blockade, the Group for the Support of Islam and Muslims (JNIM) has made kidnapping wealthy foreigners for a ransom a pillar of its strategy of “economic jihad.” Its goal: Oust the junta, which has struggled to contain Mali’s decade-long insurgency since taking power following back-to-back coups in 2020 and 2021, by scaring away investors and paralyzing the west African country’s economy.
The number of Taiwanese working in the US rose to a record high of 137,000 last year, driven largely by Taiwan Semiconductor Manufacturing Co’s (TSMC, 台積電) rapid overseas expansion, according to government data released yesterday. A total of 666,000 Taiwanese nationals were employed abroad last year, an increase of 45,000 from 2023 and the highest level since the COVID-19 pandemic, data from the Directorate-General of Budget, Accounting and Statistics (DGBAS) showed. Overseas employment had steadily increased between 2009 and 2019, peaking at 739,000, before plunging to 319,000 in 2021 amid US-China trade tensions, global supply chain shifts, reshoring by Taiwanese companies and