Paris played host to representatives from more than 100 countries to discuss the future of artificial intelligence (AI) last week. The result was a vague agreement signed by 60 of them that does almost nothing to help make the technology safer.
The clue was in the name. The international meeting series, founded in the UK as an “AI Safety Summit” in 2023, became known as an “AI Action Summit” when it came to Paris. French President Emmanuel Macron used it as a springboard to announce a 109 billion euro (US$114.4 billion) investment in AI and make a pitch to the world for French tech.
You had to squint to find anything safety related in all the platitudes. The final 799-word statement focused more on the economic opportunities of AI than on advancing measures that had been established at previous summits in the UK and South Korea.
While the Bletchley Park and Seoul gatherings secured specific commitments from large AI firms to test their systems with a newly established international network of safety institutes, the Paris statement settles for hazy goals such as making AI “trustworthy.”
What is mind-boggling is that this still managed to be too onerous for the US, whose vice president, JD Vance, complained that “excessive regulation of the AI sector could kill a transformative industry just as it’s taking off.”
First, the agreement was hardly excessive. Second, the AI industry in the US is not just “taking off.” Its companies are the standard-bearers, with Nvidia holding an effective monopoly on chips for training and inference and Microsoft and Alphabet controlling much of the cloud infrastructure and most popular AI models.
However, the US refused to sign the agreement anyway, likely due to increasing paranoia about China. Last month, a little-known Chinese firm called DeepSeek shot to the top of the app charts with an AI model that rivaled the latest version of ChatGPT — cheaper to build and free for anyone to use and copy, which you can be sure Silicon Valley engineers are doing right now.
Even stranger, the UK also declined to sign the statement for what seemed to be the opposite reason.
“We felt the declaration didn’t provide enough practical clarity on global governance,” a British government spokesperson said.
That sounds more sensible. As a reminder: AI is developing at an unprecedented pace, and its systems are on course to make high-stakes decisions about healthcare, law enforcement and finance without clear guardrails. Ambiguous pledges are “a step backwards for international and technical collaboration,” said Max Tegmark, a physics professor at the Massachusetts Institute of Technology and co-founder of the Future of Life Institute who has been one of the leading voices advocating for AI safety measures.
The summit should have at least addressed the security concerns raised by last month’s International AI Safety Report, signed by more than 150 experts, he said.
It also should have turned the “voluntary commitments” from the Bletchley Park and Seoul summits into requirements for AI companies to run safety tests before deploying new systems to the public — and to share the results.
It also would have been good to see a deadline for creating binding international laws, as well as clearer thresholds for when AI systems reach certain capability levels (in reasoning or speed, for instance) to trigger further audits.
We should not have to wait for a calamity to occur before governments wake up to the risks of such transformative technology. Let us not repeat the delayed response to traffic safety, for instance, where it took thousands of deaths before seatbelts became mandatory in the 1960s.
A recent anecdote by the Washington Post’s Geoffrey Fowler highlights how things could go awry. The writer left a new “agentic” version of ChatGPT alone on his computer with access to his credit card. After asking it to find the cheapest eggs, the bot went on Instacart and bought a dozen at a high price, racking up fees and a tip for US$31.
“It went rogue,” Fowler wrote.
That might not be as bad as AI wreaking havoc on our electricity grids or financial markets, but the example shows that such errors in these systems carry a cost and can come out of nowhere, even as businesses and governments race to plug them in.
Vance’s call to prioritize “pro-growth” policies over safety would sound ludicrous if the topic were healthcare, aviation or social media. AI should be no different. Governments must be bolder about their role as proactive regulators and talk up the value of standards, rights and protections. It might not win them many points with businesses inside and outside the tech industry, but it is far better than standing by until disaster strikes. The next summit in Kigali, Rwanda, needs to establish more concrete oversight before AI’s mistakes scale up from unauthorized grocery purchases to something worse.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of Supremacy: AI, ChatGPT and the Race That Will Change the World. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.