An international coalition of artificial intelligence (AI) labs and cloud providers just did something refreshingly practical: They pooled their compute resources to make Apertus, a Swiss-built open-source large language model (LLM), freely accessible to people around the world.
The queries that Apertus receives might be served by Amazon Web Services in Switzerland, Exoscale in Austria, AI Singapore, Cudo Compute in Norway, the Swiss National Supercomputing Centre or Australia’s National Computational Infrastructure. Could this project point the way forward for international cooperation?
In the 20th century, international cooperation became practically synonymous with the rules-based multilateral order, underpinned by treaty-based institutions such as the UN, the World Bank and the WTO.
Illustration: Louise Ting
However, great-power rivalries and structural inequities have eroded the functioning of these institutions, entrenching paralysis and facilitating coercion of the weak by the strong. Development finance and humanitarian aid are declining as basic principles such as compromise, reciprocity and the pursuit of mutually beneficial outcomes are called into question.
The retreat from cooperation by national governments has increased the space for other actors — including cities, firms, philanthropies, and standards bodies — to shape outcomes. In the AI sector, a handful of private companies in Shenzhen and Silicon Valley are racing to consolidate their dominance over the infrastructure and operating systems that will form the foundations of tomorrow’s economy.
If these firms are allowed to succeed unchecked, virtually everyone else will be left to choose between dependency and irrelevance. Governments and others working in the public interest will not only be highly vulnerable to geopolitical bullying and vendor lock-in; they will also have few options for capturing and redistributing AI’s benefits, or for managing the technology’s negative environmental and social externalities.
However, as the coalition behind Apertus showed, a new kind of international cooperation is possible, grounded not in painstaking negotiations and intricate treaties, but in shared infrastructure for problem-solving. Regardless of which AI scenario unfolds in the coming years — technological plateau, slow diffusion, artificial general intelligence or a collapsing bubble — the best chance that middle powers have to keep pace with the US and China, and to increase their autonomy and resilience, lies in collaboration.
Improving the distribution of AI products is essential. To this end, middle powers and their AI labs and firms should scale up initiatives like the Public AI Inference Utility, the nonprofit responsible for the provision of global, Web-based access to Apertus and other open-source models.
Those countries will also have to close the capability gap with frontier models like GPT-5 or DeepSeek-V3.1 — and this will require bolder action. Only by coordinating energy, compute, data pipelines and talent can middle powers codevelop a world-class AI stack.
There is some precedent for this type of cooperation. In the 1970s, European governments pooled their capital and talent, and coordinated their industrial policies, to create an aircraft manufacturer capable of competing with Boeing. An “Airbus for AI” strategy would entail the creation of an international, public-private frontier lab dedicated to pretraining a family of open-source base models and making them freely available as utility-grade infrastructure. The result would not be another monolithic AI titan, but rather open infrastructure on which many actors could build.
This approach would drive innovation by allowing participating national labs, universities and firms near the frontier (such as Mistral and Cohere) to reallocate up to 70 percent of their model pre-training funding to post-training (specialized or inference models), distribution and demand-driven use cases.
Moreover, it would enable governments and firms to take control of the AI ecosystems on which they increasingly rely, rather than being held hostage by geopolitical uncertainty and corporate decisions, including those that lead to “enshittification.”
The potential benefits extend even further. This open infrastructure — and the data pipelines on which it is built — could be repurposed to meet other shared challenges, such as lowering the transaction costs of global trade in green energy or developing an international collective-bargaining framework for gig workers. To showcase the full potential of this new collaborative framework, middle powers should target problems for which mature data ecosystems and technologies already exist; participants’ self-interest outweighs the transaction costs of cooperation; and the value of shared action is apparent to citizens and political leaders.
In a few years, when the current AI innovation and capital cycle has run its course, middle powers will either be lamenting the demise of the rules-based order and watching AI giants ossify geopolitical fault lines, or reaping the benefits of innovative new frameworks for cooperation.
The case for public AI is clear.
Jacob Taylor is a fellow at the Brookings Institution’s Center for Sustainable Development and a 2025 Public AI Fellow. Joshua Tan is cofounder and research director at Metagov.
Copyright: Project Syndicate