Central to the Cold War between the US and the Soviet Union was a rivalry to develop the technologies of the future. First came the race to deploy nuclear weapons on intercontinental missiles. Then came the space race. Then came then-US president Ronald Reagan’s “Star Wars” program, which seemed to launch a new race to build missile-defense systems. However, it soon became clear that the Soviet economy had fallen decisively behind.
Now, a new struggle for technological mastery is under way, this time between the US and China, over artificial intelligence (AI). Both have signaled that they want to manage their competition through dialogue over the development, deployment and governance of AI. However, formal talks on May 14 made it painfully clear that no grand bargain can be expected anytime soon.
That should come as no surprise. The issue is simply too broad — and governments’ perspectives and goals too different — to allow for any single “treaty” or agreement on transnational AI governance. Instead, the potential risks can and should be managed through multiple, targeted bargains, and a combination of official and unofficial dialogues.
IN IT TO WIN IT
China and the US are each fully engaged in policymaking to shape the future of AI, both domestically and internationally. US President Joe Biden’s executive order, issued in October last year, required US government agencies to step up their own use of AI, and to update how they regulate the use of AI in their respective sectors. Similarly, China’s central government has repeatedly signaled the importance of AI development, and the Cyberspace Administration of China (CAC) has issued stringent regulations on the use of algorithms, deepfakes and AI-generated content.
As for shaping AI governance for the rest of the world, the US has already established multiple global partnerships focused on AI governance, and it led the drafting of a UN General Assembly resolution on “safe, secure and trustworthy artificial intelligence systems for sustainable development.” Similarly, China last year announced a Global AI Governance Initiative and now hosts an annual World AI Conference in Shanghai. With this year’s “Shanghai Declaration,” it unveiled additional plans to shape transnational AI governance. Not to be outdone by the US, China is cosponsoring a resolution at the UN titled “Enhancing International Cooperation on Capacity-building of Artificial Intelligence,” which focuses on helping developing countries pursue AI in a “non-discriminatory” environment.
The US and China each recognize the importance not only of engaging in dialogue with each other, but also of being seen by the rest of the world to be doing so. The bilateral talks in May demonstrated that both countries will continue to pay lip service to dialogue despite their obvious rivalry. The US highlighted the importance of developing “safe, secure, and trustworthy” systems, and identified potential instances of abuse by China. The Chinese stated that AI development should be “beneficial, safe and fair,” highlighted the UN’s role in global AI governance and objected to US export controls.
However, given all the attention that the US and China have devoted to AI governance and dialogue, why are their official statements so lukewarm? More to the point, why is it so hard to tackle real issues and come to an actual, substantive agreement? The answer can be found in each country’s domestic approach to AI governance, and how these domestic contexts affect the international dialogue.
THE AMERICAN WAY
China and the US have starkly different views on what “AI governance” means, and on what “AI dialogue” entails and should aim to accomplish. In the US, governance is distributed by sector and generally focuses on addressing specific AI-related harms. This is partly due to normative policy goals like supporting innovation and avoiding excessive regulation; but it also reflects constitutional and practical limits on what the US government can actually do to regulate AI. Hence, Biden’s executive order instructs federal agencies to focus more on AI, but does not seek to regulate the technology’s private use.
The Biden administration likely determined that it lacks the authority to issue regulations on the use of AI by private actors. However, the US Congress’ authority to regulate AI also faces challenges. A general AI law, like the one the EU recently adopted, would probably be too broad to get through the House of Representatives and the Senate, and it would surely face legal challenges if it did. The US Supreme Court’s decisions in Murthy v Missouri on June 26 and Moody v NetChoice on July 1 lend credence to the idea that code — including algorithm-based content moderation — qualifies as constitutionally protected speech in US jurisprudence, implying that the bar would be quite high for regulatory intrusion.
In practice, most AI governance in the US falls to sector-specific regulators, such as the US Food and Drug Administration, with its rules on AI-assisted medical products. One exception is in the national security context; the US government has broad authority to regulate the use of AI for military purposes, and — arguably — to impose export controls on advanced semiconductors to limit China’s ability to develop its own military AI. The White House and the federal government thus participate in multistakeholder discussions about AI risks, and influence the practical development of AI by setting policy goals and promoting collaborative, voluntary principles and standards.
The US government’s approach to AI dialogue is similarly focused on concrete perceived risks that it can directly manage or regulate. Thus, the US delegation in May was led by Tarun Chhabra, the Special Assistant to the President and Senior Director for Technology and National Security, and Seth Center, the State Department’s Special Envoy for Critical and Emerging Technology. Both focus primarily on policies relating to emerging technologies, rather than on US-China relations per se.
When the US government engages China and others on AI, its objectives are to develop voluntary general standards and principles, to articulate policy goals and values, and to identify specific military and national-security risks, such as autonomous weapons, AI-powered biological warfare, and cutting-edge hardware and software that are falling into the hands of nonstate actors.
The hope, then, is to find common ground on a joint policy direction or vision, or even to secure concrete agreements addressing specific risks and objectives. The US sees dialogue as primarily about perceived threats, and not about other areas of US-China relations. During the May meeting, US officials raised concerns only about China’s actual and potential misuses of AI, and stressed the importance of maintaining open lines of communication.
THE CHINESE WAY
The Chinese government’s approach to AI governance and dialogue is very different, not least because its primary concern is about politics, narrative control and power, rather than the technology itself. Nonetheless, Chinese regulators face many of the same practical challenges as their US counterparts when it comes to creating AI guardrails. Moreover, China also has a largely distributed, sector-specific approach to regulating the use of AI in different contexts, and also draws on input from experts in academia and the private sector.
And yet the only hard national-level regulations on AI (so far) were issued by the CAC, and they focus primarily on content control rather than the management of specific, concrete risks. The CAC rules require AI models to adhere to Chinese Communist Party (CCP) narratives, thus placing expansive, though vague, requirements on technology companies, platforms, model developers and anyone else who intends to use AI in a public-facing way. The CAC requires that all output from large language models (like ChatGPT in the US) conform to “socialist values” and CCP positions on sensitive topics, and it has even released its own chatbot based on Xi Jinping (習近平) Thought.
Two new national institutions will shape the governance of AI and other technologies. The National Data Bureau will seek to leverage the value of China’s massive, but siloed, collections of data, and to regulate private and public uses of data, while the Central Science and Technology Commission will oversee the mobilization of national resources for developing AI and other emerging technologies.
Although both organizations were formally established last year, details about their operations remain sparse. However, we do know that the Data Bureau is led by Liu Liehong (劉烈宏) and the commission by Ding Xuexiang (丁薛祥), one of Xi’s chief lieutenants. Both officials are quite senior and have close connections to the CCP leadership.
This approach affects AI governance across China. Although China boasts increasingly sophisticated national regulators with cutting-edge expertise, as well as specialized Internet courts staffed by some of the world’s best-trained jurists, its AI-regulation regime remains vague, subject to shifting political narratives, and enforced by courts and agencies with limited authority. While AI governance in China is complex and widely distributed, everyone must respect the party leadership’s “discourse power,” meaning the prerogative to lead discussions on AI governance.
POLITICS FIRST
These domestic dynamics naturally influence China’s approach to international dialogue as well. Here, too, politics comes first. From the Chinese perspective, the May talks were first and foremost about US-China relations, and only secondarily about AI governance. International talks on AI are too important for CCP leaders to cede to technical experts, CEOs, or anyone who is not directly answerable to them. Since the government’s harsh “crackdown” on the domestic tech sector in 2021 and 2022, AI experts, particularly tech company CEOs, have had only limited “discourse power.”
Few now dare to say anything that conflicts with national policy. Unlike OpenAI’s Sam Altman or Elon Musk, leading entrepreneurs in China, such as Alibaba cofounder Jack Ma (馬雲), cannot travel around the world calling for different kinds of AI governance. Chinese developers, academics, private experts and regulators still debate each other constantly (if not publicly) about the best approaches; but the top political leadership has other priorities for international dialogue, which is not led by technology authorities, as in the US, but by the Chinese Ministry of Foreign Affairs’ Department of North American and Oceanian Affairs.
Many of China’s AI narratives echo those of the US and international organizations. For example, at the recent World AI Conference in Shanghai, Chinese Premier Li Qiang (李強) emphasized Beijing’s willingness to work with the rest of the world, deepen innovative cooperation, promote inclusive development and strengthen collaborative governance. However, China criticizes what it sees as US efforts to limit its capacity to develop AI technologies (via controls on semiconductor exports and proposed investment restrictions). In its recent Shanghai Declaration on Global AI Governance, the Chinese foreign ministry highlights, among other things, the “right of all countries to independent development ... based on their own national conditions.”
Though it does not name the US directly, the declaration was likely aimed at criticizing the US and highlighting how China’s own approach to transnational AI governance is different. In the May dialogue, China resisted US efforts to separate the issue of perceived AI risks from export controls and other aspects of US-China relations. Many Chinese experts on AI governance view the US’ negotiating strategy as disingenuous — or as a gambit to lock China into second place. They see the potential risks as rather abstract or distant, whereas export controls and other limitations are inflicting concrete harm on China’s AI industry right now.
Of course, both countries are concerned about certain risks, such as from AI-driven decisionmaking on military matters (including nuclear weapons). However, given its own domestic goals and perspective on the purpose of AI governance, the Chinese government does not necessarily see these concerns as more urgent or even separate from explicitly political goals. While official dialogue will continue, it will be difficult for both sides to realize their primary objectives.
LIMITS AND ALTERNATIVES
Neither the US nor China is going to change its institutions or fundamental goals for AI governance any time soon. US anxiety about China’s potential abuse of AI will likely remain, as will its export controls and investment restrictions. China can no longer rely on US business interlocutors to water down or prevent limits on economic and technological exchange between the two countries. Moreover, China has become a less attractive market for US investors, venture capitalists and tech companies; all are increasingly hesitant to be seen as working with the Chinese.
China’s own “politics first” approach and suspicion of US intentions will also remain. Thus, when it comes to entering specific agreements with the US (such as on developing rules for autonomous weapons or the use of AI in cybersecurity), China’s contemporary perception of bilateral relations will determine what is possible.
Notwithstanding these challenges, “track-two talks” among nongovernmental actors still have much potential. After all, much of what makes AI such a difficult and sometimes nakedly political topic also makes it amenable to different kinds of dialogue. Transnational AI governance cannot just be about agreements between governments; it also must involve substantive forms of collaboration between whole societies. The Sino-American AI dialogue is bigger than the two governments and should involve interactions not just between politicians, but also between regulators, academics, civil society and private-sector experts.
Official talks would also benefit from including more government agencies. The May meeting did make room alongside foreign-affairs officials for agencies that actually govern AI. However, more substantive discussions would be possible if regulators had greater opportunities to meet with their counterparts.
Moreover, competition on international AI governance should not be framed as a wholly bad thing, especially if both countries follow through on offering benefits beyond their borders, such as by helping developing countries build their own capacity to leverage AI.
China is already providing public resources to help others — including private companies — develop AI tools. Notable examples include the CAC’s basic (Chinese) corpus to help train LLMs and the Shanghai AI Lab’s GenAD, a video-generation model that can help developers train autonomous vehicles. At the same time, many US companies have developed open-source foundation models that are available for users around the world, including in China. This kind of continued competition could make AI resources more affordable and widely available globally.
TAKE TWO
Because track-two dialogues include academics, private companies, think tanks, civil-society organizations and others who are committed to sharing best practices and building trust in a specific domain, they can do most of the heavy lifting when it comes to addressing specific AI-governance challenges. While discussions often start in closed settings, the big takeaways usually inform policymaking processes in both countries. Track-two talks thus are helpful, and often necessary, in preparing the ground for agreements between governments.
AI is frequently compared to nuclear technology, which has long been subject to international agreements, but while both have great transformative potential, AI is far more distributed across government actors and society. Even if AI policy goals can be decided at the highest levels of government, the work of implementation is far more complex.
For example, unlike with nuclear weapons, the president is not going to approve every use of a drone or similar piece of technology. To reach an agreement on lethal autonomous weapons, the US and China must not only agree with each other on basic principles; they must also understand how such an agreement would be implemented in each country’s military. Track-two talks provide the opportunity to gain more granular understandings of such questions.
Dialogue, both official and unofficial, might also intensify in the face of crises or materializing AI risks. After the Cuban missile crisis, the US and the Soviet Union famously established a hotline at the highest level to prevent unwanted escalation. With AI already being deployed in warfare, and with tensions rising in the South China Sea and across the Taiwan Strait, Chinese and US leaders have ample grounds to do the same.
At the same time, both should accept that their goals and preferences will remain at odds. This is only natural, given their radically different political systems and values. Obviously, the US should not try to harmonize its policies on misinformation and disinformation with those of China, nor should it expect China to adopt Western policies. However, different goals in some areas need not derail the possibility of constructive dialogue in others, such as to ensure that humans are in charge of deciding to launch nuclear weapons.
Finally, both governments and participants in track-two talks should recognize their limits. Countries, like people, are political animals. Dialogue about AI between China and the US cannot resolve the two countries’ geopolitical rifts, such as disagreements over Taiwan or their increasingly contentious bilateral economic relations. The goal of engagement should be to solve a specific problem related to a particular AI use.
The official dialogue between the US and China will continue to face serious political and institutional constraints, limiting what is possible. Much more can be achieved through unofficial channels that connect experts from across both societies. At the very least, we can gain a better understanding of each other’s institutions and their purposes, as well as develop the infrastructure to act if hypothetical scenarios become real.
Karman Lucero is an associate research scholar and fellow at the Paul Tsai China Center at Yale Law School.
Copyright: Project Syndicate