In November, the UK is to host a high-profile international summit on the governance of artificial intelligence (AI). With the agenda and list of invitees still being finalized, the biggest decision UK officials face is whether to invite China or host a more exclusive gathering for the G7 and other countries that want to safeguard liberal democracy as the foundation for a digital society.
The trade-off is obvious: Any global approach to AI governance that excludes China is likely to have only a limited impact, but China’s presence would inevitably change the agenda. The summit would then be unable to address governments’ use of AI for domestic surveillance — or any other controversial issue of concern to democracies.
Whatever the agenda, the summit is a prudent response to rapid and dramatic advances in AI that present both unprecedented opportunities and challenges. World leaders are eager not to miss out on a technological revolution that could — ideally — help them expand their economies and address global challenges.
AI undoubtedly has the potential to improve individuals’ productivity and drive social progress. It could lead to important advances in education, medicine, agriculture and many other fields critical to human development. It also will be a source of geopolitical and military power, conferring a significant strategic advantage on countries that gain a lead in its development.
Yet AI also poses societal challenges and risks — hence the growing chorus demanding that governments step in and regulate it. Among other things, AI is expected to transform labor markets in ways that will make many workers redundant and some far more productive, widening existing inequalities and eroding social cohesion. It also will be weaponized by bad actors to commit fraud, deceive people and spread disinformation.
When used in the context of elections, AI could compromise citizens’ political autonomy and undermine democracy. As a powerful tool for surveillance purposes, it threatens to undermine individuals’ fundamental rights and civil liberties.
While the above risks are all but certain to materialize, others are more speculative yet potentially catastrophic. Most notably, some commentators warn that AI could spin out of control and pose an existential threat to humanity.
No model to rule them all
With an eye toward seizing AI’s unprecedented opportunities while managing its potentially serious risks, divergent approaches for regulating the sector are emerging. Hesitant to interfere in the development of a disruptive technology that is critical in its economic, geopolitical and military competition with China, the US is relying on voluntary guidance and self-regulation by tech companies.
In contrast, the EU is adamant that AI governance not be left to tech companies; instead, digital regulation must be grounded in the rule of law and subject to democratic oversight. Adding to its existing cache of digital regulations, the EU is in the final stages of adopting comprehensive, binding AI regulations that focus on protecting individuals’ fundamental rights, including their right to privacy and non-discrimination.
China is also pursuing ambitious AI regulation, but with authoritarian characteristics. The authorities seek to support AI development without undermining censorship and jeopardizing the Chinese Communist Party’s (CCP) monopoly on political power. Yet this implies a trade-off, because to maintain social stability, the CCP must restrict content that could be used to train the large language models behind generative AI.
The US, the EU and China thus offer competing models of AI regulation. As the world’s leading technological, economic and regulatory powers, they are “digital empires”: each not only regulating its domestic markets but also exporting its regulatory model and aiming to shape the global digital order in its own interests. Some governments may align their regulatory stance with the US’ market-driven approach, opting for light-touch regulation; others may side with the EU’s rights-driven approach, pursuing binding legislation that sets constraints on AI development; some authoritarian countries will look to China, emulating its state-focused regulatory model.
Most countries, however, are likely to straddle the three approaches, selectively adopting elements of each. That means no single blueprint for AI governance worldwide will emerge.
Case for cooperation
Although regulatory divergence seems inevitable, there is a glaring need for international coordination, as AI presents challenges that no government alone can manage. A closer alignment of regulatory approaches would help all governments maximize the technology’s potential benefits and minimize risks.
If every government develops its own regulatory framework, the resulting fragmentation will hamper AI development. After all, navigating conflicting regulatory regimes adds to companies’ costs, breeds uncertainty and undermines projected gains. Consistent and predictable standards across markets will foster innovation, reward AI developers and benefit consumers.
Moreover, an international agreement could help distribute these projected gains more equally across countries. AI development is currently concentrated in a handful of (mostly) developed economies that are poised to emerge as the clear winners in the global AI race. At the same time, most other countries’ ability to take advantage of AI is limited. International cooperation is needed to democratize access and mitigate fears that AI will benefit only a subset of wealthy countries and leave the Global South further behind.
International coordination could also help governments manage cross-border risks and prevent a race to the bottom. Absent such coordination, some actors will exploit regulatory gaps in some markets, offsetting the benefits of well-designed guardrails elsewhere. To prevent regulatory arbitrage, countries with better regulatory capacities would need to offer technical assistance to countries lacking it. In practice, this would entail pooling resources to identify and evaluate AI-related risks, disseminating technical knowledge about those risks and helping countries develop regulatory responses to them.
Perhaps most importantly, international cooperation could contain the costly and dangerous AI arms race before it destabilizes the global order or precipitates a military conflict. Absent a joint agreement establishing rules governing dual-use (civil and military) AI, no country will be able to risk curtailing its own military-driven development, lest it cede a strategic advantage to its adversaries.
Given the obvious benefits of international coordination, several attempts to develop global standards or methods of cooperation are already underway within institutions such as the OECD, the G20, the G7, the Council of Europe and the UN. Yet it is reasonable to worry that these efforts will have only a limited impact. Given the differences in values, interests and capabilities among states, it will be difficult to reach any meaningful consensus. For the same reason, the upcoming UK summit most likely will produce only lofty statements, endorse vague high-level principles and commit to continue the dialogue.
Not everyone is cheering for governments to succeed in their regulatory efforts. Some observers object to governments even attempting to regulate such a rapidly evolving technology.
These critics typically advance two arguments.
First: AI is too complex and fast moving for legislators to understand and keep up with. Second: Even if legislators were competent in regulating AI, they would likely err on the side of excessive precaution — doing too much — thereby curtailing innovation and undermining gains. If correct, either concern would provide grounds for governments to follow a “do no harm” principle, exercise restraint, and let the AI revolution follow its own course.
The argument that lawmakers are incapable of understanding such a complex, multifaceted and fast-moving technology is easy to make, but remains unconvincing. Policymakers regulate many domains of economic activity without being experts themselves. Few regulators know how to build planes, yet they exercise uncontroversial authority over aviation safety. Governments also regulate medicines and vaccines, even though very few (if any) lawmakers are biotechnology experts. If only experts had the power to regulate, then every industry would regulate itself.
Likewise, while the AI governance challenge is partly about the technology, it is also about understanding how that technology affects fundamental rights and democracy. This is hardly a domain where tech companies can claim expertise. Consider a company like Meta (Facebook). Its track record in content moderation and data privacy suggests that it is one of the least-qualified entities in the world to protect democracy or fundamental rights — as are most other leading tech companies. Given the stakes, government, not developers, must take the lead in governing AI.
This is not to suggest that governments will always get regulation right or that regulation will not force companies to divert resources from research and development toward compliance. However, if implemented correctly, regulation can encourage firms to invest in more ethical and less error-prone applications, steering the industry toward more robust AI systems. This would enhance consumer confidence in the technology, thus expanding — rather than diminishing — market opportunities for AI companies.
Governments have every incentive not to forgo benefits associated with AI. They desperately need new sources of economic growth and innovations that will help them achieve better outcomes, such as improved education and health care, at lower cost. If anything, they are more likely to do too little, for fear of losing a strategic advantage and missing out on potential benefits.
The key to regulating any fast-evolving, multifaceted technology is to work closely with AI developers to ensure that potential benefits are preserved and that regulators remain agile. Close consultation with tech companies is one thing; simply handing over governance to the private sector is quite another.
Who is in charge here?
Some commentators are less worried that governments do not understand AI, or that they will get AI regulation wrong — they doubt that government action matters much at all. The techno-determinist camp suggests that governments ultimately have only a limited ability to regulate tech companies. Since the real power resides in Silicon Valley and other technology hubs, there is no point in governments picking a fight that they will lose. High-level meetings and summits are destined to be sideshows that merely allow governments to pretend they are still in charge.
Some commentators even argue — not unconvincingly — that tech firms are “new governors” who are “exercising a form of sovereignty,” and ushering in a world that will not be unipolar, bipolar, or multipolar, but rather “technopolar.” The largest tech companies are indeed exercising greater economic and political influence than most states. The tech industry also has near-unlimited resources with which to lobby against regulations and defend itself in legal battles against governments.
Yet it does not follow that governments are powerless in this domain. The state remains the fundamental unit around which societies are built. As political scientist Stephen M. Walt recently put it, “Which do you expect to be around in 100 years? Facebook or France?” Despite all the influence tech companies have amassed, governments still have the ultimate authority to exercise coercive force.
This authority can be, and frequently has been, deployed to change the way firms operate. The user terms, community guidelines and any other rules written by large tech companies remain subject to laws written by governments that have the authority to enforce legal compliance. Tech companies cannot decouple themselves from governments. Though they can try to resist and shape government regulations, they ultimately must obey them. They cannot force their way into mergers over antitrust authorities’ objections, refuse to pay digital taxes that governments enact, or offer digital services that violate a jurisdiction’s laws. If governments ban certain AI systems or applications, tech companies will have no choice but to comply or stay out of that market.
This is not merely hypothetical. Earlier this year, Sam Altman of OpenAI (the developer of ChatGPT) warned that his company might not offer its products in the EU, owing to regulatory constraints. Yet within days, he was backpedaling. OpenAI’s sovereignty is limited to the freedom not to do business in the EU or any other jurisdiction whose regulations it opposes. It is free to exercise that choice, but it is a costly one to make.
A problem of will
The question, then, is not whether governments can govern the digital economy; it is whether they have the political will to do so. Since the commercialization of the internet in the 1990s, the US government has elected to delegate important governance functions to the private sector. This techno-libertarian approach is famously manifested in Section 230 of the 1996 Communications Decency Act, which shields online platforms from liability for any third-party content that they host. Yet even under this framework, the US government is not powerless. Though it gave platform companies free rein with Section 230, it retains the authority to repeal or amend that law.
The political will to do so may have been lacking in the past, but momentum for regulation is building as trust in the tech industry has declined. Over the past few years, US lawmakers have proposed bills not only to rewrite Section 230, but also to revive antitrust laws and establish a federal privacy law; some lawmakers now are determined to regulate AI. They are holding hearings and already proposing legislation to address advances in generative AI algorithms and large language models.
Yet while congressional Democrats and Republicans increasingly agree that tech companies have grown too powerful and need to be regulated, they are deeply divided when it comes to how to go about it. For some, the concern that AI regulation would undermine US technological progress and innovation is salient in an era of intensifying US-China competition. Of course, tech companies also continue to lobby aggressively and effectively, suggesting that even a bipartisan anti-tech crusade may change little in the end. As strong as the discontent about tech companies is, the US Congress’ political dysfunction could prove stronger.
Again, this does not mean that governments are not in charge. For its part, the EU is not hampered by the same political dysfunction, and its legislative record has been impressive. Following its adoption of the General Data Protection Regulation (GDPR) in 2016, it has moved to regulate online platforms with its landmark 2022 laws: the Digital Services Act and the Digital Markets Act, which establish clear rules on content moderation and market competition, respectively. The EU’s ambitious AI Act is expected to be finalized this year.
Yet for all the EU’s success in legislating, its digital regulation enforcement has often failed to realize the measures’ stated goals. GDPR enforcement, especially, has drawn much criticism, and all the large antitrust fines that the EU has imposed on Google have done little to dent its dominance. These failures have led some to argue that tech companies are already too big to regulate and that AI will further entrench their market power, leaving the EU even more powerless to enforce its laws.
The Chinese government does not face this problem. Without the need to adhere to a democratic process, it was able to crack down dramatically and suddenly on the country’s tech industry starting in 2020, and tech companies duly capitulated. This relative “success” in holding tech companies accountable stands in stark contrast to European and US regulators’ experiences. In both jurisdictions, regulators must fight lengthy legal battles against companies that will reliably contest, rather than acquiesce to, whatever regulatory actions they pursue.
The same pattern may well repeat with AI regulation. The US Congress will likely remain deadlocked, generating heated debates but no real action; the EU will legislate, though continued uncertainty about the effectiveness of its regulation could lead to an outcome resembling the US. In that case, tech companies, not democratically elected governments, will be free to shape the AI revolution however they see fit.
Democracy’s big test
These scenarios raise a troubling possibility: only authoritarian regimes are capable of effectively governing AI. To disprove this proposition, the US, the EU and other like-minded governments will have to demonstrate that democratic governance for AI is both feasible and effective. They will have to insist on their role as the primary rule-makers.
The upcoming summit likely will not convince the world that truly global AI rules are within reach anytime soon. The disagreements remain too deep for countries — especially so-called techno-democracies and techno-autocracies — to act in unison. Nonetheless, the summit can and should send a clear signal that tech companies remain beholden to governments, not the other way around.
While working closely with tech companies to foster AI innovation and maximize benefits, democracies will also need to protect their citizens, values and institutions. Without this kind of dual commitment, the AI revolution will be much more likely to live up to its peril, not its promise.
Anu Bradford, professor of law and international organization at Columbia Law School, is the author of the forthcoming Digital Empires: The Global Battle to Regulate Technology.
Copyright: Project Syndicate