Eight years ago, Russian President Vladimir Putin suggested that whoever masters artificial intelligence (AI) “will be the ruler of the world.” Since then, investments in the technology have skyrocketed, with US tech giants (Microsoft, Google, Amazon, Meta) spending more than US$320 billion this year alone.
Not surprisingly, the race for AI dominance has also generated significant pushback. There are growing concerns about intelligent machines displacing human labor or introducing new safety risks, such as by empowering terrorists, hackers and other bad actors. What if AIs were to elude human control altogether, perhaps vanquishing us in their own quest for dominance?
There is a more immediate danger: Increasingly powerful, but opaque, AI algorithms are threatening freedom itself. The more we let machines do our thinking for us, the less capable we become of meeting the challenges that self-governance presents.
The threat to freedom is twofold.
On one hand, autocracies such as Russia and China are already deploying AI for mass surveillance and increasingly sophisticated forms of repression, cracking down not only on dissent, but on any source of information that might foment it.
On the other hand, private corporations, particularly multinationals with access to massive amounts of capital and data, are threatening human agency by integrating AI into their products and systems. The purpose is to maximize profit, which is not necessarily conducive to the public good (as the dire social, political and mental-health effects of social media show).
AI confronts liberal democracies with an existential question. If these technologies remain under the control of the private sector, how (to paraphrase Abraham Lincoln) would government of, by and for the people not perish from the Earth?
The public needs to understand that the meaningful exercise of freedom depends on defending human agency from incursions by machines designed to shape thinking and feeling in ways that favor corporate, rather than human, flourishing.
The threat is not merely hypothetical. In a study involving almost 77,000 people who used AI models to discuss political issues, chatbots designed for persuasion were found to be up to 51 percent more effective than those that had not been trained in that way. In another study, conducted in Canada and Poland, roughly one in 10 voters told researchers that conversations with AI chatbots had persuaded them to support candidates they had not previously supported.
In free societies such as the US, corporations’ ability to monitor and influence behavior on a massive scale has benefited from traditional legal constraints on state regulation of the marketplace, including the marketplace of opinions and ideas. The operative assumption has long been that, absent a significant threat of imminent violence, putatively harmful words and images are best met by more words and images aimed at countering their effects.
This familiar free-speech doctrine is ill suited to a digital marketplace shaped by pervasive algorithms that covertly function as AI influencers.
Users of online services might think they are getting what they want — based, for example, on previous viewing choices or past purchases. However, the extensive measures by which algorithms “nudge” users toward what a given corporate platform wants them to want remain obscure, buried in the depths of proprietary code.
As a result, not only is “counter speech” unlikely to break through programmed barriers, but the perception of — and felt need to counter — harm is being squelched at the source.
A similar distortion of free-speech doctrine is evident in Section 230 of the US’ Communications Decency Act of 1996, which protects digital platform owners (including the most popular social media sites) from liability for harms that might arise from online content. The corporate-friendly policy assumes that all such content is user-generated — just people exchanging ideas and expressing their preferences — but Meta, TikTok, X and the rest hardly offer a neutral platform for users. Their existence rests on the premise that monetizing attention is immensely lucrative.
Now, corporations seek to increase profits not only by marketing AI services, but also by deploying them to maximize the time users spend online, thereby increasing their exposure to targeted advertising. If holding users’ attention means covertly serving up certain kinds of information and blocking others, or offering AI-generated flattery and ill-considered encouragement, so be it.
Governments betray their obligation to protect the meaningful exercise of freedom when they fail to regulate online marketing that is designed to manipulate preferences surreptitiously. Like the calculated falsehoods that constitute fraud when commercial products or services are at issue, deliberately hidden or disguised corporate behavioral manipulation for profit falls outside what the US Supreme Court regards as “the fruitful exercise of the right of free speech.”
Law and public policy need to catch up to contemporary conditions and the threats corporate AI poses to freedom in the digital age. If AI is indeed becoming powerful enough to rule the world, governments in free societies must make sure that it serves — or, at the very least, does not disserve — the public good.
Richard K. Sherwin, professor emeritus of law at New York Law School, is a coeditor (with Danielle Celermajer) of A Cultural History of Law in the Modern Age.
Copyright: Project Syndicate