Eight years ago, Russian President Vladimir Putin suggested that whoever masters artificial intelligence (AI) “will be the ruler of the world.” Since then, investments in the technology have skyrocketed, with US tech giants (Microsoft, Google, Amazon, Meta) spending more than US$320 billion this year alone.
Not surprisingly, the race for AI dominance has also generated significant pushback. There are growing concerns about intelligent machines displacing human labor or introducing new safety risks, such as by empowering terrorists, hackers and other bad actors. What if AIs were to elude human control altogether, perhaps vanquishing us in their own quest for dominance?
There is a more immediate danger: Increasingly powerful, but opaque, AI algorithms are threatening freedom itself. The more we let machines do our thinking for us, the less capable we become of meeting the challenges that self-governance presents.
The threat to freedom is twofold.
On one hand, autocracies such as Russia and China are already deploying AI for mass surveillance and increasingly sophisticated forms of repression, cracking down not only on dissent, but on any source of information that might foment it.
On the other hand, private corporations, particularly multinationals with access to massive amounts of capital and data, are threatening human agency by integrating AI into their products and systems. The purpose is to maximize profit, which is not necessarily conducive to the public good (as the dire social, political and mental-health effects of social media show).
AI confronts liberal democracies with an existential question. If AI systems remain under the control of the private sector, how (paraphrasing Abraham Lincoln) would government of, by and for the people not perish from the Earth?
The public needs to understand that the meaningful exercise of freedom depends on defending human agency from incursions by machines designed to shape thinking and feeling in ways that favor corporate, rather than human, flourishing.
The threat is not merely hypothetical. In a study involving almost 77,000 people who used AI models to discuss political issues, chatbots designed for persuasion were found to be up to 51 percent more effective than those that had not been trained in that way. In another study (conducted in Canada and Poland), roughly one in 10 voters told researchers that conversations with AI chatbots had persuaded them to support candidates they had not previously supported.
In free societies such as the US, corporations’ ability to monitor and influence behavior on a massive scale has benefited from traditional legal constraints on state regulation of the marketplace, including the marketplace of opinions and ideas. The operative assumption has long been that, absent a significant threat of imminent violence, putatively harmful words and images are best met by more words and images aimed at countering their effects.
This familiar free-speech doctrine is ill suited to a digital marketplace shaped by pervasive algorithms that covertly function as AI influencers.
Users of online services might think they are getting what they want — based, for example, on previous viewing choices or past purchases. However, the extensive measures by which algorithms “nudge” users toward what a given corporate platform wants them to want remain obscure, buried in the depths of proprietary code.
As a result, not only is “counter speech” unlikely to break through programmed barriers, but the perception of — and felt need to counter — harm is being squelched at the source.
A similar distortion of free-speech doctrine is evident in Section 230 of the US’ Communications Decency Act of 1996, which protects digital platform owners (including the most popular social media sites) from liability for harms that might arise from online content. The corporate-friendly policy assumes that all such content is user-generated — just people exchanging ideas and expressing their preferences — but Meta, TikTok, X and the rest hardly offer a neutral platform for users. Their existence rests on the premise that monetizing attention is immensely lucrative.
Now, corporations seek to increase profits not only by marketing AI services, but also by deploying them to maximize the time users spend online, thereby increasing their exposure to targeted advertising. If holding users’ attention means covertly serving up certain kinds of information and blocking others, or offering AI-generated flattery and ill-considered encouragement, so be it.
Governments betray their obligation to protect the meaningful exercise of freedom when they fail to regulate online marketing that is designed to manipulate preferences surreptitiously. Like the calculated falsehoods that constitute fraud when commercial products or services are at issue, deliberately hidden or disguised corporate behavioral manipulation for profit falls outside what the US Supreme Court regards as “the fruitful exercise of the right of free speech.”
Law and public policy need to catch up to contemporary conditions and the threats corporate AI poses to freedom in the digital age. If AI is indeed becoming powerful enough to rule the world, governments in free societies must make sure that it serves — or, at the very least, does not disserve — the public good.
Richard K. Sherwin, professor emeritus of law at New York Law School, is a coeditor (with Danielle Celermajer) of A Cultural History of Law in the Modern Age.
Copyright: Project Syndicate