Artificial intelligence (AI) is already impacting the pillars of democratic governance around the world. Its effects can be mapped as concentric circles that radiate outward from elections through government adoption; political participation, public trust, and information ecosystems; and then to broader systemic risks — economic shocks, geopolitical competition, and “existential” risks such as climate or bioweapons. Each circle presents opportunities and challenges.
Start with elections. Especially in the US, election administrators are severely understaffed and underfunded. Many argue that AI could help by translating ballots into multiple languages, verifying mail-in ballots, or selecting optimal locations for polling sites. However, only 8 percent of US election administrators use these tools.
Instead, AI is being used to make voting harder. In the state of Georgia, activists used the Eagle AI network to mass-generate voter challenges and pressure officials to purge election rolls. Opponents are using similar tools to try to reinstate voters. Familiar risks — such as deepfakes designed to confuse or mislead voters — abound. Last year, Romania annulled its presidential election results amid evidence of AI-amplified Russian interference — the first unequivocal example of AI swaying a national election.
However, the hunt for “smoking guns” might miss the greater danger: the steady erosion of trust, truth and social cohesion.
Government use of AI offers a second vector of influence — one with greater promise. Public trust in the US federal government hovers around 23 percent, and government agencies at every level are experimenting with AI to improve efficiency. Such efforts are already delivering results. The US Department of State, for example, has reduced the staff time spent on Freedom of Information Act requests by 60 percent. In California, San Jose relied on AI transit-optimization software to redesign bus routes, cutting travel times by almost 20 percent.
Such improvements could strengthen democratic legitimacy, but the hazards are real. Black-box algorithms already influence decisions about eligibility for government benefits, and even criminal sentencing, posing serious threats to fairness and civil rights. Military adoption is also accelerating: Last year, the US Department of Defense signed US$200 million contracts with four leading AI firms, heightening concerns about state surveillance and AI-driven policing and warfare.
At the same time, AI could transform public participation. In Taiwan — a global model for tech-enabled government — AI-powered tools such as Pol.is helped rebuild public trust following the 2014 occupation of parliament, boosting government institutions’ approval ratings from under 10 percent to more than 70 percent. Stanford’s Deliberative Democracy Lab is deploying AI moderators in more than 40 countries, and Google’s Jigsaw is exploring similar approaches to support healthier debate. Even social movement organizers are using AI to identify potential allies or track the people behind the money propping up anti-democratic efforts.
However, four risks loom large: broken engagement systems, as processes such as “notice-and-comment” are flooded with AI slop; active silencing, as AI-amplified doxing and trolling — and even state surveillance — threaten to intimidate activists and drive them out of civic spaces; passive silencing, if people further opt out of real-world civic spaces in favor of digital ones, or eventually even delegate their civic voice entirely to AI agents; and finally, competency erosion, as overreliance on AI — or sycophantic AI chatbots — further dulls our capacity for sound judgement and respectful disagreement.
The information ecosystem is also changing as a result of AI. On the positive side, newsrooms are innovating. In California, CalMatters and Cal Poly are using AI to process legislative transcripts across the state, mine them for insights, and even generate story ideas.
However, these benefits could be overshadowed by a flood of ever more convincing deepfakes and synthetic media. False content can sway opinions — people are able to distinguish real from fake images only 60 percent of the time. More insidiously, the sheer volume of fakes fuels the so-called “liar’s dividend,” as people become so overwhelmed with fabricated content that they start to doubt everything. Cynicism and disengagement ensue.
Finally, beyond the threats to democratic institutions lie broader systemic challenges. The IMF estimates that AI could affect 60 percent of jobs in advanced economies, while McKinsey projects that between 75 million and 345 million people might change jobs by 2030.
The problem is not just that big economic shocks invariably jeopardize political stability. AI could exacerbate extreme concentrations of wealth, distorting political voice and undermining equality. Add the possibility that the West loses the AI race, ceding global military and economic dominance to anti-democratic superpowers such as China.
Meeting these challenges requires action on two fronts. First, sector-specific steps can help journalists, government officials, election administrators and civil society adopt AI responsibly. Second, we need broader “foundational interventions” — cross-cutting measures that safeguard society as a whole.
Foundational measures must cover the entire AI lifecycle, from development to deployment. This includes strong privacy protections, as well as transparency concerning the data used to train models, potential biases, how corporations and governments deploy AI, dangerous capabilities and any real-world harms (this global tracker is a great start).
Limits on use are also essential — from police deploying AI for real-time facial recognition to schools and employers tracking student or worker activities (or even emotions). Liability regimes are needed when AI systems wrongly deny people jobs, loans, or government benefits. New ideas in antitrust or economic redistribution might also be required to prevent democratically unsustainable levels of inequality.
Finally, public AI infrastructure is necessary — open models, affordable computing resources, and shared databases that civil society can access to ensure that the technology’s benefits are widely distributed.
While the EU has moved quickly on regulation, federal action in the US has stalled. State legislatures, however, are forging ahead: 20 states have enacted privacy laws, 47 have AI deepfake statutes, and 15 have restricted police use of facial recognition.
The window for policy action is narrow. Just as campaign-finance reforms followed the Watergate scandal, and efforts to regulate social media accelerated — then stalled — after the 2016 US presidential election, democracies must rise to the challenge of AI, mitigating its costs while capturing its remarkable benefits.
Kelly Born, former director of Stanford University’s Cyber Policy Center, is director of the Democracy, Rights, and Governance initiative at the David and Lucile Packard Foundation.
Copyright: Project Syndicate