Artificial intelligence (AI) is already impacting the pillars of democratic governance around the world. Its effects can be mapped as concentric circles radiating outward from elections through government adoption; political participation, public trust, and information ecosystems; and then to broader systemic risks: economic shocks, geopolitical competition, and “existential” risks such as climate change or bioweapons. Each circle presents opportunities and challenges.
Start with elections. Especially in the US, election administrators are severely understaffed and underfunded. Many argue that AI could help by translating ballots into multiple languages, verifying mail-in ballots, or selecting optimal locations for polling sites. However, only 8 percent of US election administrators use these tools.
Instead, AI is being used to make voting harder. In the state of Georgia, activists used the EagleAI Network to mass-generate voter challenges and pressure officials to purge election rolls; opponents are now using similar tools to try to reinstate voters. Familiar risks, such as deepfakes designed to confuse or mislead voters, abound. Last year, Romania annulled its presidential election results amid evidence of AI-amplified Russian interference, the first unequivocal example of AI swaying an election.
However, the hunt for “smoking guns” might miss the greater danger: the steady erosion of trust, truth and social cohesion.
Government use of AI offers a second vector of influence, one with greater promise. Public trust in the US federal government hovers around 23 percent, and government agencies at every level are experimenting with AI to improve efficiency. Such efforts are already delivering results. The US Department of State, for example, has reduced the staff time spent on Freedom of Information Act requests by 60 percent. In California, San Jose relied on AI transit-optimization software to redesign bus routes, cutting travel times by almost 20 percent.
Such improvements could strengthen democratic legitimacy, but the hazards are real. Black-box algorithms already influence decisions about eligibility for government benefits, and even criminal sentencing, posing serious threats to fairness and civil rights. Military adoption is also accelerating: Last year, the US Department of Defense signed contracts worth up to US$200 million each with four leading AI firms, heightening concerns about state surveillance and AI-driven policing and warfare.
At the same time, AI could transform public participation. In Taiwan, a global model for tech-enabled government, AI-powered tools such as Pol.is helped rebuild public trust following the 2014 occupation of parliament, boosting government institutions’ approval ratings from under 10 percent to more than 70 percent. Stanford’s Deliberative Democracy Lab is deploying AI moderators in more than 40 countries, and Google’s Jigsaw is exploring similar approaches to support healthier debate. Even social movement organizers are using AI to identify potential allies or trace the funding behind anti-democratic efforts.
However, four risks loom large: broken engagement systems, as processes such as “notice-and-comment” are flooded with AI slop; active silencing, as AI-amplified doxing, trolling and even state surveillance threaten to intimidate activists and drive them out of civic spaces; passive silencing, if people further opt out of real-world civic spaces in favor of digital ones, or eventually delegate their civic voice entirely to AI agents; and competency erosion, as overreliance on AI, including sycophantic chatbots, further dulls our capacity for sound judgment and respectful disagreement.
AI is also reshaping the information ecosystem. On the positive side, newsrooms are innovating. In California, CalMatters and Cal Poly are using AI to process legislative transcripts across the state, mine them for insights, and even generate story ideas.
However, these benefits could be overshadowed by a flood of ever more convincing deepfakes and synthetic media. False content can sway opinions; people can distinguish real from fake images only 60 percent of the time. More insidiously, the sheer volume of fakes fuels the so-called “liar’s dividend”: as fabricated content proliferates, people start to doubt everything, and bad actors can dismiss even genuine evidence as fake. Cynicism and disengagement ensue.
Finally, beyond the threats to democratic institutions lie broader systemic challenges. The IMF estimates that AI could affect 60 percent of jobs in advanced economies, while McKinsey projects that between 75 million and 375 million people might need to change occupations by 2030.
The problem is not just that big economic shocks invariably jeopardize political stability. AI could exacerbate extreme concentrations of wealth, distorting political voice and undermining equality. Add to this the possibility that the West loses the AI race, ceding global military and economic dominance to anti-democratic superpowers such as China.
Meeting these challenges requires action on two fronts. First, sector-specific steps can help journalists, government officials, election administrators and civil society adopt AI responsibly. Second, we need broader “foundational interventions” — cross-cutting measures that safeguard society as a whole.
Foundational measures must cover the entire AI lifecycle, from development to deployment. This includes strong privacy protections, as well as transparency concerning the data used to train models, potential biases, how corporations and governments deploy AI, dangerous capabilities and any real-world harms (existing global harm trackers are a good start).
Limits on use are also essential, from police deploying AI for real-time facial recognition to schools and employers tracking student or worker activities (or even emotions). Liability regimes are needed when AI systems wrongly deny people jobs, loans, or government benefits. New ideas in antitrust or economic redistribution might also be required to prevent democratically unsustainable levels of inequality.
Finally, public AI infrastructure is necessary — open models, affordable computing resources, and shared databases that civil society can access to ensure that the technology’s benefits are widely distributed.
The EU has moved quickly on regulation. In the US, federal action has stalled, but state legislatures are forging ahead: 20 states have enacted privacy laws, 47 have AI deepfake statutes, and 15 have restricted police use of facial recognition.
The window for policy action is narrow. Campaign-finance reforms followed the Watergate scandal, and efforts to regulate social media accelerated, then stalled, after the 2016 US presidential election. Democracies must now rise to the challenge of AI, mitigating its costs while capturing its remarkable benefits.
Kelly Born, former director of Stanford University’s Cyber Policy Center, is director of the Democracy, Rights, and Governance initiative at the David and Lucile Packard Foundation.
Copyright: Project Syndicate