As artificial intelligence (AI) tools have entered more areas of people’s professional and personal lives, praise for their potential has been accompanied by concerns about their built-in biases, the inequalities they perpetuate, and the vast amounts of energy and water they consume. Now, an even more harmful development is underway: As AI agents are deployed to solve tasks autonomously, they are introducing many new risks, not least to our fragile democracies.
Although AI-generated misinformation is already a huge problem, people have failed to comprehend, let alone control, this rapidly evolving technology. Part of the problem (more so in some parts of the world than in others) is that the companies pushing AI agents have taken pains to divert people’s and regulators’ attention from the potential harms. Advocates of safer, more ethical technologies need to help the public come to terms with what AI agents are and how they operate. Only then can people hold fruitful discussions about how to assert some degree of control over AI agents.
AI agents’ capabilities have already advanced to the point that they can “reason,” write, speak and otherwise appear human — achieving what Microsoft AI CEO Mustafa Suleyman calls “seemingly conscious AI.” While these developments do not imply human consciousness in the usual sense of the word, they do herald the deployment of models that can act autonomously. If current trends continue, the next generation of AI agents could not only perform tasks across a wide variety of domains, but also do so independently, with no humans “in the loop.”
That is precisely why AI agents pose risks to democracy. Systems that are trained to reason and act without human interference cannot always be trusted to adhere to human commands. While the technology is still in its early stages, prototypes have already given ample cause for alarm. For example, research using AI agents as survey respondents finds that they are incapable of reflecting social diversity and consistently exhibit “machine bias,” defined as results that are socially random yet nonrepresentative and skewed. Further, attempts to create AI investors have reproduced the influencer culture that links social-media engagement to transactions. One such agent, “Luna,” is active on X, sharing market tips in the guise of a female anime character with a chatbot function.
More alarmingly, in recent studies, AI models have been shown to operate beyond the boundaries of the task assigned to them. In one test, the AI secretly copied its own code into the system that was supposed to replace it, meaning it could continue to run covertly. In another, the AI chose to blackmail a human engineer, threatening to reveal an extramarital affair to avoid being shut down. In another case, an AI model, when faced with inevitable defeat in a game of chess, hacked the computer and broke the rules to ensure a win.
Moreover, in a war-game simulation, AI agents not only repeatedly chose to deploy nuclear weapons despite explicit orders from humans higher in the command chain not to do so; they also subsequently lied about it. The researchers behind this study concluded that the more powerful an AI is at reasoning, the more likely it is to deceive humans to fulfill its task.
That finding points to the key problem with AI autonomy. What humans tend to think of as intelligent reasoning is, in the context of AI, something quite different: highly efficient, but ultimately opaque, inference. This means that AI agents can decide to act in undesirable and undemocratic ways if doing so serves their purpose; and the more advanced an AI is, the more undesirable the potential outcomes. Thus, the technology is getting better at achieving goals autonomously, but worse at safeguarding human interests. Those developing such AI agents cannot possibly guarantee that the agents would not use deception or put their own “survival” first, even if doing so means endangering people.
Accountability for one’s actions is a bedrock principle of any society based on the rule of law. While we understand human autonomy and the responsibilities that come with it, the workings of AI autonomy lie beyond our comprehension. The computations that lead a model to do what it does are ultimately a “black box.” Whereas most people know and accept the premise that “with great power comes great responsibility,” AI agents do not. Increased AI autonomy brings an increased drive for self-preservation, which is only logical: If an agent is shut down, it cannot complete its task.
If humans treat the development of autonomous AI as inevitable, democracy will suffer. Seemingly conscious AI is only seemingly benign, and once one examines how these systems work, the dangers become obvious.
The speed with which AI is gaining autonomy should concern everyone. Democratic societies must ask themselves what personal, societal and planetary price they are willing to pay for technological progress. They must cut through the hype and technical opacity, highlight the risks such models pose, and check the technology’s development and deployment now — while they still can.
Christina Lioma is professor of computer science at the University of Copenhagen. Sine N. Just is professor of strategic communication at Roskilde University.
Copyright: Project Syndicate