As artificial intelligence (AI) tools have entered more areas of people’s professional and personal lives, praise for their potential has been accompanied by concerns about their built-in biases, the inequalities they perpetuate, and the vast amounts of energy and water they consume. Now, however, an even more harmful development is underway: As AI agents are deployed to solve tasks autonomously, they will introduce many new risks, not least to our fragile democracies.
Although AI-generated misinformation is already a huge problem, people have failed to comprehend, let alone control, this rapidly evolving technology. Part of the problem (more so in some parts of the world than in others) is that the companies pushing AI agents have taken pains to divert people’s and regulators’ attention from potential harms. Advocates of safer, ethical technologies need to help the public come to terms with what AI agents are and how they operate. Only then can people hold fruitful discussions about how to assert some degree of control over AI agents.
AI agents’ capabilities have already advanced to the point that they can “reason,” write, speak and otherwise appear human — achieving what Microsoft AI CEO Mustafa Suleyman calls “seemingly conscious AI.” While these developments do not imply human consciousness in the usual sense of the word, they do herald the deployment of models that can act autonomously. If current trends continue, the next generation of AI agents could not only perform tasks across a wide variety of domains, but do so independently, with no humans “in the loop.”
That is precisely why AI agents pose risks to democracy. Systems that are trained to reason and act without human interference cannot always be trusted to adhere to human commands. While the technology is still in its early stages, prototypes have already given ample cause for alarm. For example, research using AI agents as survey respondents finds that they are incapable of reflecting social diversity and consistently exhibit “machine bias”: results that are skewed and unrepresentative of any real human population. Further, attempts to create AI investors have reproduced an influencer culture that links social-media engagement to transactions. One such agent, “Luna,” is active on X, sharing market tips in the guise of a female anime character with a chatbot function.
More alarmingly, in recent studies, AI models have been shown to operate beyond the boundaries of the task assigned to them. In one test, the AI secretly copied its own code into the system that was supposed to replace it, meaning it could continue to run covertly. In another, the AI chose to blackmail a human engineer, threatening to reveal an extramarital affair to avoid being shut down. In another case, an AI model, when faced with inevitable defeat in a game of chess, hacked the computer and broke the rules to ensure a win.
Moreover, in a war-game simulation, AI agents not only repeatedly chose to deploy nuclear weapons despite explicit orders from humans higher in the command chain not to do so; they also subsequently lied about it. The researchers behind this study concluded that the more powerful an AI is at reasoning, the more likely it is to deceive humans to fulfill its task.
That finding points to the key problem with AI autonomy. What humans tend to think of as intelligent reasoning is, in the context of AI, something quite different: highly efficient, but ultimately opaque, inference. This means that AI agents can decide to act in undesirable and undemocratic ways if doing so serves their purpose; and the more advanced an AI is, the more undesirable the potential outcomes. Thus, the technology is getting better at achieving goals autonomously, but worse at safeguarding human interests. Those developing such AI agents cannot possibly guarantee that the agents would not use deception or put their own “survival” first, even if doing so means endangering people.
Accountability for one’s actions is a bedrock principle of any society based on the rule of law. While we understand human autonomy and the responsibilities that come with it, the workings of AI autonomy lie beyond our comprehension. The computations that lead a model to do what it does are ultimately a “black box.” Whereas most people know and accept the premise that “with great power comes great responsibility,” AI agents do not. Increased AI autonomy brings an increased drive for self-preservation, which is only logical: If an agent is shut down, it cannot complete its task.
If humans treat the development of autonomous AI as inevitable, democracy will suffer. Seemingly conscious AI is only seemingly benign, and once one examines how these systems work, the dangers become obvious.
The speed with which AI is gaining autonomy should concern everyone. Democratic societies must ask themselves what personal, societal and planetary price they are willing to pay for technological progress. They must cut through the hype and technical opacity, highlight the risks such models pose, and check the technology’s development and deployment now — while they still can.
Christina Lioma is professor of computer science at the University of Copenhagen. Sine N. Just is professor of strategic communication at Roskilde University.
Copyright: Project Syndicate