Earlier this month, the United Arab Emirates (UAE) announced a plan to have half of its government services run on agentic artificial intelligence (AI) within the next two years. Under the scheme, AI is supposed to serve as an “executive partner” that “analyzes, decides, executes and improves in real time” without human intervention. Having spent our careers at the intersection of entrepreneurship, research and digital policy, we can confidently pronounce this plan reckless. And because the UAE presents itself as a global digital model, other governments will feel pressure to follow suit.
That is a danger we must not ignore. We already know what happens when governments delegate decisionmaking to algorithms. In 2021, a self-learning system in the Netherlands wrongly accused roughly 35,000 families of childcare benefit fraud. Parents were ordered to repay tens of thousands of euros they never owed; homes were lost; and more than 2,000 children were taken into state care.
This outcome had actually been built into the system’s design. Dual nationality and foreign-sounding names were flagged as risk factors, baking illegal discrimination directly into the model. The result was a national scandal that ultimately led to the resignation of former Dutch prime minister Mark Rutte.
A similar dynamic played out in Australia. Between 2015 and 2019, the Robodebt scheme pursued 433,000 welfare recipients for A$1.7 billion (US$1.2 billion) in alleged debts that were later found to be unlawful. The harm was profound, with mothers testifying that their sons killed themselves after receiving debt notices they had no way to challenge. A Royal Commission later found the program “neither fair nor legal.”
Meanwhile, in the US, Arkansas and Idaho replaced nurses with algorithms to assess eligibility and levels of home care. People with cerebral palsy, quadriplegia and multiple sclerosis had their care cut by 20 to 50 percent overnight. The courts eventually ordered a halt to the use of these systems, but not before the damage was done. Some patients were left without adequate support, leading to preventable medical complications.
Each of these cases involved a single system within a single agency. Now imagine such systems handling half of all government services, as the UAE’s plan proposes.
Consider, for example, a single mother whose childcare benefits are frozen after an AI agent flags her bank activity, leaving her to navigate an appeals process that sends her from one automated system to another, with no human point of contact, just as the rent comes due. What about a migrant worker whose residency renewal is denied because the system cannot parse his employer’s filings — rendering him effectively undocumented — or an elderly widow whose pension is paused because two databases conflict and she cannot make sense of the interface?
These are not hypotheticals. They are documented patterns that agentic AI intensifies in ways no training program can address within the UAE’s two-year timeline.
Three key risks stand out. The first is scale: When a caseworker makes a mistake, one person suffers; when an AI agent does, thousands can be affected before anyone even notices.
Then there is the opacity of AI decisionmaking. Given that agentic systems make decisions in sequence, with each step building on the last, the causal trail is effectively lost by the time harm becomes visible. Arkansas’s algorithmic health-benefit system offers a stark example. No one — not even its creators — could fully explain how it worked, prompting a federal court to describe it as “wildly irrational.” Moreover, a lack of transparency might be built in through trade secrets or proprietary frameworks underlying the algorithms.
Lastly, AI systems reverse the burden of proof, forcing citizens to prove their innocence rather than requiring the state to justify its actions. As the childcare-benefit scandal in the Netherlands and the Robodebt scheme in Australia showed, those who are least able to do so — people with limited time, money, language proficiency and access to legal support — are hit the hardest.
The UAE claims that the guiding principle of its AI program is “people come first.” However, the design suggests otherwise. A government that evaluates ministries by speed of adoption and mastery of AI is not tracking what matters, but replicating the same logic of efficiency that has already caused significant harm around the world.
Speed of adoption is a vendor’s metric, but a government’s core responsibility is a duty of care grounded in human judgement.
This aligns with citizens’ expectations that governments be accountable and transparent, and that they explain decisions affecting people’s rights and freedoms. When governments enthusiastically embrace autonomous decisionmaking in the name of efficiency, they are, in effect, signing away that accountability.
Every algorithm-related scandal of the past few years has raised the same fundamental questions: Who is in charge, and who made the decision? In a government run by agentic AI, those questions no longer have clear answers. The system decides, updates itself, and moves on, leaving citizens with no recourse when things go wrong.
With the advent of AI, democratic accountability erodes not through an open power grab, but through a series of procurement decisions that quietly displace human oversight. By undermining trust in institutions at a time when it is already dangerously low, these systems ultimately serve the interests of the tech titans driving the AI revolution.
However, it need not be this way. The UAE has the resources, talent and political stability needed to build a genuinely human-centered digital government that could set the global standard by augmenting, rather than replacing, human decisionmaking.
The costs of getting this wrong would not be confined to the UAE. They would be borne by a single mother in another country whose benefits are cut by an algorithm she never knew existed, and by countless others like her around the world.
Gabriela Ramos, co-chair of the Task Force on Inequalities and Social-Related Financial Disclosures, is a former assistant director-general for social and human sciences at UNESCO, where she oversaw the development of the Recommendation on the Ethics of AI, and a former OECD chief of staff and sherpa to the G20, G7 and APEC. Emilija Stojmenova Duh, associate professor of electrical engineering at the University of Ljubljana, is a member of the Globethics Board of Foundation and a former minister of digital transformation of Slovenia.
Copyright: Project Syndicate