On Feb. 28 last year Sewell Setzer III, a 14-year-old boy from Florida, killed himself at the urging of a lifelike artificial intelligence (AI) character generated by Character.AI, a platform that is also reportedly hosting pro-anorexia AI chatbots that encourage disordered eating among young people. Clearly, stronger measures are urgently needed to protect children and young people from AI.
Of course, even in strictly ethical terms, AI has immense positive potential, from promoting human health and dignity to improving sustainability and education among marginalized populations. However, these promised benefits are no excuse for downplaying or ignoring the ethical risks and real-world costs. Every violation of human rights must be seen as ethically unacceptable. If a lifelike AI chatbot provokes the death of a teenager, the fact that AI could also play a role in advancing medical research is no compensation.
Nor is the Setzer tragedy an isolated case. In December last year, two families in Texas filed a lawsuit against Character.AI and its financial backer, Google, alleging that the platform’s chatbots sexually and emotionally abused their school-age children, resulting in self-harm and violence.
We have seen this movie before, having already sacrificed a generation of children and teens to social-media companies that profit from their platforms’ addictiveness. Only slowly did we awaken to the social and psychological harms done by “anti-social media.” Now, many countries are banning or restricting access, and young people themselves are demanding stronger regulation.
However, humanity cannot wait to rein in AI’s manipulative power. Owing to the huge quantities of personal data that the tech industry has harvested from us, those building platforms such as Character.AI can create algorithms that know us better than we know ourselves. The potential for abuse is profound. AIs know exactly which buttons to press to tap into our desires, or to get us to vote a certain way. The pro-anorexia chatbots on Character.AI are merely the latest, most outrageous example. There is no good reason why they should not be banned immediately.
Yet time is running out, because generative AI models have been developing faster than expected — and they are generally accelerating in the wrong direction. The “Godfather of AI,” the Nobel laureate cognitive scientist Geoffrey Hinton, continues to warn that AI could lead to human extinction.
“My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely. The only thing that can force those big companies to do more research on safety is government regulation,” Hinton said.
Given big tech’s consistent failure to uphold ethical standards, it is folly to expect these companies to police themselves. Google poured US$2.7 billion into Character.AI last year despite its well-known problems. However, while regulation is obviously needed, AI is a global phenomenon, which means we should strive for global regulation, anchored in a new global enforcement mechanism, such as an international data-based systems agency at the UN, as I have proposed.
The fact that something is possible does not mean that it is desirable. Humans bear the responsibility to decide which technologies, which innovations, and which forms of progress are to be realized and scaled up, and which ought not to be. It is our responsibility to design, produce, use, and govern AI in ways that respect human rights and facilitate a more sustainable future for humanity and the planet.
Sewell would almost certainly still be alive if global regulation had been in place to promote human rights-based AI, and if a global institution had been established to monitor innovations in this domain. Ensuring that human rights and the rights of the child are respected requires governance of technological systems’ entire life cycle, from design and development to production, distribution, and use.
Since humans already know that AI can kill, there is no excuse for remaining passive as the technology continues to advance, with more unregulated models being released to the public every month. Whatever benefits these technologies might someday provide, they would never be able to compensate for the loss that all who loved Sewell have already suffered.
Peter G. Kirchschlager, professor of ethics and director of the Institute of Social Ethics at the University of Lucerne, is a visiting professor at the Swiss Federal Institute of Technology in Zurich (ETH Zurich).
Copyright: Project Syndicate