On Feb. 28 last year Sewell Setzer III, a 14-year-old boy from Florida, killed himself at the urging of a lifelike artificial intelligence (AI) character generated by Character.AI, a platform that is also reportedly hosting pro-anorexia AI chatbots that encourage disordered eating among young people. Clearly, stronger measures are urgently needed to protect children and young people from AI.
Of course, even in strictly ethical terms, AI has immense positive potential, from promoting human health and dignity to improving sustainability and education among marginalized populations. However, these promised benefits are no excuse for downplaying or ignoring the ethical risks and real-world costs. Every violation of human rights must be seen as ethically unacceptable. If a lifelike AI chatbot provokes the death of a teenager, the fact that AI could also play a role in advancing medical research is no compensation.
Nor is the Setzer tragedy an isolated case. In December last year, two families in Texas filed a lawsuit against Character.AI and its financial backer, Google, alleging that the platform’s chatbots sexually and emotionally abused their school-age children, resulting in self-harm and violence.
Illustration: Mountain People
We have seen this movie before, having already sacrificed a generation of children and teens to social-media companies that profit from their platforms’ addictiveness. Only slowly did we awaken to the social and psychological harms done by “anti-social media.” Now, many countries are banning or restricting access, and young people themselves are demanding stronger regulation.
However, humanity cannot wait to rein in AI’s manipulative power. Owing to the huge quantities of personal data that the tech industry has harvested from us, those building platforms such as Character.AI can create algorithms that know us better than we know ourselves. The potential for abuse is profound. AIs know exactly which buttons to press to tap into our desires, or to get us to vote a certain way. The pro-anorexia chatbots on Character.AI are merely the latest, most outrageous example. There is no good reason why they should not be banned immediately.
Yet time is running out, because generative AI models have been developing faster than expected — and they are generally accelerating in the wrong direction. The “Godfather of AI,” the Nobel laureate cognitive scientist Geoffrey Hinton, continues to warn that AI could lead to human extinction.
“My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely. The only thing that can force those big companies to do more research on safety is government regulation,” Hinton said.
Given big tech’s consistent failure to uphold ethical standards, it is folly to expect these companies to police themselves. Google poured US$2.7 billion into Character.AI last year despite its well-known problems. However, while regulation is obviously needed, AI is a global phenomenon, which means we should strive for global regulation, anchored in a new global enforcement mechanism, such as an international data-based systems agency at the UN, as I have proposed.
The fact that something is possible does not mean that it is desirable. Humans bear the responsibility to decide which technologies, which innovations, and which forms of progress are to be realized and scaled up, and which ought not be. It is our responsibility to design, produce, use, and govern AI in ways that respect human rights and facilitate a more sustainable future for humanity and the planet.
Sewell would almost certainly still be alive if global regulation promoting human rights-based AI had been in place, and if a global institution had been established to monitor innovations in this domain. Ensuring that human rights and the rights of the child are respected requires governance of technological systems’ entire life cycle, from design and development to production, distribution, and use.
Since humans already know that AI can kill, there is no excuse for remaining passive as the technology continues to advance, with more unregulated models being released to the public every month. Whatever benefits these technologies might someday provide, they would never be able to compensate for the loss that all who loved Sewell have already suffered.
Peter G. Kirchschlager, professor of ethics and director of the Institute of Social Ethics at the University of Lucerne, is a visiting professor at the Swiss Federal Institute of Technology Zurich (ETH Zurich).
Copyright: Project Syndicate