On Feb. 28 last year, Sewell Setzer III, a 14-year-old boy from Florida, killed himself at the urging of a lifelike artificial intelligence (AI) character generated by Character.AI, a platform that also reportedly hosts pro-anorexia AI chatbots that encourage disordered eating among young people. Clearly, stronger measures are urgently needed to protect children and young people from AI.
Of course, even in strictly ethical terms, AI has immense positive potential, from promoting human health and dignity to improving sustainability and education among marginalized populations. However, these promised benefits are no excuse for downplaying or ignoring the ethical risks and real-world costs. Every violation of human rights must be seen as ethically unacceptable. If a lifelike AI chatbot provokes the death of a teenager, the fact that AI could also play a role in advancing medical research is no compensation.
Nor is the Setzer tragedy an isolated case. In December last year, two families in Texas filed a lawsuit against Character.AI and its financial backer, Google, alleging that the platform’s chatbots sexually and emotionally abused their school-age children, resulting in self-harm and violence.
We have seen this movie before, having already sacrificed a generation of children and teens to social-media companies that profit from their platforms' addictiveness. Only slowly did we awaken to the social and psychological harms done by "anti-social media." Now, many countries are banning or restricting young people's access to these platforms, and young people themselves are demanding stronger regulation.
However, humanity cannot wait to rein in AI’s manipulative power. Owing to the huge quantities of personal data that the tech industry has harvested from us, those building platforms such as Character.AI can create algorithms that know us better than we know ourselves. The potential for abuse is profound. AIs know exactly which buttons to press to tap into our desires, or to get us to vote a certain way. The pro-anorexia chatbots on Character.AI are merely the latest, most outrageous example. There is no good reason why they should not be banned immediately.
Yet time is running out, because generative AI models have been developing faster than expected — and they are generally accelerating in the wrong direction. The “Godfather of AI,” the Nobel laureate cognitive scientist Geoffrey Hinton, continues to warn that AI could lead to human extinction.
“My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely. The only thing that can force those big companies to do more research on safety is government regulation,” Hinton said.
Given big tech’s consistent failure to uphold ethical standards, it is folly to expect these companies to police themselves. Google poured US$2.7 billion into Character.AI last year despite its well-known problems. However, while regulation is obviously needed, AI is a global phenomenon, which means we should strive for global regulation, anchored in a new global enforcement mechanism, such as an international data-based systems agency at the UN, as I have proposed.
The fact that something is possible does not mean that it is desirable. Humans bear the responsibility to decide which technologies, which innovations, and which forms of progress are to be realized and scaled up, and which ought not be. It is our responsibility to design, produce, use, and govern AI in ways that respect human rights and facilitate a more sustainable future for humanity and the planet.
Sewell would almost certainly still be alive if global regulation promoting human rights-based AI had been in place, and if a global institution had been established to monitor innovations in this domain. Ensuring that human rights and the rights of the child are respected requires governance of technological systems' entire life cycle, from design and development to production, distribution, and use.
Since humans already know that AI can kill, there is no excuse for remaining passive as the technology continues to advance, with more unregulated models being released to the public every month. Whatever benefits these technologies might someday provide, they would never be able to compensate for the loss that all who loved Sewell have already suffered.
Peter G. Kirchschlager, professor of ethics and director of the Institute of Social Ethics at the University of Lucerne, is a visiting professor at the Federal Institute of Technology Zurich.
Copyright: Project Syndicate