Artificial intelligence systems capable of feelings or self-awareness are at risk of being harmed if the technology is developed irresponsibly, according to an open letter signed by AI practitioners and thinkers including Sir Stephen Fry.
More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, as rapid advances raise concerns that such systems could be considered sentient.
The principles include prioritizing research on understanding and assessing consciousness in AIs, in order to prevent “mistreatment and suffering.”
The other principles are: setting constraints on developing conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.
The letter’s signatories include academics such as Sir Anthony Finkelstein at the University of London and AI professionals at companies including Amazon and the advertising group WPP.
It has been published alongside a research paper that outlines the principles. The paper argues that conscious AI systems could be built in the near future — or at least ones that give the impression of being conscious.
“It may be the case that large numbers of conscious systems could be created and caused to suffer,” the researchers say, adding that if powerful AI systems were able to reproduce themselves it could lead to the creation of “large numbers of new beings deserving moral consideration.”
The paper, written by Oxford University’s Patrick Butlin and Theodoros Lappas of the Athens University of Economics and Business, adds that even companies not intending to create conscious systems will need guidelines in case of “inadvertently creating conscious entities.”
It acknowledges that there is widespread uncertainty and disagreement over defining consciousness in AI systems, and over whether it is even possible, but says it is an issue that “we must not ignore.”
Other questions raised by the paper focus on what to do with an AI system if it is defined as a “moral patient” — an entity that matters morally “in its own right, for its own sake.” In that scenario, it questions if destroying the AI would be comparable to killing an animal.
The paper, published in the Journal of Artificial Intelligence Research, also warns that a mistaken belief that AI systems are already conscious could lead to wasted political energy, as misguided efforts are made to promote their welfare.
The paper and letter were organized by Conscium, a research organization part-funded by WPP and co-founded by WPP’s chief AI officer, Daniel Hulme.
Last year a group of senior academics argued there was a “realistic possibility” that some AI systems will be conscious and “morally significant” by 2035.
In 2023, Sir Demis Hassabis, the head of Google’s AI program and a Nobel prize winner, said AI systems were “definitely” not sentient currently but could be in the future.
“Philosophers haven’t really settled on a definition of consciousness yet but if we mean sort of self-awareness, these kinds of things, I think there’s a possibility AI one day could be,” he said in an interview with US broadcaster CBS.