Artificial intelligence systems capable of feelings or self-awareness are at risk of being harmed if the technology is developed irresponsibly, according to an open letter signed by AI practitioners and thinkers including Sir Stephen Fry.
More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, as rapid advances raise concerns that such systems could be considered sentient.
The principles include prioritizing research on understanding and assessing consciousness in AIs, in order to prevent “mistreatment and suffering.”
The other principles are: setting constraints on developing conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.
The letter’s signatories include academics such as Sir Anthony Finkelstein at the University of London and AI professionals at companies including Amazon and the advertising group WPP.
It has been published alongside a research paper that outlines the principles. The paper argues that conscious AI systems could be built in the near future — or at least ones that give the impression of being conscious.
“It may be the case that large numbers of conscious systems could be created and caused to suffer,” the researchers say, adding that if powerful AI systems were able to reproduce themselves it could lead to the creation of “large numbers of new beings deserving moral consideration.”
The paper, written by Oxford University’s Patrick Butlin and Theodoros Lappas of the Athens University of Economics and Business, adds that even companies not intending to create conscious systems will need guidelines in case of “inadvertently creating conscious entities.”
It acknowledges that there is widespread uncertainty and disagreement over defining consciousness in AI systems and whether it is even possible, but says it is an issue that “we must not ignore.”
Other questions raised by the paper focus on what to do with an AI system if it is defined as a “moral patient” — an entity that matters morally “in its own right, for its own sake.” In that scenario, it questions if destroying the AI would be comparable to killing an animal.
The paper, published in the Journal of Artificial Intelligence Research, also warns that a mistaken belief that AI systems are already conscious could lead to a waste of political energy, as misguided efforts are made to promote their welfare.
The paper and letter were organized by Conscium, a research organization part-funded by WPP and co-founded by WPP’s chief AI officer, Daniel Hulme.
Last year a group of senior academics argued there was a “realistic possibility” that some AI systems will be conscious and “morally significant” by 2035.
In 2023, Sir Demis Hassabis, the head of Google’s AI program and a Nobel prize winner, said AI systems were “definitely” not sentient currently but could be in the future.
“Philosophers haven’t really settled on a definition of consciousness yet but if we mean sort of self-awareness, these kinds of things, I think there’s a possibility AI one day could be,” he said in an interview with US broadcaster CBS.