Artificial intelligence systems capable of feelings or self-awareness are at risk of being harmed if the technology is developed irresponsibly, according to an open letter signed by AI practitioners and thinkers including Sir Stephen Fry.
More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, as rapid advances raise concerns that such systems could be considered sentient.
The principles include prioritizing research on understanding and assessing consciousness in AIs, in order to prevent “mistreatment and suffering.”
The other principles are: setting constraints on developing conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.
The letter’s signatories include academics such as Sir Anthony Finkelstein at the University of London and AI professionals at companies including Amazon and the advertising group WPP.
It has been published alongside a research paper that outlines the principles. The paper argues that conscious AI systems could be built in the near future — or at least ones that give the impression of being conscious.
“It may be the case that large numbers of conscious systems could be created and caused to suffer,” the researchers say, adding that if powerful AI systems were able to reproduce themselves, it could lead to the creation of “large numbers of new beings deserving moral consideration.”
The paper, written by Oxford University’s Patrick Butlin and Theodoros Lappas of the Athens University of Economics and Business, adds that even companies not intending to create conscious systems will need guidelines in case of “inadvertently creating conscious entities.”
It acknowledges that there is widespread uncertainty and disagreement over defining consciousness in AI systems and whether it is even possible, but says it is an issue that “we must not ignore.”
Other questions raised by the paper focus on what to do with an AI system if it were defined as a “moral patient” — an entity that matters morally “in its own right, for its own sake.” In that scenario, it asks whether destroying the AI would be comparable to killing an animal.
The paper, published in the Journal of Artificial Intelligence Research, also warns that a mistaken belief that AI systems are already conscious could waste political energy on misguided efforts to promote their welfare.
The paper and letter were organized by Conscium, a research organization part-funded by WPP and co-founded by WPP’s chief AI officer, Daniel Hulme.
Last year a group of senior academics argued there was a “realistic possibility” that some AI systems will be conscious and “morally significant” by 2035.
In 2023, Sir Demis Hassabis, the head of Google’s AI program and a Nobel Prize winner, said AI systems were “definitely” not sentient at the time, but could be in the future.
“Philosophers haven’t really settled on a definition of consciousness yet but if we mean sort of self-awareness, these kinds of things, I think there’s a possibility AI one day could be,” he said in an interview with US broadcaster CBS.