Artificial intelligence systems capable of feelings or self-awareness are at risk of being harmed if the technology is developed irresponsibly, according to an open letter signed by AI practitioners and thinkers including Sir Stephen Fry.
More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, as rapid advances raise concerns that such systems could be considered sentient.
The principles include prioritizing research on understanding and assessing consciousness in AIs, in order to prevent “mistreatment and suffering.”
The other principles are: setting constraints on developing conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.
The letter’s signatories include academics such as Sir Anthony Finkelstein at the University of London and AI professionals at companies including Amazon and the advertising group WPP.
It has been published alongside a research paper that outlines the principles. The paper argues that conscious AI systems could be built in the near future — or at least ones that give the impression of being conscious.
“It may be the case that large numbers of conscious systems could be created and caused to suffer,” the researchers say, adding that if powerful AI systems were able to reproduce themselves it could lead to the creation of “large numbers of new beings deserving moral consideration.”
The paper, written by Oxford University’s Patrick Butlin and Theodoros Lappas of the Athens University of Economics and Business, adds that even companies not intending to create conscious systems will need guidelines in case of “inadvertently creating conscious entities.”
It acknowledges that there is widespread uncertainty and disagreement over defining consciousness in AI systems and whether it is even possible, but says it is an issue that “we must not ignore.”
Other questions raised by the paper focus on what to do with an AI system if it is defined as a “moral patient” — an entity that matters morally “in its own right, for its own sake.” In that scenario, it questions if destroying the AI would be comparable to killing an animal.
The paper, published in the Journal of Artificial Intelligence Research, also warns that a mistaken belief that AI systems are already conscious could lead to a waste of political energy on misguided efforts to promote their welfare.
The paper and letter were organized by Conscium, a research organization part-funded by WPP and co-founded by WPP’s chief AI officer, Daniel Hulme.
Last year a group of senior academics argued there was a “realistic possibility” that some AI systems will be conscious and “morally significant” by 2035.
In 2023, Sir Demis Hassabis, the head of Google’s AI program and a Nobel prize winner, said AI systems were “definitely” not sentient currently but could be in the future.
“Philosophers haven’t really settled on a definition of consciousness yet but if we mean sort of self-awareness, these kinds of things, I think there’s a possibility AI one day could be,” he said in an interview with US broadcaster CBS.