Silicon Valley’s favorite philosophy, longtermism, has helped to frame the debate on artificial intelligence around the idea of human extinction.
But increasingly vocal critics warn that the philosophy is dangerous, and that its obsession with extinction distracts from real problems associated with AI, such as data theft and biased algorithms.
Author Emile Torres, a former longtermist turned critic of the movement, said that the philosophy rested on the kind of principles used in the past to justify mass murder and genocide.
Yet the movement and linked ideologies like transhumanism and effective altruism hold huge sway in universities from Oxford to Stanford and throughout the tech sector.
Venture capitalists like Peter Thiel and Marc Andreessen have invested in life-extension companies and other pet projects linked to the movement.
Elon Musk and OpenAI’s Sam Altman have signed open letters warning that AI could make humanity extinct — though they stand to benefit by arguing only their products can save us.
Ultimately, critics say, this fringe movement wields far too much influence over public debates on the future of humanity.
‘REALLY DANGEROUS’
Longtermists believe we are duty-bound to try to produce the best outcomes for the greatest number of humans.
In that, they are no different from 19th-century utilitarians, but longtermists have a much longer timeline in mind.
They look to the far future and see trillions upon trillions of humans floating through space, colonizing new worlds.
They argue that we owe the same duty to each of these future humans as we do to anyone alive today.
And because there are so many of them, they carry much more weight than today’s specimens.
This kind of thinking makes the ideology “really dangerous,” said Torres, author of Human Extinction: A History of the Science and Ethics of Annihilation.
“Any time you have a utopian vision of the future marked by near infinite amounts of value, and you combine that with a sort of utilitarian mode of moral thinking where the ends can justify the means, it’s going to be dangerous,” Torres said.
If a superintelligent machine were about to spring to life with the potential to destroy humanity, longtermists would be bound to oppose it no matter the consequences.
When asked in March by a user of Twitter, the platform now known as X, how many people could die to stop this happening, longtermist ideologue Eliezer Yudkowsky replied that there only needed to be enough people “to form a viable reproductive population.”
“So long as that’s true, there’s still a chance of reaching the stars someday,” he wrote, though he later deleted the message.
EUGENICS CLAIMS
Longtermism grew out of work done by Swedish philosopher Nick Bostrom in the 1990s and 2000s around existential risk and transhumanism — the idea that humans can be augmented by technology.
Academic Timnit Gebru has pointed out that transhumanism was linked to eugenics from the start.
British biologist Julian Huxley, who coined the term transhumanism, was also president of the British Eugenics Society in the 1950s and 1960s.
“Longtermism is eugenics under a different name,” Gebru wrote on X last year.
Bostrom has long faced accusations of supporting eugenics after he listed as an existential risk “dysgenic pressures,” essentially less-intelligent people procreating faster than their smarter peers.
The philosopher, who runs the Future of Humanity Institute at the University of Oxford, apologized in January after admitting he had written racist posts on an Internet forum in the 1990s.
“Do I support eugenics? No, not as the term is commonly understood,” he wrote in his apology, pointing out it had been used to justify “some of the most horrific atrocities of the last century.”
‘MORE SENSATIONAL’
Despite these troubles, longtermists like Yudkowsky, a high school dropout known for writing Harry Potter fan-fiction and promoting polyamory, continue to be feted.
Altman has credited him with getting OpenAI funded and suggested in February that he deserved a Nobel Peace Prize.
But Gebru, Torres and many others are trying to refocus on harms like theft of artists’ work, bias and concentration of wealth in the hands of a few corporations.
Torres, who uses the pronoun they, said while there were true believers like Yudkowsky, much of the debate around extinction was motivated by profit.
“Talking about human extinction, about a genuine apocalyptic event in which everybody dies, is just so much more sensational and captivating than Kenyan workers getting paid US$1.32 an hour, or artists and writers being exploited,” they said.