The Future of Life Institute’s open letter demanding a six-month precautionary pause on artificial-intelligence (AI) development has already been signed by thousands of high-profile figures, including Elon Musk. The signatories worry that AI labs are “locked in an out-of-control race” to develop and deploy increasingly powerful systems that no one, including their creators, can understand, predict or control.
What explains this outburst of panic among a certain cohort of elites? Control and regulation are obviously at the center of the story, but whose? During the proposed half-year pause when humanity can take stock of the risks, who will stand for humanity? Since AI labs in China, India and Russia will continue their work (perhaps in secret), a global public debate on the issue is inconceivable.
Still, we should consider what is at stake here. In his 2015 book, Homo Deus, the historian Yuval Noah Harari predicted that the most likely outcome of AI would be a radical division, much stronger than the class divide, within human society. Soon enough, biotechnology and computer algorithms would join their powers in producing “bodies, brains and minds,” resulting in a widening gap “between those who know how to engineer bodies and brains and those who do not.” In such a world, “those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction.”
The panic reflected in the AI letter stems from the fear that even those who are on the “train of progress” will be unable to steer it. Our current digital feudal masters are scared. However, what they want is not public debate, but rather an agreement among governments and tech corporations to keep power where it belongs.
A massive expansion of AI capabilities is a serious threat to those in power — including those who develop, own and control AI. It points to nothing less than the end of capitalism as we know it, manifest in the prospect of a self-reproducing AI system that will need less and less input from human agents (algorithmic market trading is merely the first step in this direction). The choice left to us will be between a new form of communism and uncontrollable chaos.
OLD PARADOX
The new chatbots will offer many lonely (or not so lonely) people endless evenings of friendly dialogue about movies, books, cooking or politics. To reuse an old metaphor of mine, what people will get is the AI version of decaffeinated coffee or sugar-free soda: a friendly neighbor with no skeletons in its closet, an Other that will simply accommodate itself to your own needs.
There is a structure of fetishist disavowal here: “I know very well that I am not talking to a real person, but it feels as though I am — and without any of the accompanying risks.”
In any case, a close examination of the AI letter shows it to be yet another attempt at prohibiting the impossible. This is an old paradox: It is impossible for us, as humans, to participate in a post-human future, so we must prohibit its development.
To orient ourselves around these technologies, we should ask Lenin’s old question: Freedom for whom to do what? In what sense were we free before? Were we not already controlled much more than we realized? Instead of complaining about the threat to our freedom and dignity in the future, perhaps we should first consider what freedom means now. Until we do this, we will act like hysterics who, as French psychoanalyst Jacques Lacan said, are desperate for a master, but only one that we can dominate.
The futurist Ray Kurzweil predicted that, owing to the exponential nature of technological progress, we will soon be dealing with “spiritual” machines that will not only display all the signs of self-awareness, but also far surpass human intelligence.
However, one should not confuse this “post-human” stance with the paradigmatically modern preoccupation with achieving total technological domination over nature. What we are witnessing, instead, is a dialectical reversal of this process.
GOD OR DEVIL
Today’s “post-human” sciences are no longer about domination. Their credo is surprise: What kind of contingent, unplanned emergent properties might “black-box” AI models acquire for themselves? No one knows, and therein lies the thrill — or, indeed, the banality — of the entire enterprise.
Hence, earlier this century, the French philosopher-engineer Jean-Pierre Dupuy discerned in the new robotics, genetics, nanotechnology, artificial life and AI a strange inversion of the traditional anthropocentric arrogance that technology enables: “How are we to explain that science became such a ‘risky’ activity that, according to some top scientists, it poses today the principal threat to the survival of humanity? Some philosophers reply to this question by saying that Descartes’ dream — ‘to become master and possessor of nature’ — has turned wrong, and that we should urgently return to the ‘mastery of mastery.’ They have understood nothing. They don’t see that the technology profiling itself at our horizon through ‘convergence’ of all disciplines aims precisely at nonmastery. The engineer of tomorrow will not be a sorcerer’s apprentice because of his negligence or ignorance, but by choice.”
Humanity is creating its own god or devil. While the outcome cannot be predicted, one thing is certain. If something resembling “post-humanity” emerges as a collective fact, our worldview will lose all three of its defining, overlapping subjects: humanity, nature and divinity.
Our identity as humans can exist only against the background of impenetrable nature, but if life becomes something that can be fully manipulated by technology, it will lose its “natural” character. A fully controlled existence is one bereft of meaning, not to mention serendipity and wonder.
The same, of course, holds for any sense of the divine. The human experience of “god” has meaning only from the standpoint of human finitude and mortality. Once we become Homo deus and create properties that seem “supernatural” from our old human standpoint, “gods” as we knew them will disappear. The question is what, if anything, will be left. Will we worship the AIs that we created?
There is every reason to worry that tech-gnostic visions of a post-human world are ideological fantasies obfuscating the abyss that awaits us. Needless to say, it would take more than a six-month pause to ensure that humans do not become irrelevant, and their lives meaningless, in the not-too-distant future.
Slavoj Zizek, professor of philosophy at the European Graduate School, is international director of the Birkbeck Institute for the Humanities at the University of London and the author, most recently, of Heaven in Disorder.
Copyright: Project Syndicate