The Future of Life Institute’s open letter demanding a six-month precautionary pause on artificial-intelligence (AI) development has already been signed by thousands of high-profile figures, including Elon Musk. The signatories worry that AI labs are “locked in an out-of-control race” to develop and deploy increasingly powerful systems that no one — including their creators — can understand, predict or control.
What explains this outburst of panic among a certain cohort of elites? Control and regulation are obviously at the center of the story, but whose? During the proposed half-year pause when humanity can take stock of the risks, who will stand for humanity? Since AI labs in China, India and Russia will continue their work (perhaps in secret), a global public debate on the issue is inconceivable.
Still, we should consider what is at stake here. In his 2015 book, Homo Deus, the historian Yuval Harari predicted that the most likely outcome of AI would be a radical division — much stronger than the class divide — within human society. Soon enough, biotechnology and computer algorithms would join their powers in producing “bodies, brains and minds,” resulting in a widening gap “between those who know how to engineer bodies and brains and those who do not.” In such a world, “those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction.”
The panic reflected in the AI letter stems from the fear that even those who are on the “train of progress” will be unable to steer it. Our current digital feudal masters are scared. However, what they want is not public debate, but rather an agreement among governments and tech corporations to keep power where it belongs.
A massive expansion of AI capabilities is a serious threat to those in power — including those who develop, own and control AI. It points to nothing less than the end of capitalism as we know it, manifest in the prospect of a self-reproducing AI system that will need less and less input from human agents (algorithmic market trading is merely the first step in this direction). The choice left to us will be between a new form of communism and uncontrollable chaos.
OLD PARADOX
The new chatbots will offer many lonely (or not so lonely) people endless evenings of friendly dialogue about movies, books, cooking or politics. To reuse an old metaphor of mine, what people will get is the AI version of decaffeinated coffee or sugar-free soda: a friendly neighbor with no skeletons in its closet, an Other that will simply accommodate itself to your own needs.
There is a structure of fetishist disavowal here: “I know very well that I am not talking to a real person, but it feels as though I am — and without any of the accompanying risks.”
In any case, a close examination of the AI letter shows it to be yet another attempt at prohibiting the impossible. This is an old paradox: It is impossible for us, as humans, to participate in a post-human future, so we must prohibit its development.
To orient ourselves around these technologies, we should ask Lenin’s old question: Freedom for whom to do what? In what sense were we free before? Were we not already controlled much more than we realized? Instead of complaining about the threat to our freedom and dignity in the future, perhaps we should first consider what freedom means now. Until we do this, we will act like hysterics who, as French psychoanalyst Jacques Lacan said, are desperate for a master, but only one that we can dominate.
The futurist Ray Kurzweil predicted that, owing to the exponential nature of technological progress, we will soon be dealing with “spiritual” machines that will not only display all the signs of self-awareness, but also far surpass human intelligence.
However, one should not confuse this “post-human” stance with the paradigmatically modern preoccupation with achieving total technological domination over nature. What we are witnessing, instead, is a dialectical reversal of this process.
GOD OR DEVIL
Today’s “post-human” sciences are no longer about domination. Their credo is surprise: What kind of contingent, unplanned emergent properties might “black-box” AI models acquire for themselves? No one knows, and therein lies the thrill — or, indeed, the banality — of the entire enterprise.
Hence, earlier this century, the French philosopher-engineer Jean-Pierre Dupuy discerned in the new robotics, genetics, nanotechnology, artificial life and AI a strange inversion of the traditional anthropocentric arrogance that technology enables: “How are we to explain that science became such a ‘risky’ activity that, according to some top scientists, it poses today the principal threat to the survival of humanity? Some philosophers reply to this question by saying that Descartes’ dream — ‘to become master and possessor of nature’ — has turned wrong, and that we should urgently return to the ‘mastery of mastery.’ They have understood nothing. They don’t see that the technology profiling itself at our horizon through ‘convergence’ of all disciplines aims precisely at nonmastery. The engineer of tomorrow will not be a sorcerer’s apprentice because of his negligence or ignorance, but by choice.”
Humanity is creating its own god or devil. While the outcome cannot be predicted, one thing is certain. If something resembling “post-humanity” emerges as a collective fact, our worldview will lose all three of its defining, overlapping subjects: humanity, nature and divinity.
Our identity as humans can exist only against the background of impenetrable nature, but if life becomes something that can be fully manipulated by technology, it will lose its “natural” character. A fully controlled existence is one bereft of meaning, not to mention serendipity and wonder.
The same, of course, holds for any sense of the divine. The human experience of “god” has meaning only from the standpoint of human finitude and mortality. Once we become Homo deus and create properties that seem “supernatural” from our old human standpoint, “gods” as we knew them will disappear. The question is what, if anything, will be left. Will we worship the AIs that we created?
There is every reason to worry that tech-gnostic visions of a post-human world are ideological fantasies obfuscating the abyss that awaits us. Needless to say, it would take more than a six-month pause to ensure that humans do not become irrelevant, and their lives meaningless, in the not-too-distant future.
Slavoj Zizek, professor of philosophy at the European Graduate School, is international director of the Birkbeck Institute for the Humanities at the University of London and the author, most recently, of Heaven in Disorder.
Copyright: Project Syndicate