The arrival of a new generation of artificial intelligence (AI) chatbots and apps has fueled hysteria that humans might soon become obsolete, or worse, the victims of a Skynet scenario, in which our AI creations become sentient and turn against us. Even the biggest AI boosters recently called for a moratorium on further research until we can better assess the risks.
The perils posed by today’s technology might well be new and noteworthy, but our anxiety is not. For two centuries, humankind has fretted about what might happen if we endow our creations with intelligence, fearing they will go rogue, if not replace us entirely.
The idea that artificial helpers could rebel has many antecedents, including variations on the story of the sorcerer’s apprentice, popularized by Johann Wolfgang von Goethe and later Walt Disney, as well as the Jewish golem, mythical clay creatures brought to life by mystical incantations.
Although folk tales held that most golems served humanity, more secular versions of the story circulating in early 19th-century Prague depicted a far more disobedient, destructive monster.
This version of the golem likely informed one of the first modern visions of artificial life and intelligence: Mary Shelley’s Frankenstein, published in 1818. Unlike Hollywood’s rendering of the story, Shelley’s original tale recounts a hyper-intelligent creature that absorbs the world around him, swiftly learning to speak, read poetry and grasp human emotions. Humans, however, have no appreciation for those feats, seeing only a monster, and so the “monster” eventually turns on his creator.
Shelley’s story inspired what Isaac Asimov would derisively dub the “Frankenstein complex” — the fear that our doppelgangers will become sentient and replace or destroy their human creators. Still, Shelley’s monster was a thing of flesh and blood, not steel and circuitry. It was not a murderous android.
How, then, did we get from Frankenstein to The Terminator? Blame Charles Darwin. When Darwin’s first writings on evolution appeared in 1859, it became clear that humanity, far from walking out of the Garden of Eden fully formed, instead had been the product of endless evolution. This raised the equally troubling possibility that humanity, like other long-gone species, might well be supplanted by something superior.
From there it was only a short conceptual leap to imagine that machines, already stronger than humans, might one day become smarter, too. Four years after the publication of Darwin’s On the Origin of Species, British writer Samuel Butler published an essay under a pseudonym that anticipated virtually all of our current anxieties about AI run amok.
In Darwin Among the Machines, Butler observed that “we are ourselves creating our own successors ... we are daily giving [the machines] greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race.”
When that process came to its culmination, “man will have become to the machine what the horse and the dog are to man,” he wrote.
Butler’s dark vision of a future dominated by immortal, hyper-intelligent machines would resurface in his widely read utopian novel Erewhon. The title is an anagram of “nowhere”; the novel told the story of a lost primitive land where technology was conspicuously absent. The narrator eventually learns that the evolution of machines had been deliberately halted and reversed in the distant past to prevent “the ultimate development of mechanical consciousness.”
The inhabitants of Erewhon had concluded that a six-month moratorium would not do.
Not every fictional society was so lucky. In the late 1880s, British novelist Reginald Colebrooke Reade wrote twin dystopian novels that described a Terminator-style scenario, complete with intelligent machines that revolt against the human race, nearly driving it to extinction. These works were products of their age: the omniscient, Skynet-style machine intelligence begins with a railroad locomotive that becomes sentient, eventually enlisting all machines in its revolution against humanity.
These and a handful of other works of science fiction anticipated the more famous work of Prague playwright Karel Capek, whose play R.U.R. gave us the word “robot.” Capek’s story told of the rise and fall of Rossum’s Universal Robots, a firm that creates humanoid machines that become ever more lifelike. Capek described his play as “a transformation of the golem legend into modern form... Robots are golem made with factory mass production.”
In the play, the robots realize they are superior to their makers and opt to kill off the humans, becoming increasingly skilled at the task. At one point, one of the humans, reading a threatening missive from the robots, marvels at the machines’ growing facility with language.
“Good heavens, who taught them these phrases?” he asks.
Capek’s play, translated into many languages, spawned an entire dystopian genre of science fiction in which intelligent machines, created to serve humankind, revolt against their masters. As time went on, additional ingredients helped flesh out fears of AI still further.
The first new ingredient was the development of the computer and the associated research into AI. Anxieties about these developments obsessed science-fiction writers in the post-World War II era. Some, like Asimov, imagined a world where AI would be servant, not master, but most writers, like Frank Herbert, who published Dune in 1965, embraced the Frankenstein complex.
Herbert’s sprawling epic, set thousands of years in the future, described a world after the “Butlerian Jihad” — a war against thinking machines. This resulted in an Erewhonian world where the one overriding law declared: “Thou shalt not make a machine in the likeness of a human mind.”
Hollywood got into the act as well with Stanley Kubrick’s 2001: A Space Odyssey, starring a murderous computer.
However, Kubrick’s HAL was a piker compared to the next generation of fictional sentient computers. A decade before Skynet became sentient and destroyed humanity in the Terminator franchise, Colossus: The Forbin Project told the “frightening story of the day man built himself out of existence” by creating Colossus, a super-intelligent computer given control over the nation’s nuclear arsenal.
Colossus — the name a nod to Alan Turing’s wartime code-cracking computer — quickly becomes self-aware and links up with its Soviet counterpart, which has also become sentient. Together the computers threaten to nuke the planet unless they are put in charge of it.
The humans try to rebel, but fail, becoming the dependents of all-powerful computer babysitters armed with nuclear weapons.
Though our angst about AI has grown even creepier in the past few years — here’s looking at you, M3gan — what is far more interesting is how little has changed in our thinking for close to a century. All the anxieties now making the rounds have a long and storied history, from fears of human obsolescence to predictions that AI will become a willful, malevolent force.
Dramatic advances in AI over the past year have edged us closer to the kinds of machines envisioned in many of these apocalyptic stories. You may or may not find it comforting that people have been pondering these frightening possibilities for more than a century, but knowing this deep history of skepticism at least helps put current reactions to AI in perspective.
And that is something that, for now at least, only a human can do.
Stephen Mihm is a professor of history at the University of Georgia.