In the mid-1960s, the mathematician and Bletchley Park cryptographer I. J. Good proposed a thought experiment that has since become the secular gospel of Silicon Valley. If we were to build an “ultra-intelligent machine,” it could then design even better machines, sparking an intelligence explosion that would leave human cognition far behind, he said, adding that the first such machine would be “the last invention that man need ever make.”
Today, that prophecy, once the stuff of science fiction, has become the core objective of the world’s most powerful institutions. Google DeepMind CEO Demis Hassabis, for example, speaks of “solving intelligence” to “solve everything else.” It is a seductive story. However, even if we assume, for the sake of argument, that future systems can learn, experiment and generate genuinely novel solutions far beyond today’s models, the last-invention thesis still rests on multiple questionable assumptions.
The first is that innovation resembles a frictionless sprint from idea to impact. It does not. The discovery process is more like a chain, only as strong as its weakest link.
These weak links define much of human progress. In 1986, the space shuttle Challenger broke apart 73 seconds after launch, not because of a failure in its world-class engines or software, but because a small rubber seal failed in the cold temperatures on the morning of the launch (as the Nobel laureate physicist Richard Feynman memorably demonstrated at hearings into the disaster). The “O-ring” has since become a metaphor for the kinds of critical bottlenecks that can sink even the most sophisticated systems.
Discovery works the same way. Artificial general intelligence (AGI), generally understood as a model that can perform any cognitive task, might dramatically accelerate early-stage medical research, but if it cannot navigate clinical trials, manufacture at scale, or secure regulatory approval, the “breakthrough” never becomes an invention that improves lives. When the early stages of discovery are automated, the human role does not vanish; it simply migrates toward the remaining bottlenecks, where judgment, tacit knowledge and practical know-how are what matter.
This complication points us to an even bigger one: AGI would not just have to outperform humans; it would have to outperform humans using AGI. For the last-invention story to hold, people would have to become unnecessary even as partners or supervisors to AIs.
Intelligence is not a quantity: “More” does not simply replace “less.” Even a very capable AGI might be different in kind from a human: exceptional at speed and pattern-finding, but fragile when confronted with rare cases. Different strengths imply different blind spots, and when those do not overlap, combining human and machine judgment continues to beat either one alone.
The game of go offers a useful reminder. After AlphaGo beat Lee Sedol 4-1 in 2016, its superiority to human players seemed settled. However, in 2023, researchers showed that by steering top engines into unusual positions outside their training data, a human amateur with modest computing skills could reliably defeat the best programs. Apparent supremacy can still hide systematic weaknesses, and that is often where human input adds the most value.
A third problem concerns knowledge itself. The last-invention thesis assumes that all relevant information can be codified, but this is usually not the case. Few inventions changed the world more than the Ford Model T, which transformed the automobile into a mass-market product. However, Henry Ford’s achievement lay not just in a new design. More important was his approach to organizing production.
That is why delegations from Italy, Germany, the Soviet Union and elsewhere traveled to study Ford’s factories firsthand. The crucial know-how could not be gleaned from any blueprint. It was embedded in routines, sequencing, tooling and day-to-day problem-solving by those on the shop floor. Similarly, Toyota’s lean-production system was difficult to replicate because it is embedded in human routines and culture, not hardware.
More intelligence does not automatically overcome the “knowledge problem” — the fact that what makes complex systems work is dispersed, local, often unspoken information. If knowledge were frictionlessly portable, industries would not cluster so intensely, as in Silicon Valley or the City of London.
AI enthusiasts might respond by saying, “Fine, put sensors, cameras and microphones everywhere, and we would codify the missing knowledge.” However, this strategy assumes that people being monitored would openly communicate and share the knowledge they generate, and it assumes away politics and the law. Recording “everything, everywhere” would collide with the EU’s General Data Protection Regulation, which has become a blueprint for privacy regulation worldwide.
Moreover, the EU’s AI Act does not give a free pass to the surveillance-heavy deployments that would be necessary to harvest human know-how at scale. Even if it did, one cannot assume that all human know-how, let alone judgment, is so easily digitized.
Ultimately, AGI could automate intelligence, but the process of invention depends on something more. Often, the hard part is not thinking up a solution but translating it into practice. You need local know-how, trusted routines, supply chains and institutional capacity to make something work reliably in the real world. More intelligence does not automatically produce those complements.
AGI might change discovery by making expertise cheaper and experimentation faster. However, “humanity’s last invention” is a much stronger claim. For it to be true, we would need a world where practical know-how is fully transferable through digital channels and where responsibility can be automated along with cognition. That is not the world we live in.
As intelligence gets cheaper, the assets that command the highest value could change. The advantage would go to those who can deliver outcomes. Humans are not becoming redundant; they are becoming the world’s most decisive bottlenecks.
Carl Benedikt Frey, associate professor of AI & Work at the Oxford Internet Institute and director of the Future of Work Program at the Oxford Martin School, is the author of How Progress Ends: Technology, Innovation and the Fate of Nations.
Copyright: Project Syndicate