In the mid-1960s, the mathematician and Bletchley Park cryptographer I. J. Good proposed a thought experiment that has since become the secular gospel of Silicon Valley. If we were to build an “ultra-intelligent machine,” it could then design even better machines, sparking an intelligence explosion that would leave human cognition far behind, he said, adding that the first such machine would be “the last invention that man need ever make.”
Today, that prophecy, once the stuff of science fiction, has become the core objective of the world’s most powerful institutions. Google DeepMind chief executive Demis Hassabis, for example, speaks of “solving intelligence” to “solve everything else.” It is a seductive story. However, even if we assume, for the sake of argument, that future systems can learn, experiment and generate genuinely novel solutions far beyond today’s models, the last-invention thesis still rests on multiple questionable assumptions.
The first is that innovation resembles a frictionless sprint from idea to impact. It does not. The discovery process is more like a chain, only as strong as its weakest link.
These weak links define much of human progress. In 1986, the space shuttle Challenger broke apart 73 seconds after launch, not because of a failure in its world-class engines or software, but because a small rubber seal failed in the unusually cold launch-day temperatures (as the Nobel laureate physicist Richard Feynman brilliantly exposed at hearings into the disaster). The “O-ring” has since become a metaphor for the kinds of critical bottlenecks that could sink even the most sophisticated systems.
Discovery works the same way. Artificial general intelligence (AGI), generally understood as a model that can perform any cognitive task, might dramatically accelerate early-stage medical research, but if it cannot navigate clinical trials, manufacture at scale, or secure regulatory approval, the “breakthrough” never becomes an invention that improves lives. When the early stages of discovery are automated, the human role does not vanish; it simply migrates toward the remaining bottlenecks, where judgment, tacit knowledge and practical know-how are what matter.
This complication points us to an even bigger one: AGI would not just have to outperform humans; it would have to outperform humans using AGI. For the last-invention story to hold, people would have to become unnecessary even as partners or supervisors to AIs.
Intelligence is not a quantity: “More” does not simply replace “less.” Even a very capable AGI might be different in kind from a human: exceptional at speed and pattern-finding, but fragile when confronted with rare cases. Different strengths imply different blind spots, and when those do not overlap, combining human and machine judgment continues to beat either one alone.
The game of go offers a useful reminder. After AlphaGo beat Lee Sedol 4-1 in 2016, its superiority to human players seemed settled. However, in 2023, researchers showed that by steering top engines into unusual positions outside their training data, a human amateur with modest computing assistance could reliably defeat the best programs. Apparent supremacy can still hide systematic weaknesses, and that is often where human input adds the most value.
A third problem concerns knowledge itself. The last-invention thesis assumes that all relevant information can be codified, but this is usually not the case. Few inventions changed the world more than the Ford Model T, which transformed the automobile into a mass-market product. However, Henry Ford’s achievement lay not just in a new design. More important was his approach to organizing production.
That is why delegations from Italy, Germany, the Soviet Union and elsewhere traveled to study Ford’s factories firsthand. The crucial know-how could not be gleaned from any blueprint. It was embedded in routines, sequencing, tooling and day-to-day problem-solving by those on the shop floor. Similarly, Toyota’s lean-production system was difficult to replicate because it is embedded in human routines and culture, not hardware.
More intelligence does not automatically overcome the “knowledge problem” — the fact that what makes complex systems work is dispersed, local, often unspoken information. If knowledge were frictionlessly portable, industries would not cluster so intensely, as in Silicon Valley or the City of London.
AI enthusiasts might respond by saying, “Fine, put sensors, cameras and microphones everywhere, and we would codify the missing knowledge.” However, this strategy assumes that people being monitored would openly communicate and share the knowledge they generate, and it assumes away politics and the law. Recording “everything, everywhere” would collide with the EU’s General Data Protection Regulation, which has become a blueprint for privacy regulation worldwide.
Moreover, the EU’s AI Act does not give a free pass to the surveillance-heavy deployments that would be necessary to harvest human know-how at scale. Even if it did, one cannot assume that all human know-how, let alone judgment, is so easily digitized.
Ultimately, AGI could automate intelligence, but the process of invention depends on something more. Often, the hard part is not thinking up a solution but translating it into practice. You need local know-how, trusted routines, supply chains and institutional capacity to make something work reliably in the real world. More intelligence does not automatically produce those complements.
AGI might change discovery by making expertise cheaper and experimentation faster. However, “humanity’s last invention” is a much stronger claim. For it to be true, we would need a world where practical know-how is fully transferable through digital channels and where responsibility can be automated along with cognition. That is not the world we live in.
As intelligence gets cheaper, the assets that command the highest value could change. The advantage would go to those who can deliver outcomes. Humans are not becoming redundant; they are becoming the world’s most decisive bottlenecks.
Carl Benedikt Frey, associate professor of AI & Work at the Oxford Internet Institute and director of the Future of Work Program at the Oxford Martin School, is the author of How Progress Ends: Technology, Innovation and the Fate of Nations.
Copyright: Project Syndicate