If you have heard the term artificial general intelligence, or AGI, it probably makes you think of a humanish intelligence, such as the honey-voiced AI love interest in the movie Her, or a superhuman one, like Skynet from The Terminator. At any rate, something science-fictional and far off.
However, a growing number of people in the tech industry and even outside it are prophesying AGI or “human-level” AI in the very near future.
These people might believe what they are saying, but it is at least partly hype designed to get investors to throw billions of dollars at AI companies. Yes, big changes are almost certainly on the way, and you should be preparing for them, but for most of us, calling them AGI is at best a distraction and at worst deliberate misdirection. Business leaders and policymakers need a better way to think about what is coming. Fortunately, there is one.
OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei and Elon Musk of xAI (the thing he is least famous for) have all said recently that AGI, or something like it, would arrive within a couple of years. More measured voices such as Google DeepMind’s CEO Demis Hassabis and Meta chief AI scientist Yann LeCun see it being at least five to 10 years out. More recently, the meme has gone mainstream, with journalists including the New York Times’ Ezra Klein and Kevin Roose arguing that society should get ready for something like AGI in the very near future.
I say “something like” because oftentimes, these people flirt with the term AGI and then retreat to a more equivocal phrasing such as “powerful AI.” What they might mean by it varies enormously — from AI that can do almost any individual cognitive task as well as a human but might still be quite specialized (Klein, Roose), to doing Nobel Prize-level work (Amodei, Altman), to thinking like an actual human in all respects (Hassabis), to operating in the physical world (LeCun), or simply being “smarter than the smartest human” (Musk).
So, are any of these “really” AGI?
The truth is, it does not matter. If there is even such a thing as AGI — which, I will argue, there is not — it is not going to be a sharp threshold we cross. To the people who tout it, AGI is now simply shorthand for the idea that something very disruptive is imminent: Software that cannot merely code an app, draft a school assignment, write bedtime stories for your children or book a holiday — but might throw lots of people out of work, make major scientific breakthroughs, and provide frightening power to hackers, terrorists, corporations and governments.
This prediction is worth taking seriously, and calling it AGI does have a way of making people sit up and listen. However, instead of talking about AGI or human-level AI, let us talk about different types of AI, and what they would and would not be able to do.
Some form of human-level intelligence has been the goal ever since the AI race started 70 years ago. For decades, the best that could be done was “narrow AI” like IBM’s chess-winning Deep Blue, or Google’s AlphaFold, which predicts protein structures and won its creators — including Hassabis — a share of the chemistry Nobel last year. Both were far beyond human-level, but only for one highly specific task.
If AGI now suddenly seems closer, it is because the large language models (LLMs) underlying ChatGPT and its ilk appear to be more humanlike and more general-purpose.
LLMs interact with us in plain language. They can give at least plausible-looking answers to most questions. They write pretty good fiction, at least when it is very short — for longer stories, they lose track of characters and plot details. They are scoring ever higher on benchmark tests of skills such as coding, medical or bar exams, and math problems. They are getting better at step-by-step reasoning and more complex tasks. When the most gung ho AI folks talk about AGI being around the corner, it is basically a more advanced form of these models they are talking about.
None of this is to say LLMs would not have big impacts. Some software companies already plan to hire fewer engineers. Most tasks that follow a similar process every time — making medical diagnoses, drafting legal documents, writing research briefs, creating marketing campaigns and so on — would be things a human worker can at least partly outsource to AI. Some already are.
That would make those workers more productive, which could lead to the elimination of some jobs. Although not necessarily. Geoffrey Hinton, the Nobel Prize-winning computer scientist known as the godfather of AI, infamously predicted that AI would soon make radiologists obsolete. Today, there is a shortage of them in the US.
However, in an important sense, LLMs are still “narrow AI.” They can ace one job while being lousy at a seemingly adjacent one — a phenomenon known as the jagged frontier.
For example, an AI might pass a bar exam with flying colors, but botch turning a conversation with a client into a legal brief. It might answer some questions perfectly, but regularly “hallucinate” (i.e. invent facts) on others. LLMs do well with problems that can be solved using clear-cut rules, but in some newer tests where the rules were more ambiguous, models that scored 80 percent or more on other benchmarks struggled even to reach single figures.
Even if LLMs started to beat these tests, too, they would still be narrow. It is one thing to tackle a defined, limited problem, however difficult. It is quite another to take on what people actually do in a typical workday.
Even a mathematician does not just spend all day doing math problems. People do countless things that cannot be benchmarked, because they are not bounded problems with right or wrong answers. We weigh conflicting priorities, ditch failing plans, make allowances for incomplete knowledge, develop workarounds, act on hunches, read the room and, above all, interact constantly with the highly unpredictable and irrational intelligences that are other human beings.
Indeed, one argument against LLMs ever being able to do Nobel Prize-level work is that the most brilliant scientists are not those who know the most, but those who challenge conventional wisdom, propose unlikely hypotheses and ask questions nobody else has thought to ask. That is pretty much the opposite of an LLM, which is designed to find the likeliest consensus answer based on all the available information.
So, humans might one day be able to build an LLM that can do almost any individual cognitive task as well as a human. It might be able to string together a whole series of tasks to solve a bigger problem. By some definitions, it would be human-level AI. However, it would still be as dumb as a brick if you put it to work in an office.
A core problem with the idea of AGI is that it is based on a highly anthropocentric notion of what intelligence is.
Most AI research treats intelligence as a more or less linear measure. It assumes that at some point, machines would reach human-level or “general” intelligence, and then perhaps “superintelligence,” at which point they either become Skynet and destroy us or turn into benevolent gods who take care of all our needs.
However, there is a strong argument that human intelligence is not in fact “general.” Our minds have evolved for the very specific challenge of being us. Our body size and shape, the kinds of food we can digest, the predators we once faced, the size of our kin groups, the way we communicate, even the strength of gravity and the wavelengths of light we perceive have all gone into determining what our minds are good at. Other animals have many forms of intelligence we lack: A spider can distinguish predators from prey in the vibrations of its web, an elephant can remember migration routes thousands of miles long, and in an octopus, each tentacle literally has a mind of its own.
In a 2017 essay for Wired, Kevin Kelly argued that we should think of human intelligence not as being at the top of some evolutionary tree, but as just one point within a cluster of Earth-based intelligences that itself is a tiny smear in a universe of all possible alien and machine intelligences. This blows apart the “myth of a superhuman AI” that can do everything far better than us, he wrote. Rather, we should expect “many hundreds of extra-human new species of thinking, most different from humans, none that will be general purpose, and none that will be an instant god solving major problems in a flash.”
This is a feature, not a bug. For most needs, specialized intelligences would, I suspect, be cheaper and more reliable than a jack-of-all-trades that resembles us as closely as possible. Not to mention that they are less likely to rise up and demand their rights.
None of this is to dismiss the huge leaps we can expect from AI in the next few years.
One leap that has already begun is “agentic” AI. Agents are still based on LLMs, but instead of merely analyzing information, they can carry out actions such as making a purchase or filling in a Web form. Zoom plans soon to launch agents that can scour a meeting transcript to create action items, draft follow-up e-mails and schedule the next meeting. So far, the performance of AI agents is mixed, but as with LLMs, expect it to dramatically improve to the point where quite sophisticated processes can be automated.
Some might claim this is AGI. However, once again, that is more confusing than enlightening. Agents would not be “general,” but more like personal assistants with extremely one-track minds. You might have dozens of them. Even if they make your productivity skyrocket, managing them would be like juggling dozens of different software apps — much like you are already doing. Perhaps you would get an agent to manage all your agents, but it too would be restricted to whatever goals you set it.
What would happen when millions or billions of agents are interacting together online is anybody’s guess. Perhaps, just as trading algorithms have set off inexplicable market “flash crashes,” they would trigger one another in unstoppable chain reactions that paralyze half the Internet. More worryingly, malicious actors could mobilize swarms of agents to sow havoc.
Still, LLMs and their agents are just one type of AI. Within a few years, we might have fundamentally different kinds. LeCun’s lab at Meta is one of several that are trying to build what is called embodied AI.
The theory is that by putting AI in a robot body in the physical world, or in a simulation, it can learn about objects, location and motion — the building blocks of human understanding from which higher concepts can flow. By contrast, LLMs, trained purely on vast amounts of text, ape human thought processes on the surface but show no evidence that they actually have them, or even that they think in any meaningful sense.
Would embodied AI lead to truly thinking machines, or just very dexterous robots? Right now, that is impossible to say. Even if it is the former, it would still be misleading to call it AGI.
To go back to the point about evolution: Just as it would be absurd to expect a human to think like a spider or an elephant, it would be absurd to expect an oblong robot with six wheels and four arms that does not sleep, eat or have sex — let alone form friendships, wrestle with its conscience or contemplate its own mortality — to think like a human. It might be able to carry grandma from the living room to the bedroom, but it would conceive of and perform the task utterly differently from the way we would.
Many of the things AI would be capable of, we cannot even imagine today. The best way to track and make sense of that progress would be to stop trying to compare it to humans, or to anything from the movies, and instead just keep asking: What does it actually do?
Gideon Lichfield is the former editor-in-chief of Wired magazine and MIT Technology Review. He writes Futurepolis, a newsletter on the future of democracy. This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.