Thu, May 24, 2018 - Page 9

Preventing an AI apocalypse

AI in its current state is under the control of humans and can fulfill only simple, narrow tasks, but ‘artificial general intelligence,’ which could surpass human cognition, poses a risk

By Seth Baum

Illustration: Mountain People

Recent advances in artificial intelligence (AI) have been nothing short of dramatic. AI is transforming nearly every sector of society, from transportation to medicine to defense. So it is worth considering what will happen as the technology becomes even more advanced.

The apocalyptic view is that AI-driven machines will outsmart humanity, take over the world, and kill us all. This scenario crops up often in science fiction and is easy enough to dismiss, given that humans remain firmly in control.

However, many AI experts take the apocalyptic perspective seriously, and they are right to do so. The rest of society should as well.

To understand what is at stake, consider the distinction between “narrow AI” and “artificial general intelligence” (AGI). Narrow AI can operate only in one or a few domains at a time, so while it might outperform humans in select tasks, it remains under human control.

AGI, by contrast, can reason across a wide range of domains, and thus could replicate many human intellectual skills, while retaining all of the advantages of computers, such as perfect memory recall. Run on sophisticated computer hardware, AGI could outpace human cognition. It is hard to conceive of an upper limit on how advanced AGI could become.

As it stands, most AI is narrow. Indeed, even the most advanced current systems have only limited amounts of generality. For example, while Google DeepMind’s AlphaZero system was able to master Go, chess and shogi — making it more general than most other AI systems, which can be applied only to a single specific activity — it has still demonstrated capability only within the limited confines of certain highly structured board games.

Many knowledgeable people dismiss the prospect of advanced AGI. Some, such as Selmer Bringsjord of Rensselaer Polytechnic Institute and Drew McDermott of Yale University, say that it is impossible for AI to outsmart humanity.

Others, such as Margaret Boden of the University of Sussex and Oren Etzioni of the Allen Institute for Artificial Intelligence, say that human-level AI might be possible in the distant future, but that it is far too early to start worrying about it now.

These skeptics are not marginal figures, like the cranks who try to cast doubt on climate-change science. They are distinguished academics in computer science and related fields, and their opinions must be taken seriously.

Yet other distinguished academics — including David Chalmers of New York University, Yale University’s Allan Dafoe and Stuart Russell of the University of California, Berkeley, Nick Bostrom of Oxford University, and Roman Yampolskiy of the University of Louisville — do worry that AGI could pose a serious or even existential threat to humanity.

With experts lining up on both sides of the debate, the rest of us should keep an open mind.

Moreover, AGI is the focus of significant research and development (R&D). I recently completed a survey of AGI R&D projects, identifying 45 in 30 countries on six continents.

Many active initiatives are based at major corporations such as Baidu, Facebook, Google, Microsoft and Tencent, and at top universities such as Carnegie Mellon, Harvard and Stanford, as well as at the Chinese Academy of Sciences. It would be unwise to simply assume that none of these projects will succeed.
