Tue, Nov 11, 2014 - Page 12

Doomsday scenario

Scientists such as Stephen Hawking think artificial intelligence has the potential to be lethal

By Nick Bilton  /  NY Times News Service

Alone they look somewhat primitive. Together they have the potential to create superintelligence.

Photo: Reuters

Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I’ll tell you what scares me most: artificial intelligence.

The first three, with enough resources, humans can stop. The last, which humans are creating, could soon become unstoppable.

Before we get into what could possibly go wrong, let me first explain what artificial intelligence is. Actually, skip that. I’ll let someone else explain it: Grab an iPhone and ask Siri about the weather or stocks. Or tell her “I’m drunk.” Her answers are artificially intelligent.

Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, it may not take long for them to spiral out of control.

In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry.

But the upheavals can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to eradicate cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.

Nick Bostrom, author of the book Superintelligence, lays out a number of petrifying doomsday settings. One envisions self-replicating nanobots, which are microscopic robots designed to make copies of themselves. In a positive situation, these bots could fight diseases in the human body or eat radioactive material on the planet. But, Bostrom says, a “person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.”

Artificial-intelligence proponents argue that these things would never happen and that programmers are going to build safeguards. But let’s be realistic: It took nearly a half-century for programmers to stop computers from crashing every time you wanted to check your email. What makes them think they can manage armies of quasi-intelligent robots?

I’m not alone in my fear. Silicon Valley’s resident futurist, Elon Musk, recently said artificial intelligence is “potentially more dangerous than nukes.” And Stephen Hawking, one of the smartest people on earth, wrote that successful AI “would be the biggest event in human history. Unfortunately, it might also be the last.” There is a long list of computer experts and science fiction writers also fearful of a rogue robot-infested future.

CONCERNS OVER AI

Two main problems with artificial intelligence lead people like Musk and Hawking to worry. The first, more near-future fear, is that we are starting to create machines that can make decisions like humans, but these machines don’t have morality and likely never will.

The second, which is a longer way off, is that once we build systems that are as intelligent as humans, these intelligent machines will be able to build smarter machines, often referred to as superintelligence. That, experts say, is when things could really spiral out of control as the rate of growth and expansion of machines would increase exponentially. We can’t build safeguards into something that we haven’t built ourselves.

“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest,” said James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era. “So when there is something smarter than us on the planet, it will rule over us on the planet.”


