Wed, Mar 09, 2016 - Page 7

AI looks to outsmart world champ in go challenge

FIENDISH COMPLEXITY: Game-playing is a crucial measure of artificial intelligence — it shows that a machine can execute a task better than the humans who created it

AFP, PARIS

Every two years or so, computer speed and memory capacity double — a head-spinning pace that experts say could see machines become smarter than humans within decades.

This week, one test of how far artificial intelligence (AI) has come takes place in Seoul — a five-day battle between man and machine for supremacy in the 3,000-year-old Chinese board game go.

Said to be the most complex game ever designed, with an incomputable number of move options, go requires human-like “intuition” to prevail.

“If the machine wins, it will be an important symbolic moment,” AI expert Jean-Gabriel Ganascia of the Pierre and Marie Curie University in Paris said. “Until now, the game of go has been problematic for computers as there are too many possible moves to develop an all-encompassing database of possibilities, as for chess.”

Go reputedly has more possible board configurations than there are atoms in the universe.
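For scale, a rough back-of-the-envelope check bears the comparison out. This is a sketch only, assuming a standard 19-by-19 board and counting every assignment of empty, black or white stones as a configuration, legal or not:

configurations_upper_bound = 3 ** 361    # every intersection empty, black or white: about 1.7e172
atoms_in_observable_universe = 10 ** 80  # commonly cited rough estimate
print(f"{configurations_upper_bound:.2e} board configurations (upper bound)")
print(f"{atoms_in_observable_universe:.2e} atoms (rough estimate)")
print(configurations_upper_bound > atoms_in_observable_universe)  # True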

This fiendish complexity meant that mastery of the game by a computer was at least a decade away — or so it was thought.

The assumption began to crack when, in October last year, Google’s AlphaGo program beat Europe’s human champion, Fan Hui (樊麾).

Google has now upped the stakes and will put its machine through the ultimate wringer in a marathon match today against South Korean Lee Se-dol, who has held the world crown for a decade.

Game-playing is a crucial measure of AI progress — it shows that a machine can execute a certain “intellectual” task better than the humans who created it.

Key moments included IBM’s Deep Blue defeating chess grandmaster Garry Kasparov in 1997 and the Watson supercomputer outwitting humans in the TV quiz show Jeopardy! in 2011, but AlphaGo is different.

It is partly self-taught: after its initial programming, it played millions of games against itself, honing its tactics through trial and error.
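In outline, self-play training looks something like the sketch below — a deliberately simplified illustration of the idea, not AlphaGo's actual code; the play_game and update_policy routines are hypothetical placeholders.

def self_play_training(policy, play_game, update_policy, num_games=1_000_000):
    # Simplified sketch of self-play (hypothetical helpers, not AlphaGo's API):
    # the same policy plays both sides of each game, and moves made by the
    # eventual winner are reinforced while the loser's are discouraged.
    for _ in range(num_games):
        moves, winner = play_game(policy, policy)
        for player, state, move in moves:
            reward = 1.0 if player == winner else -1.0
            update_policy(policy, state, move, reward)
    return policy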

“AlphaGo is really more interesting than either Deep Blue or Watson, because the algorithms it uses are potentially more general-purpose,” said Nick Bostrom of Oxford University’s Future of Humanity Institute.

Creating “general,” or multipurpose, intelligence rather than “narrow,” task-specific intelligence is the ultimate goal in AI — something resembling human reasoning based on a variety of inputs.

“General intelligence is about being good at achieving one’s goals when solving problems that are new and perhaps not well-defined,” Bostrom’s colleague, Anders Sandberg, said. “So if the machine can do new things when needed, then it has ‘true’ intelligence.”

In the case of go, Google developers realized a more “human-like” approach would prevail over brute computing power.

To this end, AlphaGo uses two sets of “deep neural networks” containing millions of connections similar to neurons in the brain. It is able to predict a winner from each move, narrowing the search space to manageable levels — something co-creator David Silver has described as “more akin to imagination.”
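The effect can be pictured with a toy sketch: rather than expanding every continuation, the program scores the positions reachable in one move with a learned evaluator and searches onward only from the most promising few. The value_net, legal_moves and apply_move helpers below are hypothetical stand-ins, not Google's interfaces.

def top_candidates(position, value_net, legal_moves, apply_move, k=5):
    # Toy illustration of value-guided pruning (hypothetical helpers):
    # score each position reachable in one move and keep only the k best,
    # instead of searching every continuation exhaustively.
    scored = []
    for move in legal_moves(position):
        next_position = apply_move(position, move)
        scored.append((value_net(next_position), move))  # predicted chance of winning
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [move for _, move in scored[:k]]              # search continues only from these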

So what if we manage to build a truly smart machine?

For some, it means a world in which robots take care of our sick, fly and drive us around safely, stock our fridges, plan our holidays and do hazardous jobs humans should not or will not do. For others, it evokes apocalyptic images in which hostile machines are in charge.

Physicist Stephen Hawking is among the leading voices of caution.

“Computers are likely to overtake humans in intelligence at some point in the next 100 years,” Hawking told a conference of global thinkers in May last year.
