Sat, Mar 03, 2018

Science fiction dreams obscure artificial intelligence’s more immediate threats

Opportunities for misuse of existing artificial intelligence technology by actors with malicious intent are so great that continuing to focus on superintelligent robots risks missing the point

By John Naughton  /  The Guardian

Illustration: Lance Liu

As the science fiction novelist William Gibson famously observed: “The future is already here — it’s just not very evenly distributed.”

I wish people would pay more attention to that adage whenever the subject of artificial intelligence (AI) comes up. Public discourse about it invariably focuses on the threat — or promise, depending on your point of view — of “superintelligent” machines, that is, ones that display human-level general intelligence, even though such devices have been 20 to 50 years away ever since we first started worrying about them.

Such machines, whether likelihood or mirage, remain a distant prospect, a point made by Andrew Ng (吳恩達), a leading AI researcher and Stanford University academic, who said that he worries about superintelligence in the same way that he frets about overpopulation on Mars.

That seems about right to me. If one were a conspiracy theorist, one might ask whether our obsession with a highly speculative future has been deliberately orchestrated to divert attention from the fact that lower-level but exceedingly powerful AI is already here and playing an ever-expanding role in shaping our economies, societies and politics.

This technology is a combination of machine learning and big data, and it is everywhere, controlled and deployed by a handful of powerful corporations, with occasional walk-on parts assigned to national security agencies.

These corporations regard this version of “weak” AI as the biggest thing since sliced bread.

Google chief executive Sundar Pichai burbles about “AI everywhere” in his company’s offerings. Same goes for the other digital giants. In the face of this hype onslaught, it takes a certain amount of courage to stand up and ask awkward questions.

If this stuff is so powerful, then surely we ought to be looking at how it is being used, asking whether it is legal, ethical and good for society — and thinking about what will happen when it gets into the hands of people who are even worse than the folks who run the big tech corporations. Because it will.

Fortunately, there are academics who have started to ask these awkward questions. There are, for example, the researchers who work at AI Now, a research institute at New York University focused on the social implications of AI. Their report last year makes interesting reading.

Last week saw the publication of more in the same vein — a new critique of the technology by 26 experts from six major universities, plus a number of independent think tanks and non-governmental organizations. Its title — The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation — says it all.

The report fills a serious gap in our thinking about this stuff. We have heard the hype, corporate and governmental, about the wonderful things that AI can supposedly do and we have begun to pay attention to the unintentional downsides of legitimate applications of the technology. Now the time has come to pay attention to the really malign things bad actors could do with it.

The report looks at three main “domains” in which we can expect problems. One is digital security.

The use of AI to automate tasks involved in carrying out cyberattacks will alleviate the existing trade-off between the scale and efficacy of attacks. We can also expect attacks that exploit human vulnerabilities (for example, through the use of speech synthesis for impersonation), existing software vulnerabilities (through automated hacking) or the vulnerabilities of legitimate AI systems (through corruption of the data streams on which machine learning depends).
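The last of these, corrupting the data on which a machine-learning system is trained, can be made concrete with a toy sketch. The Python example below is not from the article or the report; the dataset, the model and the 30 percent label-flipping rate are arbitrary assumptions, chosen only to show how quietly poisoned training data degrades a model's accuracy.

```python
# Illustrative sketch only: flipping a fraction of training labels
# (one simple form of data poisoning) degrades a basic classifier.
# Dataset, model and poisoning rate are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    # Train on the given labels, then score on untouched test data.
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

print("clean training data:   ", round(train_and_score(y_train), 3))

# An attacker silently flips 30% of the labels in the training stream.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("poisoned training data:", round(train_and_score(poisoned), 3))
```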
