As the science fiction novelist William Gibson famously observed: “The future is already here — it’s just not very evenly distributed.”
I wish people would pay more attention to that adage whenever the subject of artificial intelligence (AI) comes up. Public discourse about it invariably focuses on the threat — or promise, depending on your point of view — of “superintelligent” machines, that is, ones that display human-level general intelligence, even though such devices have been 20 to 50 years away ever since we first started worrying about them.
Such machines remain a distant prospect, a point made by Andrew Ng (吳恩達), a leading AI researcher and Stanford University academic, who says he worries about superintelligence in the same way he frets about overpopulation on Mars.
Illustration: Lance Liu
That seems about right to me. If one were a conspiracy theorist, one might ask if our obsession with a highly speculative future has been deliberately orchestrated to divert attention from the fact that lower-level, but exceedingly powerful AI is already here and playing an ever-expanding role in shaping our economies, societies and politics.
This technology, a combination of machine learning and big data, is everywhere, controlled and deployed by a handful of powerful corporations, with occasional walk-on parts assigned to national security agencies.
These corporations regard this version of “weak” AI as the biggest thing since sliced bread.
Google chief executive Sundar Pichai burbles about “AI everywhere” in his company’s offerings. Same goes for the other digital giants. In the face of this hype onslaught, it takes a certain amount of courage to stand up and ask awkward questions.
If this stuff is so powerful, then surely we ought to be looking at how it is being used, asking whether it is legal, ethical and good for society — and thinking about what will happen when it gets into the hands of people who are even worse than the folks who run the big tech corporations. Because it will.
Fortunately, there are academics who have started to ask these awkward questions. There are, for example, the researchers who work at AI Now, a research institute at New York University focused on the social implications of AI. Their report last year makes interesting reading.
Last week saw the publication of more in the same vein — a new critique of the technology by 26 experts from six major universities, plus a number of independent think tanks and non-governmental organizations. Its title — The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation — says it all.
The report fills a serious gap in our thinking about this stuff. We have heard the hype, corporate and governmental, about the wonderful things that AI can supposedly do and we have begun to pay attention to the unintentional downsides of legitimate applications of the technology. Now the time has come to pay attention to the really malign things bad actors could do with it.
The report looks at three main “domains” in which we can expect problems. One is digital security.
The use of AI to automate tasks involved in carrying out cyberattacks will alleviate the existing trade-off between the scale and efficacy of attacks. We can also expect attacks that exploit human vulnerabilities (for example, through the use of speech synthesis for impersonation), existing software vulnerabilities (through automated hacking) or the vulnerabilities of legitimate AI systems (through corruption of the data streams on which machine learning depends).
A second threat domain is physical security — attacks with drones and autonomous weapons systems. Think v2.0 of the hobbyist drones that the Islamic State group deployed, but this time with face-recognition technology on board.
We can also expect new kinds of attacks that subvert physical systems, causing autonomous vehicles to crash, for example, or that deploy physical systems that would be impossible to control remotely, such as a 1,000-strong swarm of microdrones.
Finally, there is what the authors call “political security” — using AI to automate tasks involved in surveillance, persuasion (creating targeted propaganda) and deception (such as manipulating videos).
We can also expect new kinds of attacks based on machine learning’s capacity to infer human behaviors, moods and beliefs from available data. This technology will obviously be welcomed by authoritarian states, but it will also further undermine the ability of democracies to sustain truthful public debate.
The bots and fake Facebook accounts that currently pollute our public sphere will look awfully amateurish in a couple of years.
The report is available as a free download and is worth reading in full.
If it were about the dangers of future or speculative technologies, then it might be reasonable to dismiss it as academic scaremongering. The alarming thing is that most of the problematic capabilities that its authors envisage are already available and in many cases are embedded in many of the networked services that we use every day.
Gibson was right: The future has already arrived.