Artificial intelligence (AI) has been moving so fast that even scientists are finding it hard to keep up. In the past year, machine learning algorithms have started to generate rudimentary movies and stunning fake photographs.
They are even writing code. In the future, people are likely to look back on this year as the year AI shifted from processing information to creating content as well as many humans can.
Yet what if people also look back on it as the year AI took a step toward the destruction of the human species?
As hyperbolic and ridiculous as that sounds, public figures ranging from Microsoft cofounder Bill Gates, SpaceX and Tesla CEO Elon Musk and the late physicist Stephen Hawking to British computing pioneer Alan Turing have expressed concerns about the fate of humans in a world where machines surpass them in intelligence, with Musk saying that AI is becoming more dangerous than nuclear warheads.
After all, humans do not treat less-intelligent species particularly well, so who is to say that computers, trained on data reflecting every facet of human behavior, would not “place their goals ahead of ours,” as legendary computer scientist Marvin Minsky once warned?
Refreshingly, there is some good news. More scientists are seeking to make deep learning systems more transparent and measurable. That momentum must not stop. As these programs become ever more influential in financial markets, social media and supply chains, technology firms must start prioritizing AI safety over capability.
Across the world’s major AI labs last year, roughly 100 full-time researchers were focused on building safe systems, according to last year’s State of AI report, produced annually by London venture capital investors Ian Hogarth and Nathan Benaich.
Their report for this year found there are still only about 300 researchers working full-time on AI safety.
“It’s a very low number,” Hogarth said during a Twitter Spaces discussion this week on the threat of AI. “Not only are very few people working on making these systems aligned, but it’s also kind of a Wild West.”
Hogarth was referring to how in the past year a flurry of AI tools and research has been produced by open-source groups, who say super-intelligent machines should not be controlled and built in secret by a few large companies, but created out in the open.
For instance, in August last year the community-driven organization EleutherAI released GPT-Neo, a public version of a powerful tool that could write realistic comments and essays on nearly any subject.
The original tool, called GPT-3, was developed by OpenAI, a company cofounded by Musk and largely funded by Microsoft that offers limited access to its powerful systems.
This year, several months after OpenAI impressed the AI community with a revolutionary image-generating system called DALL-E 2, a firm called Stability AI released its own open-source version of the tool, Stable Diffusion, to the public free of charge.
One of the benefits of open-source software is that, by being out in the open, a greater number of people are constantly probing it for flaws. That is why Linux has historically been one of the most secure operating systems available to the public.
However, throwing powerful AI systems out into the open also raises the risk that they could be misused. If AI is as potentially damaging as a virus or nuclear contamination, then perhaps it makes sense to centralize its development. After all, viruses are scrutinized in bio-safety labs and uranium is enriched in carefully constrained environments.
Research into viruses and nuclear power is overseen by regulation, but with governments trailing the rapid pace of AI, there are still no clear guidelines for its development.
“We’ve almost got the worst of both worlds,” Hogarth said.
AI risks misuse by being built out in the open, but no one is overseeing what is happening when it is created behind closed doors either.
For now at least, it is encouraging to see the spotlight growing on AI alignment, a burgeoning field devoted to designing AI systems that are “aligned” with human goals.
Leading AI companies such as Alphabet Inc’s DeepMind and OpenAI have multiple teams working on AI alignment, and many researchers from those firms have gone on to launch their own start-ups, some of which are focused on making AI safe.
These include San Francisco-based Anthropic, whose founding team left OpenAI and raised US$580 million from investors earlier this year, and London-based Conjecture, which was recently backed by the founders of GitHub, Stripe and FTX Trading.
Conjecture is operating under the assumption that AI will reach parity with human intelligence in the next five years, and that its trajectory spells catastrophe for the human species.
Asked why AI might want to hurt humans, Conjecture CEO Connor Leahy said: “Imagine humans want to flood a valley to build a hydroelectric dam, and there is an anthill in the valley. This won’t stop the humans from their construction, and the anthill will promptly get flooded.”
“At no point did any humans even think about harming the ants. They just wanted more energy, and this was the most efficient way to achieve that goal,” he said. “Analogously, autonomous AIs will need more energy, faster communication and more intelligence to achieve their goals.”
To prevent that dark future, the world needs a “portfolio of bets,” including scrutinizing deep learning algorithms to better understand how they make decisions, and trying to endow AI with more human-like reasoning, Leahy said.
Even if Leahy’s fears seem overblown, it is clear that AI is not on a path that is entirely aligned with human interests. Just look at some of the recent efforts to build chatbots.
Microsoft abandoned its 2016 bot Tay, which learned from interacting with Twitter users, after it posted racist and sexually charged messages within hours of its release.
In August, Meta Platforms released a chatbot that said Donald Trump was still the US president, having been trained on public text on the Internet.
No one knows if AI will wreak havoc on financial markets or torpedo the food supply chain one day, but it could pit human beings against one another through social media, something that is arguably already happening.
The powerful AI systems recommending posts to people on Twitter and Facebook are aimed at juicing engagement, which inevitably means serving up content that provokes outrage or spreads misinformation.
When it comes to “AI alignment,” changing those incentives would be a good place to start.
Parmy Olson is a Bloomberg Opinion columnist covering technology, and a former reporter for the Wall Street Journal and Forbes.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.