The singularity — or, to give it its proper title, the technological singularity. It is an idea that has taken on a life of its own; more of a life, I suspect, than what it predicts ever will. It is a Thing for techno-utopians: wealthy, middle-aged men who regard the singularity as their best chance of immortality. They are Singularitarians, some seemingly prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-artificial intelligence (AI) — a man-made god that grants transcendence.
And it is a Thing for the doomsayers, the techno-dystopians: apocalypsarians who are equally convinced that a super-intelligent AI will have no interest in curing cancer or old age, or in ending poverty, but will — malevolently or maybe just accidentally — bring about the end of human civilization as we know it.
History and Hollywood are on their side. From the Golem to Frankenstein’s monster, Skynet and the Matrix, we are fascinated by the old story: Man plays god and then things go horribly wrong.
The singularity is, at its heart, the idea that as soon as AI exceeds human intelligence, everything changes.
There are two central planks to the hypothesis. One is that, as soon as we succeed in building AI as smart as humans, it rapidly reinvents itself to be even smarter, starting a chain reaction of smarter-AI inventing even-smarter-AI, until even the smartest humans cannot possibly comprehend how it works. The other is that, from the moment of the singularity onward, the future of humanity is in some sense no longer under our control.
So should we be worried or optimistic about the technological singularity? I think we should be a little worried — cautious and prepared may be a better way of putting it — and at the same time a little optimistic.
However, I do not believe we need to be obsessively worried by a hypothesized existential risk to humanity. Why? Because, for the risk to become real, a sequence of things all need to happen, a sequence of big ifs.
If we succeed in building human-equivalent AI, and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. Each of those ifs is conditional on all the ones before it, so the risk, while not impossible, is improbable.
By worrying unnecessarily, we are falling into a trap: the fallacy of privileging the hypothesis. Worse, we are taking our eyes off other risks we should really be worrying about, such as man-made climate change or bioterrorism.
Let me illustrate what I mean. Suppose I suggest that we might invent faster-than-light (FTL) travel some time in the next 100 years, and I then worry you by outlining all sorts of nightmare scenarios that might follow. By the end of it, you will be thinking: “My god, never mind climate change, we need to stop all FTL research right now.” Yet nothing has changed, except that I invited you to take a wildly improbable hypothesis seriously.
But, you might say, there are already lots of AI systems, so surely it is just a matter of time? Yes, we do have lots of AI systems, such as chess programs, automated financial transaction systems or the software in driverless cars. Some are already smarter than most humans, like language translation systems. Some are as good as some humans, such as driverless cars or natural speech recognition systems, and will soon be better than most humans. However, none of this has brought about the end of civilization (though I am suspiciously eyeing the financial transaction systems). The reason is that these are all narrow-AI systems: very good at doing just one thing.
A human-equivalent AI would need to be a generalist, like humans. It would need to be able to learn, most likely by developing over the course of some years, then generalize what it has learned — in the same way we learned as toddlers that wooden blocks could be stacked, banged together or used as something to stand on to reach a bookshelf. It would need to understand meaning and context, be able to synthesize new knowledge, have intentionality and — in all likelihood — be self-aware, so it understands what it means to have agency in the world.
There is a huge gulf between present day narrow-AI systems and the kind of artificial general intelligence I have outlined. Opinions vary, but I think it is as wide a gulf as that between current space flight and practical faster-than-light spaceflight; wider perhaps, because we do not yet have a theory of general intelligence, whereas there are several candidate FTL drives consistent with general relativity, like the Alcubierre drive.
So we do not need to obsess over the risk of super-intelligent AI, but I do think we need to be cautious and prepared.
In a podcast last week, Swedish philosopher Nick Bostrom said that there are two big problems, which he calls competency and control.
The first is how to make super-intelligent AI; the second is how to control it, that is, to mitigate the risks. He says hardly anyone is working on the control problem, whereas loads of people are going hell for leather on the first.
On this, I 100 percent agree, and I am one of the small number of people working on the control problem. In 2010, I was part of a group that drew up a set of principles of robotics — principles that apply equally to AI systems.
I strongly believe science and technology research should be undertaken within a framework of responsible innovation, and have argued we should be thinking about subjecting robotics and AI research to ethical approval, in the same way we do for human subject research.
Recently, I have started work toward making ethical robots. This is not just to mitigate future risks, but because the kind of not-very-intelligent robots we make in the very near future will need to be ethical, as well as safe.
We should be worrying about present-day AI, rather than future super-intelligent AI.
Alan Winfield is professor of electronic engineering at University of the West of England in Bristol.