Media coverage of artificial intelligence (AI) tends to invoke tired references to The Terminator or 2001: A Space Odyssey’s HAL 9000 killing people on a spaceship. Hollywood loves a story about a sentient robot destroying humanity to survive.
Google researcher Blake Lemoine grabbed headlines last week after he was suspended for releasing transcripts of a “conversation” with the company’s Lamda AI research experiment.
Lemoine believes that Lamda is sentient and aware of itself, and describes the machine as a “coworker.”
He told the Washington Post that part of his motivation for going public was his belief that “Google shouldn’t be the ones making all the choices” about what to do with it.
The overwhelming reaction among artificial intelligence experts was to pour cold water on the claims.
What is Lamda? It is an acronym for Language Model for Dialogue Applications. As the name suggests, it is a tool designed to create a “model” of language so people can talk to it. Like similar experiments, such as Generative Pre-trained Transformer 3 (GPT-3) from Elon Musk-backed OpenAI and Google’s earlier Bidirectional Encoder Representations from Transformers (BERT), Lamda is best thought of as an amped-up version of the algebra you learned at school, with a twist. That twist is called machine learning, but before getting to that we have to go back to the classroom and talk about algorithms.
An algorithm is a step-by-step process that solves a problem. Take an input, apply some logic and you get an output. Addition, one of the most basic problems in mathematics, can be solved with many different algorithms.
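The idea of “different algorithms, same problem” can be sketched in a few lines of Python. This is a toy illustration, not how computers actually perform arithmetic:

```python
# Two different algorithms that solve the same problem:
# adding two non-negative integers.

def add_builtin(a, b):
    """Addition using the language's own '+' operator."""
    return a + b

def add_by_counting(a, b):
    """Addition as repeated incrementing: a step-by-step process."""
    total = a
    for _ in range(b):
        total += 1
    return total

print(add_builtin(2, 3))      # 5
print(add_by_counting(2, 3))  # 5
```

Both take the same input and produce the same output; only the steps in between differ.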
Humans have been using algorithms to solve problems for centuries. Financial analysts spend their careers building algorithms that attempt to predict the future and tell them when to buy or sell shares to make money. Our world runs on these “traditional” algorithms, but recently there has been a shift toward “machine learning,” which builds on these traditional ideas.
Machine learning tools take inputs and outputs, and create their own logic to connect the two to come up with correct outputs in response to new inputs. Google and OpenAI’s aims are to build machines that can learn the logic behind all human language, so the machine can speak in a way that humans can understand. The machine itself does not truly “understand” what it is doing. Instead it is following an incredibly detailed set of rules that it has invented, with the help of another set of rules invented by a human.
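A minimal sketch of that learning loop in Python: the program is shown example inputs and outputs (here secretly generated by the made-up rule y = 2x + 1) and invents its own rule by repeatedly nudging two numbers to shrink its error. Real systems work on vastly larger scales, but the shape of the process is the same:

```python
# A toy "machine learning" loop: given example inputs and outputs,
# the program discovers its own rule (here, a line y = w*x + b)
# instead of being told the rule by a human.

inputs = [0, 1, 2, 3, 4]
outputs = [1, 3, 5, 7, 9]  # secretly generated by y = 2x + 1

w, b = 0.0, 0.0  # start with a blank guess
for _ in range(2000):
    for x, y in zip(inputs, outputs):
        error = (w * x + b) - y
        # nudge the parameters to reduce the error (gradient descent)
        w -= 0.01 * error * x
        b -= 0.01 * error

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
print(w * 10 + b)                # predicts roughly 21 for a new input, 10
```

The program was never told “multiply by two and add one” — it found a rule that connects the examples, and that rule then works on inputs it has never seen.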
Other big differences between traditional algorithms and machine learning techniques lie in the quantity of data used to create an algorithm and how that data is processed. For them to work, machine learning tools are “trained” on billions of books, online articles and sentences shared on social media that have been collected from the public Internet and other sources. Increasingly, the result of that training is a model that can respond to human beings in an uncannily human way, creating the illusion of a conversation with a very clever entity.
This training requires huge amounts of computational power. Some estimate OpenAI’s GPT-3 cost about US$20 million simply to create its model, and every time you ask GPT-3 to respond to a prompt, it burns through many hours’ worth of computer processing time.
So you are actually talking to humanity?
Lemoine is right when he says that Lamda “reads” Twitter, although “ingests and processes” is probably a more accurate description. And that is how problems of bias creep in. The machine’s entire understanding of language is based on the information it has been given.
We know that Wikipedia is “biased” toward a Western viewpoint, as only 16 percent of its content about sub-Saharan Africa is written by people from the region. Machine learning inherits this bias because it almost certainly relies heavily on Wikipedia’s data.
Why is everyone so excited about machine learning? As computational power increases and the cost of that processing falls, machine learning will get more powerful and more available, so it can be applied to more problems. Right now your smart speaker is mostly useful for setting timers or playing music.
However, airlines and shipping companies have used traditional algorithms for decades to maximize the efficiency of their ships and aircraft.
The dream is that with enough cheap computing power, machine learning tools can make new treatments for diseases like cancer, enable fully autonomous self-driving vehicles or create a perfect nuclear fusion reactor design.
So what is actually happening when I talk with Siri, Alexa or Lamda? When you think you are “conversing” with a machine language model, you are actually talking to a very complicated mathematical formula that has determined in advance how it should respond to your words with the help of calculations based on trillions of words written by human beings. Artificial intelligence tools like GPT-3 and Lamda are designed to solve specific problems like speaking conversationally to humans, but the ultimate goal of companies like Google’s DeepMind is to create something called “artificial general intelligence” (AGI).
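A toy Python illustration of that idea: count which word tends to follow which in a tiny made-up training text, then “converse” by always picking the most likely next word. GPT-3 and Lamda are vastly more sophisticated, but the underlying principle of predicting likely continuations from statistics of human-written text is the same:

```python
# A toy "language model": learn which word tends to follow which
# (a bigram model), then generate text by repeatedly choosing the
# most common next word.

from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    next_counts[current][nxt] += 1

def continue_from(word, length=4):
    """Generate text by always picking the most likely next word."""
    out = [word]
    for _ in range(length):
        followers = next_counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continue_from("the"))  # → "the cat sat on the"
```

The model has no idea what a cat or a mat is; it only knows which words tended to come next in the text it was fed. Scale that up by many billions of examples and you get the illusion of conversation.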
In theory, an AGI would be able to understand or learn any task that a human can, leading to a rapid acceleration of problem solving.
Could a machine learning-powered artificial intelligence eventually become sentient? A machine that has an inner mind and can feel or express emotions might one day be possible, but the expert consensus is that it is out of reach with the current state of technology.
Here is what some had to say:
Cognitive scientist Steven Pinker said that Lemoine is confused.
Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them),” Pinker wrote on Twitter.
These three concepts are what Pinker believes are required for any being to be conscious, and in his view Lamda is far from passing any of those bars.
Gary Marcus, author of Rebooting AI, said more bluntly in a blog post entitled “Nonsense on stilts” that “In truth, literally everything that the system says is bullshit.”
Marcus says there is no concept of meaning behind Lamda’s words, Lamda is just “predicting what words best fit a given context.”
Ilya Sutskever, chief scientist of OpenAI, tweeted cryptically in February that “it may be that today’s large neural networks are slightly conscious.”
Murray Shanahan, the research director at DeepMind, replied that Lamda is slightly conscious “in the same sense that a large field of wheat may be slightly pasta.”
It is worth reading Alex Hern’s experiments with GPT-3 showing how easy it is to generate complete and utter nonsense if you tweak your questions.
Randall Munroe, author of the Web comic XKCD, also had an informative “conversation” with GPT-3 posing as William Shakespeare. Who knew that if he were alive today, Shakespeare would add Shrek to Romeo and Juliet’s balcony scene?
So, nothing to worry about then?
Tom Chivers, author of The AI Does Not Hate You, argued that the thing we should really worry about is the competence of these systems, not their sentience.
“AI may or may not be becoming conscious, but it is certainly becoming competent,” he said. “It can solve problems and is becoming more general, and whether or not it’s got an inner life doesn’t really matter.”
There are already reports of AI-powered autonomous drones being used to kill people, and machine learning enabled deepfakes have the potential to make disinformation worse. And these are still early days.
The doomsday bomb in Dr Strangelove did not need to be intelligent or sentient to accidentally end the world. All it needed was simple logic (if attacked by the Americans, explode) applied in a really stupid way (the Soviets forgot to tell the Americans).
As Terry Pratchett wrote, “real stupidity beats artificial intelligence every time.”