Media coverage of artificial intelligence (AI) tends to invoke tired references to The Terminator or 2001: A Space Odyssey’s HAL 9000 killing people on a spaceship. Hollywood loves a story about a sentient robot destroying humanity to survive.
Google researcher Blake Lemoine grabbed headlines last week after he was suspended for releasing transcripts of a “conversation” with the company’s Lamda AI research experiment.
Lemoine believes that Lamda is sentient and aware of itself, and describes the machine as a “coworker.”
He told the Washington Post that part of his motivation for going public was his belief that “Google shouldn’t be the ones making all the choices” about what to do with it.
The overwhelming reaction among artificial intelligence experts was to pour cold water on the claims.
What is Lamda? It is an acronym for Language Model for Dialogue Applications. As the name might suggest, it is a tool designed to create a “model” of language so people can talk to it. Like similar experiments, such as Generative Pre-trained Transformer 3 (GPT-3) from Elon Musk-backed OpenAI and Google’s earlier Bidirectional Encoder Representations from Transformers (BERT), it is best thought of as an amped-up version of the algebra you learned at school, with a twist. That twist is called machine learning, but before we get to that, we have to go back to the classroom and talk about algorithms.
An algorithm is a step-by-step process that solves a problem. Take an input, apply some logic and you get an output. Addition, one of the most basic problems in mathematics, can be solved with many different algorithms.
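To make that concrete, here is a minimal sketch in Python (the function names are made up for illustration) showing two different algorithms that solve the same addition problem:

```python
def add_builtin(a: int, b: int) -> int:
    # Algorithm 1: rely on the processor's native addition.
    return a + b

def add_by_counting(a: int, b: int) -> int:
    # Algorithm 2: move one unit at a time from b to a, the way
    # a child might count on their fingers (b must be non-negative).
    while b > 0:
        a += 1
        b -= 1
    return a

# Same input, same output, two different step-by-step processes.
print(add_builtin(2, 3))      # 5
print(add_by_counting(2, 3))  # 5
```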
Humans have been using algorithms to solve problems for centuries. Financial analysts spend their careers building algorithms that attempt to predict the future and tell them whether to buy or sell shares to make money. Our world runs on these “traditional” algorithms, but recently there has been a shift toward “machine learning,” which builds on these traditional ideas.
Machine learning tools take inputs and outputs, and create their own logic connecting the two, so that they can come up with correct outputs in response to new inputs. Google’s and OpenAI’s aim is to build machines that can learn the logic behind all human language, so the machine can speak in a way that humans can understand. The machine itself does not truly “understand” what it is doing. Instead, it is following an incredibly detailed set of rules that it has invented, with the help of another set of rules invented by a human.
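As a toy illustration of that idea (nothing like Lamda’s scale, but the same principle), the following Python sketch is given only example input-output pairs and invents its own rule, a single number, that connects them:

```python
# Example inputs and outputs; the hidden rule is y = 2 * x,
# but the program is never told that.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]

weight = 0.0  # the machine's initial guess at the rule
for _ in range(1000):
    for x, y in examples:
        error = weight * x - y      # how wrong is the current rule?
        weight -= 0.01 * error * x  # nudge the rule toward the data

print(round(weight, 2))  # ~2.0: the "logic" the machine invented
print(weight * 5)        # ~10.0: a correct output for a brand-new input
```

The human supplies the learning procedure (the nudging rule); the machine supplies the logic that connects inputs to outputs.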
Other big differences between traditional algorithms and machine learning techniques lie in the quantity of data used to create an algorithm and how that data is processed. To work, machine learning tools are “trained” on billions of books, online articles and sentences shared on social media that have been collected from the public Internet and other sources. Increasingly, the result of that training is a model that can respond to human beings in an uncannily human way, creating the illusion of a conversation with a very clever entity.
This training requires huge amounts of computational power. Some estimate that OpenAI’s GPT-3 cost about US$20 million simply to create its model, and every time you ask GPT-3 to respond to a prompt, it burns through many hours’ worth of computer processing time.
So you are actually talking to humanity?
Lemoine is right when he says that Lamda “reads” Twitter, although “ingests and processes” is probably a more accurate description. And that is how problems of bias creep in. The machine’s entire understanding of language is based on the information it has been given.
We know that Wikipedia is “biased” toward a Western viewpoint, as only 16 percent of its content about sub-Saharan Africa is written by people from the region. Machine learning tools inherit this bias because their training data almost certainly relies heavily on Wikipedia.
Why is everyone so excited about machine learning? As computational power increases and the cost of that processing falls, machine learning will get more powerful and more available, so it can be applied to more problems. Right now your smart speaker is mostly useful for setting timers or playing music.
However, airlines and shipping companies have used traditional algorithms for decades to maximize the efficiency of their ships and aircraft.
The dream is that with enough cheap computing power, machine learning tools could devise new treatments for diseases such as cancer, enable fully autonomous self-driving vehicles or produce a perfect nuclear fusion reactor design.
So what is actually happening when I talk with Siri, Alexa or Lamda? When you think you are “conversing” with a machine language model, you are actually talking to a very complicated mathematical formula whose response to your words was determined in advance by calculations based on trillions of words written by human beings. Artificial intelligence tools such as GPT-3 and Lamda are designed to solve specific problems, such as speaking conversationally with humans, but the ultimate goal of companies like Google’s DeepMind is to create something called “artificial general intelligence” (AGI).
In theory, an AGI would be able to understand or learn any task that a human can, dramatically accelerating problem solving.
Could a machine learning-powered artificial intelligence eventually become sentient? A machine that has an inner mind and can feel or express emotions might one day be possible, but the expert consensus is that it is out of reach with the current state of technology.
Here is what some had to say:
Cognitive scientist Steven Pinker said that Lemoine is confused.
Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them),” Pinker wrote on Twitter.
These three concepts are what Pinker believes are required for any being to be conscious, and in his view Lamda is far from clearing any of those bars.
Gary Marcus, author of Rebooting AI, said more bluntly in a blog post entitled “Nonsense on stilts” that “In truth, literally everything that the system says is bullshit.”
Marcus says there is no concept of meaning behind Lamda’s words; Lamda is just “predicting what words best fit a given context.”
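At its simplest, “predicting what words best fit a given context” can be demonstrated in a few lines of Python. This sketch counts which words follow which in a toy training text, then “replies” by repeatedly picking the most common continuation; real models use billions of learned parameters rather than raw counts, but the principle is the same:

```python
from collections import Counter, defaultdict

# A miniature "training corpus"; real systems ingest billions of words.
corpus = ("i am a language model . i am not sentient . "
          "i am a machine that predicts words .").split()

# "Training": count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    # "Responding": pick the statistically most common continuation.
    return following[word].most_common(1)[0][0]

# Generate a "reply" one word at a time, starting from "i".
word, reply = "i", ["i"]
for _ in range(4):
    word = predict_next(word)
    reply.append(word)

print(" ".join(reply))  # "i am a language model"
```

The output looks like speech, but there is no meaning behind it, only statistics about which words tend to follow which.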
Ilya Sutskever, chief scientist of OpenAI, tweeted cryptically in February that “it may be that today’s large neural networks are slightly conscious.”
Murray Shanahan, the research director at DeepMind, replied that Lamda is slightly conscious “in the same sense that a large field of wheat may be slightly pasta.”
It is worth reading Alex Hern’s experiments with GPT-3 showing how easy it is to generate complete and utter nonsense if you tweak your questions.
A conversation that Randall Munroe, author of the web comic XKCD, had with GPT-3 posing as William Shakespeare is informative too. Who knew that if Shakespeare were alive today, he would add Shrek to Romeo and Juliet’s balcony scene?
So, nothing to worry about then?
Tom Chivers, author of The AI Does Not Hate You, argued that the thing we should really worry about is the competence of these systems, not their sentience.
“AI may or may not be becoming conscious, but it is certainly becoming competent,” he said. “It can solve problems and is becoming more general, and whether or not it’s got an inner life doesn’t really matter.”
There are already reports of AI-powered autonomous drones being used to kill people, and machine learning-enabled deepfakes have the potential to make disinformation worse. And these are still early days.
The doomsday bomb in Dr Strangelove did not need to be intelligent or sentient to accidentally end the world. All it needed was simple logic (if attacked by the Americans, explode) applied in a really stupid way (the Soviets forgot to tell the Americans).
As Terry Pratchett wrote, “real stupidity beats artificial intelligence every time.”