Media coverage of artificial intelligence (AI) tends to invoke tired references to The Terminator or 2001: A Space Odyssey’s HAL 9000 killing people on a spaceship. Hollywood loves a story about a sentient robot destroying humanity to survive.
Google researcher Blake Lemoine last week grabbed headlines for getting suspended after releasing transcripts of a “conversation” with the company’s Lamda AI research experiment.
Lemoine believes that Lamda is sentient and aware of itself, and describes the machine as a “coworker.”
He told the Washington Post that part of his motivation for going public was his belief that “Google shouldn’t be the ones making all the choices” about what to do with it.
The overwhelming reaction among artificial intelligence experts was to pour cold water on the claims.
What is Lamda? It is an acronym for Language Model for Dialogue Applications. As the name might suggest, it is a tool designed to create a “model” of language so people can talk to it. Like similar experiments, such as Generative Pre-trained Transformer 3 (GPT-3) from Elon Musk-backed OpenAI and Google’s earlier Bidirectional Encoder Representations from Transformers (BERT), it is best thought of as an amped-up version of the algebra you learned at school, with a twist. That twist is called machine learning, but before we get to that we have to go back to the classroom and talk about algorithms.
An algorithm is a step-by-step process that solves a problem. Take an input, apply some logic and you get an output. Addition, one of the most basic problems in mathematics, can be solved with many different algorithms.
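To make that concrete, here is a minimal sketch in Python (our own illustration, not anything from Google or OpenAI) of one such algorithm: the column-by-column addition method taught in school, complete with a carried digit.

    def add_digit_by_digit(a: str, b: str) -> str:
        """Add two non-negative integers given as decimal strings,
        using the column-by-column method taught in school."""
        result = []
        carry = 0
        i, j = len(a) - 1, len(b) - 1
        while i >= 0 or j >= 0 or carry:
            total = carry
            if i >= 0:
                total += int(a[i])
                i -= 1
            if j >= 0:
                total += int(b[j])
                j -= 1
            result.append(str(total % 10))  # the digit to write down
            carry = total // 10             # the digit to carry over
        return "".join(reversed(result))

    print(add_digit_by_digit("478", "356"))  # prints 834

Swap in a different set of steps, say, repeatedly adding one, and you have a different algorithm that solves the same problem. The logic is written by a human in advance; the machine just follows it.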
Humans have been using algorithms to solve problems for centuries. Financial analysts spend their careers building algorithms that attempt to predict the future and tell them when to buy or sell shares to make money. Our world runs on these “traditional” algorithms, but recently there has been a shift toward “machine learning,” which builds on those traditional ideas.
Machine learning tools take inputs and outputs, and create their own logic to connect the two to come up with correct outputs in response to new inputs. Google and OpenAI’s aims are to build machines that can learn the logic behind all human language, so the machine can speak in a way that humans can understand. The machine itself does not truly “understand” what it is doing. Instead it is following an incredibly detailed set of rules that it has invented, with the help of another set of rules invented by a human.
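The difference is easiest to see in miniature. The toy sketch below (a deliberately tiny illustration, nothing like Lamda’s actual training code) is given only example inputs and outputs; it invents the connecting rule itself by repeatedly nudging two numbers until its predictions match the examples.

    # Toy machine learning: the program is given (input, output) pairs
    # and invents its own rule, here a line y = w*x + b, rather than
    # being programmed with one. Purely illustrative; real systems such
    # as Lamda work at a vastly larger scale.
    examples = [(1, 5), (2, 8), (3, 11), (4, 14)]  # hidden rule: y = 3x + 2

    w, b = 0.0, 0.0  # start knowing nothing
    learning_rate = 0.01

    for step in range(5000):
        for x, y in examples:
            error = (w * x + b) - y
            # Nudge the parameters in the direction that shrinks the error.
            w -= learning_rate * error * x
            b -= learning_rate * error

    print(f"learned rule: y = {w:.2f}x + {b:.2f}")    # approx. y = 3.00x + 2.00
    print(f"prediction for x = 10: {w * 10 + b:.1f}")  # approx. 32.0

Nobody tells the program that the hidden rule is y = 3x + 2, yet it ends up behaving as if it “knows” it, which is precisely the illusion at the heart of the Lamda debate.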
Other big differences between traditional algorithms and machine learning techniques lie in the quantity of data used to create an algorithm and in how that data is processed. For them to work, machine learning tools are “trained” on billions of books, online articles and sentences shared on social media, collected from the public Internet and other sources. Increasingly, the result of that training is a model that can respond to human beings in an uncannily human way, creating the illusion of a conversation with a very clever entity.
This training requires huge amounts of computational power. Some estimate that OpenAI’s GPT-3 cost about US$20 million simply to create its model, and every time you ask GPT-3 to respond to a prompt, it burns through many hours’ worth of computer processing time.
So you are actually talking to humanity?
Lemoine is right when he says that Lamda “reads” Twitter, although “ingests and processes” is probably a more accurate description. And that is how problems of bias creep in: the machine’s entire understanding of language is based on the information it has been given.
We know that Wikipedia is “biased” toward a Western viewpoint: only 16 percent of its content about sub-Saharan Africa is written by people from the region, for example. Machine learning models inherit this bias because they almost certainly rely heavily on Wikipedia’s data.
Why is everyone so excited about machine learning? As computational power increases and the cost of that processing falls, machine learning will get more powerful and more available, so it can be applied to more problems. Right now your smart speaker is mostly useful for setting timers or playing music.
However, airlines and shipping companies have used traditional algorithms for decades to maximize the efficiency of their ships and aircraft.
The dream is that with enough cheap computing power, machine learning tools could devise new treatments for diseases such as cancer, enable fully autonomous self-driving vehicles or create a workable nuclear fusion reactor design.
So what is actually happening when I talk with Siri, Alexa or Lamda? When you think you are “conversing” with a machine language model, you are actually talking to a very complicated mathematical formula that has determined in advance how it should respond to your words with the help of calculations based on trillions of words written by human beings. Artificial intelligence tools like GPT-3 and Lamda are designed to solve specific problems like speaking conversationally to humans, but the ultimate goal of companies like Google’s DeepMind is to create something called “artificial general intelligence” (AGI).
In theory, an AGI would be able to understand or learn any task that a human can, leading to a rapid acceleration of problem solving.
Could a machine learning-powered artificial intelligence eventually become sentient? A machine with an inner mind that can feel or express emotions might one day be possible, but the expert consensus is that it is out of reach with the current state of the technology.
Here is what some had to say:
Cognitive scientist Steven Pinker said that Lemoine is confused.
Lemoine “doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them),” Pinker wrote on Twitter.
These three concepts are what Pinker believes are required for any being to be conscious, and in his view Lamda is far from clearing any of those bars.
Gary Marcus, author of Rebooting AI, said more bluntly in a blog post entitled “Nonsense on stilts” that “In truth, literally everything that the system says is bullshit.”
Marcus says there is no concept of meaning behind Lamda’s words; the system is just “predicting what words best fit a given context.”
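A toy version of that idea fits in a few lines of Python. The sketch below (a simple bigram counter, vastly cruder than Lamda, and purely illustrative) has no notion of meaning at all; it only counts which word tends to follow which in its training text and parrots the most likely continuation.

    from collections import Counter, defaultdict

    # A toy next-word predictor: no notion of meaning, only counts of
    # which word follows which in its (absurdly small) training text.
    training_text = ("the cat sat on the mat and the cat slept "
                     "and the cat purred on the rug")

    follows = defaultdict(Counter)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

    def predict_next(word: str) -> str:
        """Return the word most often seen after `word` in training."""
        if word not in follows:
            return "?"
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat' (seen three times after 'the')
    print(predict_next("on"))   # 'the'

Lamda does something far more sophisticated, weighing whole passages of context rather than a single preceding word, but the principle of picking a statistically likely continuation is the same.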
Ilya Sutskever, chief scientist of OpenAI, tweeted cryptically in February that “it may be that today’s large neural networks are slightly conscious.”
Murray Shanahan, the research director at DeepMind, replied that today’s large neural networks are slightly conscious “in the same sense that a large field of wheat may be slightly pasta.”
It is worth reading Alex Hern’s experiments with GPT-3 showing how easy it is to generate complete and utter nonsense if you tweak your questions.
Randall Munroe, author of the Web comic XKCD, published an informative “conversation” with GPT-3 playing the role of William Shakespeare. Who knew that, if he were alive today, the Bard would add Shrek to Romeo and Juliet’s balcony scene?
So, nothing to worry about then?
Tom Chivers, author of The AI Does Not Hate You, argued that the thing we should really worry about is the competence of these systems, not their sentience.
“AI may or may not be becoming conscious, but it is certainly becoming competent,” he said. “It can solve problems and is becoming more general, and whether or not it’s got an inner life doesn’t really matter.”
There are already reports of AI-powered autonomous drones being used to kill people, and machine learning enabled deepfakes have the potential to make disinformation worse. And these are still early days.
The doomsday device in Dr Strangelove did not need to be intelligent or sentient to accidentally end the world. All it needed was simple logic (if attacked by the Americans, explode) applied in a really stupid way (the Soviets forgot to tell the Americans it existed).
As Terry Pratchett wrote, “real stupidity beats artificial intelligence every time.”