English mathematician Alan Turing’s famous test of whether machines could fool us into believing they were human — “the imitation game” — has become a mundane, daily question for all of us.
We are surrounded by machine voices and think nothing of conversing with them — although each time I hear my car tell me where to turn left, I am reminded of my grandmother, who, having installed a telephone late in life, used to routinely say good night to the speaking clock.
We find ourselves locked into interminable text chats with breezy automated bank tellers and offer our mother’s maiden name to a variety of robotic speakers that sound plausibly alive.
I have resisted the domestic spies of Apple and Amazon, but one or two friends jokingly describe the rapport they and their kids have built up with Amazon’s Alexa or Google’s Home Hub, and there is truth in the joke: The more you tell your virtual valet, the more you disclose of your wants and desires, the more speedily it can learn and commit to memory those last few fragments of your inner life you had kept to yourself.
As the line between human and digital voices blurs, our suspicions are raised: Who exactly are we talking to? No online conversation or message board spat is complete without its doubters: “Are you a bot?” Or, the contemporary door slam: “Bot: blocked.” Those doubts will only increase.
The ability of bots — a term that can describe any automated process present in a computer network — to mimic human online behavior and language has developed sharply over the past three years.
For the moment, most of us remain serenely confident that we can tell the difference between a human presence and the voices of the encoded “foot soldiers” of the Internet that perform more than 50 percent of its tasks and contribute about 20 percent of all social media “conversation.”
However, that confidence does not extend to those who have devoted the past decade to trying to detect, and defend against, that bot invasion.
Naturally, because of the scale of the task, they must enlist bots to help them find bots. The most accessible automated Turing test is the creation of Emilio Ferrara, principal investigator in machine intelligence and data science at the University of Southern California.
In its infancy the bot detector “BotOrNot?” allowed you to use many of the conventional indicators of automation — abnormal account activity, repetition, generic profiles — to determine the origin of a Twitter feed.
Now called the Botometer — after the original was targeted by copycat hacks — it boasts a sophisticated algorithm based on all that it has learned. It is a neat trick. You can feed it your own — or anyone else’s — Twitter name and quickly establish how bot-like your bons mots are.
On a scale where zero is human and five is machine, mine scored 0.2, putting @TimAdamsWrites on a sentient level with @JeremyCorbyn, but — disturbingly — slightly more robotic than @theresa_may.
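For anyone who wants to try the same experiment programmatically, here is a minimal sketch of querying the service, assuming the community-maintained botometer Python client and placeholder credentials; the exact shape of the response may differ between versions of the API.

```python
import botometer

# Placeholder credentials: Botometer is served through RapidAPI and also
# needs Twitter app keys; substitute your own before running.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score a single account by screen name; the response includes per-category
# scores, with higher values indicating more bot-like behavior.
result = bom.check_account("@TimAdamsWrites")
print(result)
```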
Speaking to me on the phone last week, Ferrara explained how in the five years since BotOrNot has been up and running, detection has become vastly more complex.
“The advance in artificial intelligence [AI] and natural language processing makes the bots better each day,” he said.
The incalculable data sets that Google and others have harvested from our incessant online chatter are helping to make bots sound much more like us.
The Botometer is powered by two systems. One is a “white box” that has been trained over the years to examine statistical patterns in the language, “as well as the sentiment, the opinion” of tweets, Ferrara said.
In all, there are more than 1,200 weighted features that a Twitter feed is measured against to determine if it has a pulse. Alongside that, the Botometer has a “black-box model” fed with a mass of data from bots and humans, which has developed its own sets of criteria to separate human from machine.
Ferrara and his team are not exactly sure what this system relies on for its judgements, but they are impressed by its accuracy.
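Botometer’s own models are not public in this form, but the general approach described above, scoring an account against weighted behavioral features learned from labeled examples of bots and humans, can be illustrated with a toy supervised classifier. The feature names and numbers below are invented for illustration and assume the scikit-learn library; this is a sketch of the idea, not Ferrara’s system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented features per account, purely for illustration:
# [tweets per day, retweet ratio, followers/friends ratio,
#  has profile photo, median seconds between tweets]
X_train = np.array([
    [150.0, 0.97, 0.02, 0, 9.0],      # labeled bot
    [220.0, 0.99, 0.01, 0, 4.0],      # labeled bot
    [5.0,   0.20, 1.30, 1, 6200.0],   # labeled human
    [3.0,   0.05, 0.90, 1, 11000.0],  # labeled human
])
y_train = np.array([1, 1, 0, 0])      # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Estimated probability that an unseen account is automated
unseen = np.array([[90.0, 0.85, 0.05, 0, 40.0]])
print(clf.predict_proba(unseen)[0][1])
```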
When Ferrara started on this work, he felt he had developed his own sixth sense for sniffing out AI on Twitter. Now he is no longer so confident.
“Today it is not clear to me that I interact with as many humans as I thought I did,” he said. “We look hard at some accounts, we run them through the algorithm and it is undecided. Quite often now it is a coin toss. The language seems too good to be true.”
Not all bots aim to deceive; many perform routine operations. Bots were originally created to help automate repetitive tasks, saving companies money and time. Some bots help to refresh your Facebook feed or keep you up to date with the weather.
On social media, bots were originally coded to search for hashtags and keywords, and retweet or amplify messages: “OMG have you seen this?” They acted as cheerleaders for Justin Bieber, Star Wars or Taylor Swift. There were “vanity bots,” which added numbers and fake “likes” to profiles to artificially enhance their status, and “traffic bots” designed to drive customers to a particular shopping Web site.
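That early amplification pattern is simple enough to sketch: find tweets carrying a hashtag and retweet them. The snippet below is an illustration of the mechanism only, assuming the tweepy library (v4) and placeholder credentials; it is not any actual bot’s code.

```python
import tweepy

# Placeholder credentials; a real bot would need a Twitter developer account.
client = tweepy.Client(
    bearer_token="YOUR_BEARER_TOKEN",
    consumer_key="YOUR_CONSUMER_KEY",
    consumer_secret="YOUR_CONSUMER_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# Find recent tweets carrying the target hashtag (excluding tweets that are
# themselves retweets), then amplify each one from the bot's own account.
results = client.search_recent_tweets(query="#StarWars -is:retweet", max_results=10)
for tweet in results.data or []:
    client.retweet(tweet.id)
```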
There were also bots that acted as grammarians, making pedantic corrections to tweets, or simple gags like Robot J. McCarthy, which sought out conversations using the word “communist” and replied with a nonsensical slogan.
At some point, political bots entered the fray, mostly on Twitter, with the intent of spreading propaganda and misinformation. Originally these appear to have been the work of individual hackers, before the techniques were adopted by organized and lavishly funded groups.
These bots proved to be a highly effective way to broadcast extremist viewpoints and spread conspiracy theories, but were also programmed to search out such views from other, genuine accounts by liking, sharing, retweeting and following to give them disproportionate prominence, Ferrara said.
It is these bots that the social media platforms have been trying to cull in the wake of investigations into the 2016 US presidential election by US Special Counsel Robert Mueller and others.
When I spoke to Ferrara, he was looking at the data from the US midterm elections, examining the viral spread of fake news and the ways in which it was still being “weaponized” by battalions of automated users.
“If you were an optimist, you would think that the numbers look OK,” he said. “Between 10 and 11 percent of the users involved in conversations around the election are flagged as bots — and that is significantly less than in 2016, when it was something like 20 percent.”
“The pessimistic interpretation is that our bot-detection systems are not picking up the more sophisticated bots, which look just like humans even to the eyes of the algorithms,” Ferrara added.
The unseen global army of “bot herders,” those shadowy individuals, corporations and rogue government agencies that send their bots out into the virtual world, has a number of advantages in this latter respect.
One is that they are now able to find enormous amounts of natural-language data to develop the next generation of talkative bots.
The other is that these creations can exploit our tendency to ascribe trusted human characteristics to voices even if, on a rational level, we suspect that they are artificial. That psychology is as old as electronic communication itself.
All modern chatbots trace their family tree back to the experiments by Joseph Weizenbaum with Eliza, named after Ms Doolittle in Pygmalion for “her” ability to master received pronunciation.
In 1966, Weizenbaum, a German-American professor at the Massachusetts Institute of Technology, created a prototype chatbot that searched for keywords in conversations conducted with humans typing at keyboards. The rudimentary program would pick up these words and use them in its reply. If it did not locate a useful word, it would offer a neutral response.
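The mechanism can be sketched in a few lines. The rules below are invented for illustration rather than Weizenbaum’s originals, but they follow the same recipe: match a keyword, reflect the speaker’s own words back in a canned reply, and fall back to a neutral prompt when nothing matches.

```python
import re

# Invented rules in the spirit of Eliza, not Weizenbaum's originals.
RULES = [
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmother\b|\bfather\b|\bfamily\b", "Tell me more about your family."),
    (r"\balways\b", "Can you think of a specific example?"),
]
NEUTRAL = "Please go on."  # fallback when no keyword is found

def eliza_reply(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            # Reuse the speaker's own words where the rule captures them
            return template.format(*match.groups()) if match.groups() else template
    return NEUTRAL

print(eliza_reply("He says I'm depressed much of the time."))  # no keyword -> "Please go on."
print(eliza_reply("My mother takes care of me."))              # -> "Tell me more about your family."
```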
Weizenbaum set up Eliza to mimic the tone and manner of an east coast psychotherapist, a dynamic that allowed its voice to “adopt the pose of knowing almost nothing of the real world.” The professor had assumed that his bit of trickery would be a useful way of teaching his students about the structures of language and dialogue.
However, what he discovered was that several of them became hooked on conversing with Eliza and telling her their troubles, opening up to the program as if it were a therapist. When Weizenbaum’s secretary asked him to leave the room one morning because she was in the middle of a session with Eliza, he became alarmed by the general willingness to be so seduced by a machine voice.
Weizenbaum published an academic paper that included the first notable conversation between woman and machine. It went like this:
“Woman: Men are all alike.
“Eliza: In what way?
“W: They’re always bugging us about something or other.
“E: Can you think of a specific example?
“W: Well, my boyfriend made me come here.
“E: Your boyfriend made you come here?
“W: He says I’m depressed much of the time.
“E: I am sorry to hear that you are depressed.
“W: It’s true. I am unhappy.
“E: Do you think coming here will help you not to be unhappy?
“W: I need some help, that much seems certain.
“E: What would it mean to you if you got some help?
“W: Perhaps I could learn to get along with my mother.
“E: Tell me more about your family.
“W: My mother takes care of me.
“E: Who else in your family takes care of you?
“W: My father.
“E: Your father?
“W: You are like my father in some ways.
“E: What resemblance do you see?
“W: You are not very aggressive, but I think you don’t want me to notice that.
“E: What makes you think I am not very aggressive?
“W: You don’t argue with me?”
Weizenbaum later became a fierce critic of the ways that human intelligence was becoming in thrall to AI, which he viewed as a corporatized, reactionary force. He was stunned by how quickly the users of Eliza gave human agency to what was a relatively simple piece of code.
It indicated to him that the brain had evolved to view all speech as meaningful, even if it came from a patently fake source. He worried, extremely presciently, about the implications of this.
“The whole issue of the credibility [to humans] of machine output demands investigation,” he concluded in his paper. “Important decisions increasingly tend to be made in response to computer output. Eliza shows, if nothing else, how easy it is to create and maintain the illusion of understanding.”
The many progeny of Eliza have evolved into chatbots — bits of software designed to mimic human conversation. They include recent entries into the annual Loebner prize, which offers chatbot contestants the chance to fool a panel of human judges with their intelligence.
The comforting principle of telling our deepest fears to a machine is also exploited in various “therapy” platforms, marketed as a genuine alternative to conventional talking cures. Each of them trades on the idea of our fundamental desire to be listened to, the impulse that shapes social media.
Lisa-Maria Neudert is part of the computational propaganda project at Oxford University, which studies the ways in which political bots have been used to spread disinformation and distort online discourse.
The seductive intimacy of chatbots will prove to be the next battleground in this ongoing war, Neudert said.
The Oxford research team began examining the huge growth of bot activity on social media after Malaysia Airlines Flight 17 was shot down with a Russian missile in 2014. A dizzying number of competing conspiracy theories were “seeded” and encouraged to spread by a “red army” of automated agents, muddying the facts of the atrocity.
The more Oxford researchers looked, the more they saw how similar patterns of online activity were amplifying specific hashtags or distorting news.
In the beginning, the bots would rely on volume, Neudert said.
“For example, in the Arab spring, bots were flooding hashtags that activists were using underground to make the conversation useless,” she said.
Or, like Eliza, bots would respond to a keyword to get a marginal topic trending and, often, into the news. This was an effective, but blunt instrument.
“If I tweet something saying ‘I hate Trump,’” an old-style bot “would send me a message about Trump because it is responding to that keyword, but if I say ‘I love Trump,’ it would send me the same message,” Neudert said.
These bots were not smart enough to recognize intent, but that is changing.
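The difference is easy to caricature in code. The sketch below contrasts the two behaviors Neudert describes: an old-style bot that fires on the keyword alone, and a marginally smarter one that first runs a crude sentiment check before choosing a response. Both the lexicon and the canned actions are invented for illustration.

```python
# Invented lexicon and canned actions, purely to contrast the two behaviors.
POSITIVE = {"love", "great", "support"}
NEGATIVE = {"hate", "terrible", "oppose"}

def keyword_bot(tweet):
    # Old style: fires on the keyword alone, whatever the user actually meant.
    if "trump" in tweet.lower():
        return "SEND_STANDARD_MESSAGE"
    return None

def intent_aware_bot(tweet):
    # Newer style: only engages once a crude sentiment check guesses the intent.
    words = set(tweet.lower().split())
    if "trump" not in words:
        return None
    if words & NEGATIVE:
        return "SEND_MESSAGE_FOR_CRITICS"
    if words & POSITIVE:
        return "SEND_MESSAGE_FOR_SUPPORTERS"
    return None

for tweet in ("I love Trump", "I hate Trump"):
    print(tweet, "->", keyword_bot(tweet), "|", intent_aware_bot(tweet))
```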
“The commercial companies that are using AI and natural language processing right now are already building such technologies,” she said. “What we are doing as a project is to try to find out if the political actors are already using them also.”
Neudert is particularly interested in the new generation of branded chatbots that push content and initiate conversations on messaging platforms. Such chatbots — which openly declare themselves to be automated — represent a new way for businesses and news services to attract your attention, giving the impression of speaking just to you.
Neudert expects that propaganda bots will use the same technology, but without declaring themselves.
“They’ll present themselves as human users participating in online conversation in comment sections, group chats and message boards,” she said.
QUICK GUIDE
Chatbots for health, wealth and music
WoeBot
Designed to help those suffering from depression by facilitating quick conversations. It will even check up on you every now and then to see how you are doing. The company bills it as a “robot friend, who’s ready to listen.”
Cleo
An AI chatbot aimed at helping you to organize your finances. It connects with your bank account and can give you detailed information via Facebook Messenger about what you spent and where you spent it.
Robot Pires
Arsenal FC invites you to talk to a cartoon bot of Robert Pires — ask for news about the club, as well as the player’s own record, including how many goals he scored for the Gunners.
Paul McCartney
The “official Messenger bot for the music legend Paul McCartney” will react with gifs of the singer and can tell you when he is on tour, about his latest projects and more. However, it does not respond too well to questions. When asked: “How old is Paul?” the bot replied with a video of a flying baguette.
TfL TravelBot
Designed to allow you to check on how London’s transit system is running, all via Facebook Messenger. It can be asked about the status of lines and, when asked how to get from A to B, will provide three links with the fastest routes.
Lark
A health coach that can help users manage the symptoms of hypertension, diabetes, etc. Using data gathered from the user’s connected devices, it makes data-driven nudges and recommendations to encourage healthier behavior.
OllyBot
The Olly Murs official chatbot can answer questions about the singer, provide fans with information about his upcoming tours and offer playlists of his music. The bot replicates the celebrity’s tone by calling himself “29+3” years old and ending messages with winking emojis.
Insomnobot-3000
Created by mattress company Casper, this chatbot allows sleepless users to message it for recommendations and suggestions that may improve their sleeping routine.
This is part I of a two-part story. Part II will be published in tomorrow’s edition.