A truly conversational chatbot — one that can understand the context of any conversational gambit, pick up tonal ambiguities and retain a sense of how the discussion is evolving — is still a long way off.
The new generation of chatbots might be good at answering direct questions or interrupting debates, but they are ill-equipped to sustain coherence over a range of subjects.
What they might soon be capable of is maintaining short bursts of plausible dialogue within a predetermined narrative.
In a paper in the MIT Review, Neudert suggested that in the near future, such “conversational bots might seek out susceptible users and approach them over private chat channels. They’ll eloquently navigate conversations and analyze a user’s data to deliver customized propaganda.”
In this scenario, and judging by what is already happening, the bots would have the capacity to “point people towards extremist viewpoints, counter arguments in a conversational manner [and] attack individuals with scripted hate speech, overwhelm them with spam, or get their accounts shut down by reporting their content as abusive.”
Of course all of this will be done by a voice that engages one on one, that talks just to us.
There are a number of fast-growing companies that are beginning to offer the kind of technology Neudert describes as a legitimate marketing tool.
Several are official partners of Facebook, building on its Messenger service.
They include the market-leading Russian-based company Chatfuel, which has enabled thousands of organizations to build Messenger chatbots, including headline acts, such as the NFL and the British Labour Party, and a number of smaller operations, such as Berlin-based Spectrm, which has created Messenger chatbots for the likes of CNN and Red Bull.
I spoke to Max Koziolek, one of the founders of Spectrm, who is (predictably) evangelical about the new way of businesses speaking “like a friend” to their users and customers.
Using a combination of natural language data and human input, Spectrm has created bots that can already converse on a narrow range of subject matter.
“On a specialist subject you can now get to 85 percent of queries pretty fast,” Koziolek said, “and then you will have the long tail, all those surprise questions which take ages to get right. But if you are making something to answer queries about Red Bull, for example, does it really need to know who is the chancellor of Germany?”
One of the most successful chatbots Spectrm has created was for a public health initiative advising on the morning-after contraceptive pill.
“It is one of those times when someone might prefer to speak to a bot than a human because they are a bit embarrassed,” Koziolek said. “They talk to the bot about what they should do about having had unprotected sex and it understands naturally 75 percent of queries, even if they are writing in a language which is not very clear.”
He is confident that, within a year of listening and learning, that capacity will have increased to nearly 100 percent.
Increasingly we will become used to almost every entity in our lives “talking to us as if it is a friend,” he said, a relationship that will require certain rules of engagement. “If you send messages after 11pm, that’s bad. And also if you send too many. I wouldn’t send more than two messages a day as a publisher, for example. It’s a very intimate space. A friend is sending me relevant information and at the right time.”
Many social media followers of celebrities, such as Justin Bieber and Taylor Swift, have turned out to be bots.
Far from being a new frontier in the propaganda wars, Koziolek believes — hugely optimistically — that such direct conversation could help to clear the Internet of hate speech, giving users more control over who they hear from.
Does it matter whether they know that the chat is from a machine?
“We don’t see big differences,” he said. “Sometimes our bots have a clear personality and sometimes they don’t. Bots which have a personality will always say ‘goodnight,’ for example. Or ‘How are you?’”
Do those types of bots produce longer conversations?
“Different kinds of conversations. Even though you know this thing is a robot, you behave differently toward it. I would say you cannot avoid that. Even though you know it is a machine, you immediately talk to it just like it is a human,” Koziolek said.
This blurring of the lines is less welcome to observers like Ferrara, who has had a front-row seat in the changing dialogues between human and machine.
I wondered whether, having observed at such close quarters for so long, he had anecdotally felt the mood of conversations changing, whether interactions had become angrier. He said he had.
“The thing was, I was becoming increasingly concerned about all sorts of phenomena,” he said. “I worked on a variety of problems, bots was one. I also looked at radicalization, at how Twitter was being used to recruit ISIS and at how conspiracies affected people’s decisionmaking when it comes to public health, when it comes to vaccines and smoking. I looked at how bots and other campaigns [that] had been used to try to manipulate the stock market. There are all sorts of things that have nefarious consequences.”
What aspect of this behavior alarmed him the most?
“The most striking thing to me to this day is that people are really, really bad at assessing the source of information,” he said.
One thing his team has shown is that people retweet information from bots at the same rate as information from humans.
“That is concerning for all sorts of reasons,” Ferrara said.
Despite such findings, he gets frustrated that people, for political purposes, still seek to dismiss the ways in which these phenomena have changed the nature of online discourse. As if the most targeted propaganda, employed on the most unregulated of mass media, had no effect on opinion or behavior.
One of his more recent projects has been to try to show how quickly messages can spread from, and be adopted by, targeted user groups.
Last year, Ferrara’s team received permission to introduce a series of “good” health messages to Twitter via bots posing as humans.
They quickly built up thousands of followers, revealing the ways in which a flood of messages, from apparently like-minded agents, can very quickly and effectively change the tone and attitude of online conversation.
Unfortunately, such “good” bots are vastly outnumbered by those seeking to spread discord and disinformation.
Where does he place his faith in a solution?
“This is not a problem you can solve with technology alone,” he said. “You need tech, you need some system of regulation that incentivizes companies to do that. It requires a lot of money. And then you need public opinion to care enough to want to do something about it.”
I suggested to him that there seems to be a grain of hope in that people are reaching out in greater numbers toward trusted, fact-checked news sources: Subscriptions to the New York Times and the Washington Post are up, and the Guardian and the Observer have notched up 1 million online supporters.
“It’s true,” he said. “But then I have a chart on my screen which I am looking at as I talk to you. It gives live information on the sources of things being retweeted by different groups. Way at the top is Breitbart: 31 percent. Second: Fox News. Then the Gateway Pundit [a far-right news site]. Looking at this, it is like we haven’t yet learned anything from 2016.”
This is part II of a two-part story. Part I was published in yesterday’s edition.