At present, a truly conversational chatbot, one that can understand the context of any conversational gambit, pick up tonal ambiguities and retain a sense of how the discussion is evolving, is still a long way off.
The new generation of chatbots might be good at answering direct questions or interrupting debates, but they are ill-equipped to sustain coherence over a range of subjects.
What they might soon be capable of is maintaining short bursts of plausible dialogue with a predetermined narrative.
In an article in the MIT Technology Review, Neudert suggested that in the near future, such “conversational bots might seek out susceptible users and approach them over private chat channels. They’ll eloquently navigate conversations and analyze a user’s data to deliver customized propaganda.”
In this scenario, and judging by what is already happening, the bots would have the capacity to “point people towards extremist viewpoints, counter arguments in a conversational manner [and] attack individuals with scripted hate speech, overwhelm them with spam, or get their accounts shut down by reporting their content as abusive.”
Of course all of this will be done by a voice that engages one on one, that talks just to us.
There are a number of fast-growing companies beginning to offer, as a legitimate marketing tool, the kind of technology that Neudert describes.
Several are official Facebook partners building on its Messenger service.
They include the market-leading Russian-based company Chatfuel, whose platform has enabled thousands of organizations, including headline acts such as the NFL and the British Labour Party, to build Messenger chatbots, as well as a number of smaller operations, such as Berlin-based Spectrm, which has created Messenger chatbots for the likes of CNN and Red Bull.
I spoke to Max Koziolek, one of the founders of Spectrm, who is (predictably) evangelical about this new way for businesses to speak “like a friend” to their users and customers.
Using a combination of natural language data and human input, Spectrm has created bots that can already converse on a narrow range of subject matter.
“On a specialist subject you can now get to 85 percent of queries pretty fast,” Koziolek said, “and then you will have the long tail, all those surprise questions which take ages to get right. But if you are making something to answer queries about Red Bull, for example, does it really need to know who is the chancellor of Germany?”
One of the most successful chatbots Spectrm has created was for a public health initiative advising on the morning-after contraceptive pill.
“It is one of those times when someone might prefer to speak to a bot than a human because they are a bit embarrassed,” Koziolek said. “They talk to the bot about what they should do about having had unprotected sex and it understands naturally 75 percent of queries, even if they are writing in a language which is not very clear.”
He is confident that, after a year of listening and learning, that figure will have risen to nearly 100 percent.
Increasingly we will become used to almost every entity in our lives “talking to us as if it is a friend,” he said, a relationship that will require certain rules of engagement. “If you send messages after 11pm, that’s bad. And also if you send too many. I wouldn’t send more than two messages a day as a publisher, for example. It’s a very intimate space. A friend is sending me relevant information and at the right time.”
Far from being a new frontier in the propaganda wars, Koziolek believes — hugely optimistically — that such direct conversation could help to clear the Internet of hate speech, giving users more control over who they hear from.
Does it matter whether they know that the chat is from a machine?
“We don’t see big differences,” he said. “Sometimes our bots have a clear personality and sometimes they don’t. Bots which have a personality will always say ‘goodnight,’ for example. Or ‘How are you?’”
Do those types of bots produce longer conversations?
“Different kinds of conversations. Even though you know this thing is a robot, you behave differently toward it. I would say you cannot avoid that. Even though you know it is a machine, you immediately talk to it just like it is a human,” Koziolek said.
This blurring of the lines is less welcome to observers like Ferrara, who has had a front-row seat in the changing dialogues between human and machine.
I wondered whether, having observed these exchanges at close quarters for so long, he had anecdotally felt the mood of conversations changing, whether interactions had become angrier.
He said he had.
“The thing was, I was becoming increasingly concerned about all sorts of phenomena,” he said. “I worked on a variety of problems, bots was one. I also looked at radicalization, at how Twitter was being used to recruit ISIS and at how conspiracies affected people’s decisionmaking when it comes to public health, when it comes to vaccines and smoking. I looked at how bots and other campaigns [that] had been used to try to manipulate the stock market. There are all sorts of things that have nefarious consequences.”
What aspect of this behavior alarmed him the most?
“The most striking thing to me to this day is that people are really, really bad at assessing the source of information,” he said.
One thing his team have shown is that the rate at which people retweet information from bots is identical to that from humans.
“That is concerning for all sorts of reasons,” Ferrara said.
Despite the revelation of such findings, he gets frustrated that people, for political purposes, still seek to dismiss the ways in which these phenomena have changed the nature of online discourse. As if the most targeted propaganda, employed on the most unregulated of mass media, had no effect on opinion or behavior.
One of his more recent projects has been to try to show how quickly messages can spread to, and be adopted by, targeted user groups.
Last year, Ferrara’s team received permission to introduce a series of “good” health messages to Twitter via bots posing as humans.
The bots quickly built up thousands of followers, revealing how a flood of messages from apparently like-minded agents can swiftly and effectively change the tone and attitude of online conversation.
Unfortunately, such “good” bots are vastly outnumbered by those seeking to spread discord and disinformation.
Where does he place his faith in a solution?
“This is not a problem you can solve with technology alone,” he said. “You need tech, you need some system of regulation that incentivizes companies to do that. It requires a lot of money. And then you need public opinion to care enough to want to do something about it.”
I suggested to him that there seems to be a grain of hope in that people are reaching out in greater numbers toward trusted, fact-checked news sources: Subscriptions to the New York Times and the Washington Post are up, and the Guardian and the Observer have notched up 1 million online supporters.
“It’s true,” he said. “But then I have a chart on my screen which I am looking at as I talk to you. It gives live information on the sources of things being retweeted by different groups. Way at the top is Breitbart: 31 percent. Second: Fox News. Then the Gateway Pundit [a far-right news site]. Looking at this, it is like we haven’t yet learned anything from 2016.”
This is part II of a two-part story. Part I was published in yesterday’s edition.