At present, a truly conversational chatbot — one that can understand the context of any conversational gambit, pick up on tonal ambiguities and retain a sense of how a discussion is evolving — is still a long way off.
The new generation of chatbots might be good at answering direct questions or interrupting debates, but they are ill-equipped to sustain coherence over a range of subjects.
What they might soon be capable of is maintaining short bursts of plausible dialogue with a predetermined narrative.
In a paper in the MIT Technology Review, Neudert suggested that in the near future, such “conversational bots might seek out susceptible users and approach them over private chat channels. They’ll eloquently navigate conversations and analyze a user’s data to deliver customized propaganda.”
In this scenario, and judging by what is already happening, the bots would have the capacity to “point people towards extremist viewpoints, counter arguments in a conversational manner [and] attack individuals with scripted hate speech, overwhelm them with spam, or get their accounts shut down by reporting their content as abusive.”
Of course, all of this will be done by a voice that engages us one on one, that talks just to us.
A number of fast-growing companies are beginning to offer the kind of technology that Neudert describes as a legitimate marketing tool. Several are official Facebook partners, building chatbots on its Messenger service.
They include the market-leading Russian-based company Chatfuel, which has enabled thousands of organizations to build Messenger chatbots, including headline acts, such as the NFL and the British Labour Party, and a number of smaller operations, such as Berlin-based Spectrm, which has created Messenger chatbots for the likes of CNN and Red Bull.
I spoke to Max Koziolek, one of the founders of Spectrm, who is (predictably) evangelical about the new way of businesses speaking “like a friend” to their users and customers.
Using a combination of natural language data and human input, Spectrm has created bots that can already converse on a narrow range of subject matter.
“On a specialist subject you can now get to 85 percent of queries pretty fast,” Koziolek said, “and then you will have the long tail, all those surprise questions which take ages to get right. But if you are making something to answer queries about Red Bull, for example, does it really need to know who is the chancellor of Germany?”
One of the most successful chatbots that Spectrm has created was a public health initiative to advise on the morning-after contraceptive pill.
“It is one of those times when someone might prefer to speak to a bot than a human because they are a bit embarrassed,” Koziolek said. “They talk to the bot about what they should do about having had unprotected sex and it understands naturally 75 percent of queries, even if they are writing in a language which is not very clear.”
Koziolek is confident that within a year of listening and learning, that capacity will have increased to nearly 100 percent.
Increasingly we will become used to almost every entity in our lives “talking to us as if it is a friend,” he said, a relationship that will require certain rules of engagement. “If you send messages after 11pm, that’s bad. And also if you send too many. I wouldn’t send more than two messages a day as a publisher, for example. It’s a very intimate space. A friend is sending me relevant information and at the right time.”
Far from being a new frontier in the propaganda wars, Koziolek believes — hugely optimistically — that such direct conversation could help to clear the Internet of hate speech, giving users more control over who they hear from.
Does it matter whether they know that the chat is from a machine?
“We don’t see big differences,” he said. “Sometimes our bots have a clear personality and sometimes they don’t. Bots which have a personality will always say ‘goodnight,’ for example. Or ‘How are you?’”
Do those types of bots produce longer conversations?
“Different kinds of conversations. Even though you know this thing is a robot, you behave differently toward it. I would say you cannot avoid that. Even though you know it is a machine, you immediately talk to it just like it is a human,” Koziolek said.
This blurring of the lines is less welcome to observers like Ferrara, who has had a front-row seat in the changing dialogues between human and machine.
I wondered whether, having observed at such close quarters for so long, he had anecdotally felt the mood of conversations changing, whether interactions had become angrier. He said he had.
“The thing was, I was becoming increasingly concerned about all sorts of phenomena,” he said. “I worked on a variety of problems, bots was one. I also looked at radicalization, at how Twitter was being used to recruit ISIS and at how conspiracies affected people’s decisionmaking when it comes to public health, when it comes to vaccines and smoking. I looked at how bots and other campaigns [that] had been used to try to manipulate the stock market. There are all sorts of things that have nefarious consequences.”
What aspect of this behavior alarmed him the most?
“The most striking thing to me to this day is that people are really, really bad at assessing the source of information,” he said.
One thing his team has shown is that people retweet information from bots at the same rate as information from humans.
“That is concerning for all sorts of reasons,” Ferrara said.
Despite such findings, he gets frustrated that people, for political purposes, still seek to dismiss the ways in which these phenomena have changed the nature of online discourse. As if the most targeted propaganda, employed on the most unregulated of mass media, had no effect on opinion or behavior.
One of his later projects has been to try to show how quickly messages can spread from, and be adopted by, targeted user groups.
Last year, Ferrara’s team received permission to introduce a series of “good” health messages to Twitter via bots posing as humans.
They quickly built up thousands of followers, revealing the ways in which a flood of messages, from apparently like-minded agents, can very quickly and effectively change the tone and attitude of online conversation.
Unfortunately, such “good” bots are vastly outnumbered by those seeking to spread discord and disinformation.
Where does he place his faith in a solution?
“This is not a problem you can solve with technology alone,” he said. “You need tech, you need some system of regulation that incentivizes companies to do that. It requires a lot of money. And then you need public opinion to care enough to want to do something about it.”
I suggested to him that there seems to be a grain of hope in that people are reaching out in greater numbers toward trusted, fact-checked news sources: Subscriptions to the New York Times and the Washington Post are up, and the Guardian and the Observer have notched up 1 million online supporters.
“It’s true,” he said. “But then I have a chart on my screen which I am looking at as I talk to you. It gives live information on the sources of things being retweeted by different groups. Way at the top is Breitbart: 31 percent. Second: Fox News. Then the Gateway Pundit [a far-right news site]. Looking at this, it is like we haven’t yet learned anything from 2016.”
This is part II of a two-part story. Part I was published in yesterday’s edition.