Tue, Nov 27, 2018 - Page 9

Charge of the chatbots: How do you tell who is human online?

Automated ‘voices’ that were supposed to perform mundane tasks online now also spread hate speech and polarize opinion, acting on people’s desire to be listened to and the impulse to ascribe trusted human characteristics to voices

By Tim Adams  /  The Observer

Illustration: Yusha

English mathematician Alan Turing’s famous test of whether machines could fool us into believing they were human — “the imitation game” — has become a mundane, daily question for all of us.

We are surrounded by machine voices and think nothing of conversing with them — although each time I hear my car tell me where to turn left, I am reminded of my grandmother, who, having installed a telephone late in life, used to routinely say good night to the speaking clock.

We find ourselves locked into interminable text chats with breezy automated bank tellers and offer our mother’s maiden name to a variety of robotic speakers that sound plausibly alive.

I have resisted the domestic spies of Apple and Amazon, but one or two friends jokingly describe the rapport they and their kids have built up with Amazon’s Alexa or Google’s Home Hub — and they are right about that: The more you tell your virtual valet, the more you disclose of wants and desires, the more speedily it can learn and commit to memory those last few fragments of your inner life you had kept to yourself.

As the line between human and digital voices blurs, our suspicions are raised: Who exactly are we talking to? No online conversation or message board spat is complete without its doubters: “Are you a bot?” Or, the contemporary door slam: “Bot: blocked.” Those doubts will only increase.

The ability of bots — a term that can describe any automated process present in a computer network — to mimic human online behavior and language has developed sharply over the past three years.

For the moment, most of us remain serenely confident that we can tell the difference between a human presence and the voices of the encoded “foot soldiers” of the Internet that perform more than 50 percent of its tasks and contribute about 20 percent of all social media “conversation.”

However, that confidence does not extend to those who have devoted the past decade to trying to detect, and defend against, that bot invasion.

Naturally, because of the scale of the task, they must enlist bots to help them find bots. The most accessible automated Turing test is the creation of Emilio Ferrara, principal investigator in machine intelligence and data science at the University of Southern California.

In its infancy, the bot detector “BotOrNot?” allowed you to apply many of the conventional indicators of automation — abnormal account activity, repetition, generic profiles — to determine the origin of a Twitter feed.
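Those conventional signals can be sketched as a toy scorer. This is an illustration only, with made-up thresholds and a simplified 0-to-5 scale; it is not the detector's actual algorithm.

```python
from collections import Counter

def toy_bot_score(tweets, tweets_per_day, has_default_profile):
    """Toy bot-likeness score: 0 = human-like, 5 = bot-like.
    Loosely based on the signals named in the article (abnormal
    activity, repetition, generic profile). Thresholds are invented
    for illustration, not taken from any real detector."""
    score = 0.0
    # Abnormal account activity: very high posting rates look automated.
    if tweets_per_day > 50:
        score += 2.0
    elif tweets_per_day > 20:
        score += 1.0
    # Repetition: fraction of exact-duplicate tweets in the sample.
    if tweets:
        dup_ratio = 1 - len(Counter(tweets)) / len(tweets)
        score += 2.0 * dup_ratio
    # Generic profile: default avatar, empty bio, and so on.
    if has_default_profile:
        score += 1.0
    return min(score, 5.0)

# A spammy, high-volume account with a generic profile scores high;
# a low-volume account posting varied tweets scores near zero.
spam = toy_bot_score(["buy now!"] * 8 + ["hello"] * 2,
                     tweets_per_day=60, has_default_profile=True)
human = toy_bot_score(["morning", "lunch pic", "hot take"],
                      tweets_per_day=3, has_default_profile=False)
```

Real detectors weigh hundreds of such features with a trained classifier rather than hand-set cutoffs, but the intuition is the same: each suspicious signal nudges the score toward the machine end of the scale.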

Now called the Botometer — after the original was targeted by copycat hacks — it boasts a sophisticated algorithm based on all that it has learned. It is a neat trick. You can feed it your own — or anyone else’s — Twitter name and quickly establish how bot-like your bons mots are.

On a scale where zero is human and five is machine, mine scored 0.2, putting @TimAdamsWrites on a sentient level with @JeremyCorbyn, but — disturbingly — slightly more robotic than @theresa_may.

Speaking to me on the phone last week, Ferrara explained how, in the five years BotOrNot has been up and running, detection has become vastly more complex.

“The advance in artificial intelligence [AI] and natural language processing makes the bots better each day,” he said.

The incalculable data sets that Google and others have harvested from our incessant online chatter are helping to make bots sound much more like us.
