Sun, Apr 14, 2019

The role of Amazon’s human eavesdroppers needs demystifying

Firms do not readily reveal the human input required to make AI software work, because people are more willing to entrust their secrets to a disembodied algorithm

By Leonid Bershidsky  /  Bloomberg Opinion

Alexa, are you really a human?

The revelation that a large team of Amazon.com Inc employees listens to conversations recorded by the company’s digital assistant has exposed the contrast between the hype of artificial intelligence (AI) and the reality of the armies of underpaid people who make the technology work in real life. It is these battalions that are leading Silicon Valley’s massive privacy invasion.

AI is supposed to be good at pattern recognition and natural language processing. However, it is all but impossible to train a neural network to recognize speech or faces with certainty. Algorithms that have to interact seamlessly with people need to be constantly retrained to allow for changes in slang, population movements that bring new accents, cultural phenomena and fashion trends.

That does not happen by magic; the algorithms cannot find out about the latest pop sensation or TV series all by themselves.

During last year’s soccer World Cup in Russia, the authorities used a sophisticated facial recognition system to exclude known hooligans from stadiums. It worked — until the final game, when members of punk band Pussy Riot rushed onto the field, dressed in police uniforms. They were not in the database.

For AI to work, it needs constant human input, but companies selling AI-based products have two reasons for not telling customers about the role played by what Wired staff writer Lily Hay Newman has called their “covert human workforces.”

One is that using thousands of people to annotate data collected from customers does not sound as magical as “deep learning,” “neural networks,” and “human-level image and speech recognition.”

The other is that people are prepared to entrust their secrets to a disembodied algorithm, in the same way that King Midas’ barber whispered to the reeds about the king’s donkey ears.

However, if those secrets risked being heard by people, especially those with access to information that might identify the customer, it would be a different matter.

In the Midas myth, the barber’s whispers were picked up and amplified by the echo — coincidentally, the name of one Amazon device used to summon Alexa.

The Amazon employees who annotate Alexa’s audio recordings and help train the virtual assistant to recognize that Taylor Swift does not mean a rush order for a suit do not see customers’ full names and addresses, but they apparently do get access to account numbers and device serial numbers.

That is not a big distinction — especially when it comes to private conversations involving financial transactions or sensitive family matters. These, of course, are picked up by Alexa when the digital assistant is accidentally triggered.

There is not much difference between this level of access and that enjoyed by employees at the Kiev office of Ring, the security camera firm owned by Amazon.

Earlier this year, The Intercept reported that, unbeknownst to clients, employees tasked with annotating videos were watching camera feeds from inside and outside people’s homes.

Tellingly, the wording of Amazon’s response to the Intercept’s story was identical to the statement it provided to Bloomberg.

The firm said that it has “zero tolerance for abuse of our systems.”

This kind of boilerplate response does little to inspire trust.

Amazon is not, of course, the only company that does this kind of thing.
