Alexa, are you really a human?
The revelation that a large team of Amazon.com Inc employees listens to conversations recorded by the company’s digital assistant has exposed the contrast between the hype of artificial intelligence (AI) and the reality of the armies of underpaid people who make the technology work in real life. It is these battalions that are leading Silicon Valley’s massive privacy invasion.
AI is supposed to be good at pattern recognition and natural language processing. However, it is all but impossible to train a neural network to recognize speech or faces with certainty. Algorithms that have to interact seamlessly with people need to be constantly retrained to allow for changes in slang, new accents brought by population movements, cultural phenomena and fashion trends.
That does not happen by magic; the algorithms cannot find out about the latest pop sensation or TV series all by themselves.
During last year’s soccer World Cup in Russia, the authorities used a sophisticated facial recognition system to exclude known hooligans from stadiums. It worked — until the final game, when members of punk band Pussy Riot rushed onto the field, dressed in police uniforms. They were not in the database.
For AI to work, it needs constant human input, but companies selling AI-based products have two reasons not to tell customers about the role played by what Wired staff writer Lily Hay Newman has called their “covert human workforces.”
One is that using thousands of people to annotate data collected from customers does not sound as magical as “deep learning,” “neural networks,” and “human-level image and speech recognition.”
The other is that people are prepared to entrust their secrets to a disembodied algorithm, in the same way that King Midas’ barber whispered to the reeds about the king’s donkey ears.
However, if those secrets risked being heard by people, especially those with access to information that might identify the customer, it would be a different matter.
In the Midas myth, the barber’s whispers were picked up and amplified by the echo — coincidentally, the name of one Amazon device used to summon Alexa.
Employees who annotate Amazon’s audio recordings and help train the virtual assistant to recognize that Taylor Swift does not mean a rush order for a suit do not see customers’ full names and addresses, but apparently do get access to account numbers and device serial numbers.
That is not a big distinction — especially when it comes to private conversations involving financial transactions or sensitive family matters. These, of course, are picked up by Alexa when the digital assistant is accidentally triggered.
There is not much difference between this level of access and that enjoyed by employees at the Kiev office of Ring, the security camera firm owned by Amazon.
The Intercept earlier this year reported that, unbeknownst to clients, employees tasked with annotating videos were watching camera feeds from inside and outside people’s homes.
Tellingly, the wording of Amazon’s response to the Intercept’s story was identical to the one it provided to Bloomberg, which broke the news that humans review Alexa recordings.
The firm said that it has “zero tolerance for abuse of our systems.”
This kind of boilerplate response does little to inspire trust.
Amazon is not, of course, the only company that does this kind of thing.
In 2017, Expensify, which helps companies manage employees’ expense reports, hired workers on Mechanical Turk, the Amazon-owned labor exchange, to analyze receipts.
Last year, the Guardian wrote of the rise of what it called pseudo-AI, and identified a number of cases where tech companies hired low-paid people to generate training data for AI.
The line between training and imitation is thin: To create the necessary dataset, people are sometimes needed to replicate the work expected of the algorithm, and this can go on for a long time.
Facebook Inc and Google are unlikely to ever get rid of the tens of thousands of contractors who scan posts for offensive, sensitive or criminal content, because their algorithms will never be good enough to prevent scandalous failures without human help.
In principle, there is nothing wrong with this human participation in AI-based endeavors. It is actually how things should work if a cataclysm in the labor market is to be avoided.
As Daron Acemoglu from the Massachusetts Institute of Technology and Pascual Restrepo from Boston University wrote in a recent paper: “The effects of automation are counterbalanced by the creation of new tasks in which labor has a comparative advantage.”
Those “covert human workforces” are doing tasks that would not have emerged without AI.
The problem lies elsewhere. Companies working on AI projects should be honest about human participation.
Facebook and Google already are: Their moderators and quality raters do work similar to that performed by Amazon’s Alexa-training team.
Of course, openness would demystify AI, and perhaps curb sales of intrusive products such as the Echo, but many people these days are happy to sacrifice privacy for convenience, so there would still be money to be made from these devices.
Regulators have a useful role to play here. They should make sure companies truly anonymize their AI-training datasets, so that linking sensitive data to actual people is impossible rather than contingent on a company’s goodwill or enforcement practices.
It is also up to the authorities to examine the pay and conditions of workers in these new, data-oriented occupations. These people are often treated by the tech industry as nonessential, unqualified and easily replaceable, yet they are doing a job with an emotional and psychological toll that is not well understood.
The reason these workers talk to reporters, despite their nondisclosure agreements, is that they are underappreciated, operating in a gray backwater of the much-glorified tech industry. Their important role should not be a dirty secret.