AI ‘godfather’ concerned about China

BIG BROTHER SCENARIO: Yoshua Bengio said he is scared the technology he helped to create is being used to control people when it should instead be highly regulated

Sun, Feb 03, 2019 - Page 16

Yoshua Bengio, a Canadian computer scientist who helped pioneer the techniques underpinning much of the current excitement around artificial intelligence (AI), said he is worried about China’s use of AI for surveillance and political control.

Bengio, who is also a cofounder of Montreal-based software company Element AI, said he was concerned about the technology he helped create being used to control people’s behavior and influence their minds.

“This is the 1984 Big Brother scenario,” he said in an interview. “I think it’s becoming more and more scary.”

Bengio, a professor at the University of Montreal, is considered one of the three “godfathers” of deep learning, along with Yann LeCun and Geoff Hinton.

It is a technology that uses neural networks — a kind of software loosely based on the human brain — to make predictions based on data. It is responsible for recent advances in facial recognition, natural language processing, translation and recommendation algorithms.

Deep learning requires a large amount of data to provide examples from which to learn, but China, with its vast population and system of state record-keeping, has a lot of that.

The Chinese government has begun using closed-circuit video cameras and facial recognition to monitor what its citizens do in public, from jaywalking to engaging in political dissent. It has also created a National Credit Information Sharing Platform, which is being used to blacklist rail and air passengers for “anti-social” behavior and is considering expanding uses of this system to other situations.

“The use of your face to track you should be highly regulated,” Bengio said.

Bengio is not alone in his concern about China’s uses of AI. Billionaire George Soros used a speech at the World Economic Forum on Jan. 24 to highlight the risks the country’s use of AI poses to civil liberties and minority rights.

Unlike some peers, Bengio, who heads the Montreal Institute for Learning Algorithms, has resisted the temptation to work for a large, advertising-driven technology company.

Responsible development of AI might require some large technology companies to change the way they operate, he said.

The amount of data large tech companies control is also a concern.

The creation of data trusts — non-profit entities or legal frameworks under which people own their data and allow it to be used only for certain purposes — might be one solution, Bengio said.

If a trust held enough data, it could negotiate better terms with big tech companies that needed that data, he said on Thursday during a talk at Amnesty International UK’s office in London.

There were many ways deep-learning software could be used for good, Bengio said.

In Thursday’s talk, he unveiled a project he is working on that uses AI to create augmented-reality images depicting what people’s individual homes or neighborhoods might look like as the result of natural disasters spawned by climate change.

However, he said there was also a risk that the implementation of AI would cause job losses on a scale, and at a speed, different from what has happened with other technological innovations.

Governments need to be proactive in thinking about these risks, including considering new ways to redistribute wealth within society, he said.

“Technology, as it gets more powerful, outside of other influences, just leads to more concentration of power and wealth,” Bengio said. “That is bad for democracy, that is bad for social justice and the general well-being of most people.”