Thu, May 17, 2018

Internal opposition to Google’s military AI project gains momentum

AFP, SAN FRANCISCO

An internal petition calling for Google to stay out of “the business of war” gained support on Tuesday, with some workers reportedly quitting to protest a collaboration with the US military.

About 4,000 Google employees reportedly signed a petition that began circulating about three months ago urging the Internet giant to refrain from using artificial intelligence (AI) to make US military drones better at recognizing what they are monitoring.

Tech news Web site Gizmodo earlier this week reported that about a dozen Google employees are quitting in protest on ethical grounds.

The California-based company did not immediately respond to inquiries about the initiative, referred to as Project Maven, which reportedly uses machine learning and engineering talent to distinguish people and objects in drone videos for the US Department of Defense.

“We believe that Google should not be in the business of war,” the petition reads, according to copies posted online. “Therefore, we ask that Project Maven be canceled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

The Electronic Frontier Foundation, an Internet rights group, and the International Committee for Robot Arms Control were among those that weighed in with support.

While reports indicated that AI findings would be reviewed by human analysts, the technology could pave the way for automated targeting systems on armed drones, the committee said in an open letter of support to Google employees opposing the project.

“As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems,” it said in the letter. “We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control.”

Google has gone on the record saying that its work to improve machines’ ability to recognize objects is not for offensive uses, but published documents show a “murkier” picture, the foundation’s Cindy Cohn and Peter Eckersley said in an online post last month.

“If our reading of the public record is correct, systems that Google is supporting or building would flag people or objects seen by drones for human review, and in some cases this would lead to subsequent missile strikes on those people or objects,” they said. “Those are hefty ethical stakes, even with humans in the loop further along the ‘kill chain.’”

“The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety,” Cohn and Eckersley said.

“Companies like Google, as well as their counterparts around the world, must consider the consequences and demand real accountability and standards of behavior from the military agencies that seek their expertise — and from themselves,” they said.
