Everyone has heard of the nightmare scenarios that might come with artificial intelligence (AI). A video showing so-called “slaughterbots” — machines that combine AI, facial recognition and drone technologies to create efficient killing machines — went viral last year. A Stanford University laboratory developed AI that purports to identify gay and lesbian people through facial recognition technology, thereby threatening human and civil rights. AI and machine learning are being used to predict who is likely to become sick, and could be used by health insurance companies to deny coverage.
These examples are a good reminder of why we need to infuse AI with an ethical imperative. Machines will not do that on their own.
Yet AI can also help solve the major problems facing humanity.
Welcome to “AI for social good”: AI can assist in preventing the next infectious disease outbreak, predict devastating wildfires, decrease risks of famine and genocide, stem wildlife poaching and disrupt human trafficking networks.
AI can be used to develop solutions in renewable energy, mitigate climate change and manage traffic. It can assist disaster relief and facilitate sustainable development. Through low-altitude sensors, AI can be deployed to analyze plant damage and help subsistence farmers to increase yields. It can develop and distribute educational modules better tailored for each student’s success. AI can be utilized for urban planning, waste management, crime prevention and the safe maintenance of public infrastructure.
Certainly, finding ways for AI and machine learning to solve humanity’s greatest challenges is a worthy endeavor for those companies brave enough to invest in the technology.
“AI for social good” is the new mantra of this quickly evolving industry, and it has come none too soon. IBM helped introduce this ethics-centric approach, but in the past few months Google has been leading the AI-for-social-good charge as it seeks to repair its image after a series of public relations fiascoes.
In October, Google decided not to bid on the US Department of Defense’s US$10 billion Joint Enterprise Defense Initiative cloud computing project after a protest by its employees. In June, Google backed off on “Project Maven” — a true-life version of the “slaughterbot” video.
Are ethics at last factoring into the technology giant’s business decisions? Don’t hold your breath. Even with all the negative press concerning his company’s “Project Dragonfly” to design a censored search engine for the People’s Republic of China, Google chief executive officer Sundar Pichai appears to be doubling down on the decision to go forward.
While corporations are fundamentally vehicles to maximize wealth for their shareholders, ethics can actually be good for business. The corporate social responsibility movement, social choice options for investment companies and human rights-friendly supply chains have all demonstrated that transnational corporations do not have to put profit above all.
If sustainable development goals are to be met, corporations must be committed and help with implementation. New technologies such as AI, just like the smokestack economies of yesteryear, require that values and principles be applied in production and rollout decisions.
In the Act for Uncrewed Vehicle Technology Innovations and Experiments (無人載具創新實驗條例), which coauthor Jason Hsu sponsored, ethics clauses were included to ensure data privacy and ownership.
In the coming era of AI, algorithmic decisionmaking will have far-reaching consequences. Technology must be held accountable; those who build it must bear potential harms in mind and take preventive measures.
We are creating a council on AI, ethics and law to bring together technologists, philosophers, legal practitioners, engineers and policymakers to develop solutions and address some of these concerns.
With such a focus on algorithmic justice, concerns over data protection and cybersecurity can be addressed, with privacy paramount.
There is also the issue of how data sets are utilized for prediction or decisionmaking. AI must be used to undo reigning prejudices and reverse social inequities, not reinforce them. Governments and corporations must abide by strict principles — whether legislated or through self-regulating organizations — in the use of AI technology.
AI and machine learning bring with them many opportunities to benefit humanity, but also pose significant risks. As “AI for social good” gathers pace, it is time to consider people above profit.
Companies could do well by doing good, and a dystopian future could be averted. The machines they build and the software they code cannot promote ethics by themselves, so humans must develop rules for such advances.
James Cooper is a professor of law at California Western School of Law in San Diego. Jason Hsu is a futurist and a legislator.