Everyone has heard the nightmare scenarios that might come with artificial intelligence (AI). A video showing so-called “slaughterbots” — machines that combine AI, facial recognition and drone technologies to create efficient killing machines — went viral last year. A Stanford University laboratory developed AI that can identify gay and lesbian people through facial recognition technology, thereby threatening human and civil rights. AI and machine learning are being used to predict who is likely to become sick and could be used by health insurance companies to deny coverage.
These examples are a good reminder of why we need to infuse AI with an ethical imperative. Machines will not do that on their own.
Yet AI can also help solve the major problems facing humanity.
Welcome to “AI for social good”: AI can assist in preventing the next infectious disease outbreak, predict devastating wildfires, decrease risks of famine and genocide, stem wildlife poaching and disrupt human trafficking networks.
AI can be used to develop solutions in renewable energy, mitigate climate change and manage traffic. It can assist disaster relief and facilitate sustainable development. Through low-altitude sensors, AI can be deployed to analyze plant damage and help subsistence farmers to increase yields. It can develop and distribute educational modules better tailored for each student’s success. AI can be utilized for urban planning, waste management, crime prevention and the safe maintenance of public infrastructure.
Certainly, finding ways for AI and machine learning to solve humanity’s greatest challenges is a worthy pursuit for those companies brave enough to invest in the technology.
“AI for social good” is the new mantra for this quickly evolving industry, and it has come none too soon. IBM helped introduce this ethics-centric approach, but in the past few months Google has been leading the AI-for-social-good charge as it seeks to repair its image after a series of public relations fiascoes.
In October, Google decided not to bid on the US Department of Defense’s US$10 billion Joint Enterprise Defense Initiative cloud computing project after a protest by its employees. In June, Google backed away from “Project Maven” — a real-life echo of the “slaughterbots” video.
Are ethics at last factoring into the technology giant’s business decisions? Don’t hold your breath. Even with all the negative press concerning his company’s “Project Dragonfly” to design a censored search engine for the People’s Republic of China, Google chief executive officer Sundar Pichai appears to be doubling down on the decision to go forward.
While corporations are fundamentally vehicles to maximize wealth for their shareholders, ethics can actually be good for business. The corporate social responsibility movement, social choice options for investment companies and human rights-friendly supply chains have all demonstrated that transnational corporations do not have to put profit above all.
If sustainable development goals are to be met, corporations must be committed and help with implementation. New technologies such as AI, just like the smokestack economies of yesteryear, require that values and principles be applied in production and rollout decisions.
In the Act for Uncrewed Vehicle Technology Innovations and Experiments (無人載具創新實驗條例), which coauthor Jason Hsu sponsored, ethics clauses were included to ensure data privacy and ownership.
In this coming era of AI, algorithmic decisionmaking will shape outcomes across society. Technology must be held accountable; those who build it must bear in mind its potential harms and take preventive measures.
We are creating a council on AI, ethics and law to bring together technologists, philosophers, legal practitioners, engineers and policymakers to develop solutions and address some of these concerns.
With such a focus on algorithmic justice, concerns over data protection and cybersecurity can be addressed, with privacy kept paramount.
There is also the issue of how data sets are used for prediction and decisionmaking. AI must be used to undo prevailing prejudices and reverse social inequities, not reinforce them. Governments and corporations must abide by strict principles — whether legislated or set by self-regulating organizations — in the use of AI technology.
AI and machine learning bring with them many opportunities to benefit humanity, but also pose significant risks. As “AI for social good” gathers pace, it is time to consider people above profit.
Companies could do well by doing good. A dystopian future could be averted. The machines they build and the software they code cannot promote ethics by themselves, so humans must develop rules for such advances.
James Cooper is a professor of law at California Western School of Law in San Diego. Jason Hsu is a futurist and a legislator.