Everyone has heard the nightmare scenarios that might come with artificial intelligence (AI). A video showing so-called “slaughterbots” — machines that combine AI, facial recognition and drone technologies to create efficient killing machines — went viral last year. A Stanford University laboratory developed AI that can identify gay and lesbian people through facial recognition technology, thereby threatening human and civil rights. AI and machine learning are being used to predict who is likely to become sick and could be used by health insurance companies to deny coverage.
These examples are a good reminder of why we need to infuse AI with an ethical imperative. Machines will not do that on their own.
Yet AI can also help solve the major problems facing humanity.
Welcome to “AI for social good”: AI can assist in preventing the next infectious disease outbreak, predict devastating wildfires, decrease risks of famine and genocide, stem wildlife poaching and disrupt human trafficking networks.
AI can be used to develop solutions in renewable energy, mitigate climate change and manage traffic. It can assist disaster relief and facilitate sustainable development. Through low-altitude sensors, AI can be deployed to analyze plant damage and help subsistence farmers to increase yields. It can develop and distribute educational modules better tailored for each student’s success. AI can be utilized for urban planning, waste management, crime prevention and the safe maintenance of public infrastructure.
Certainly, finding ways for AI and machine learning to solve humanity’s greatest challenges is a worthy pursuit for those companies brave enough to invest in the technology.
“AI for social good” is the new mantra for this quickly evolving industry and it has come none too soon. IBM helped introduce this new ethics-centric approach, but in the past few months, Google has been leading the AI-for-social-good charge on the heels of repairing its image after a series of public relations fiascoes.
In October, Google decided not to bid on the US Department of Defense’s US$10 billion Joint Enterprise Defense Initiative cloud computing project after a protest by its employees. In June, Google backed off on “Project Maven” — a true-life version of the “slaughterbot” video.
Are ethics at last factoring into the technology giant’s business decisions? Don’t hold your breath. Even with all the negative press concerning his company’s “Project Dragonfly” to design a censored search engine for the People’s Republic of China, Google chief executive officer Sundar Pichai appears to be doubling down on the decision to go forward.
While corporations are fundamentally vehicles to maximize wealth for their shareholders, ethics can actually be good for business. The corporate social responsibility movement, social choice options for investment companies and human rights-friendly supply chains have all demonstrated that transnational corporations do not have to put profit above all.
If sustainable development goals are to be met, corporations must be committed and help with implementation. New technologies such as AI, just like the smokestack economies of yesteryear, require that values and principles be applied in production and rollout decisions.
In the Act for Uncrewed Vehicle Technology Innovations and Experiments (無人載具創新實驗條例), which co-author Jason Hsu sponsored, ethics clauses were included to ensure data privacy and ownership.
In this coming era of AI, algorithmic decision-making will drive consequential outcomes. Technology must be held accountable; those who build it must bear in mind its potential harms and take preventive measures.
We are creating a council on AI, ethics and law to bring together technologists, philosophers, legal practitioners, engineers and policymakers to develop solutions and address some of these concerns.
With such a focus on algorithmic justice, concerns about data protection and cybersecurity can be addressed, with privacy kept paramount.
There is also the issue of how data sets are utilized for prediction or decision-making. AI must be used to undo reigning prejudices and reverse social inequities, not reinforce them. Governments and corporations must abide by strict principles — whether legislated or set by self-regulating organizations — in the use of AI technology.
AI and machine learning bring with them many opportunities to benefit humanity, but also pose significant risks. As “AI for social good” gathers pace, it is time to consider people above profit.
Companies could do well by doing good. A dystopian future could be averted. The machines they build and the software they code cannot promote ethics by themselves, so humans must develop rules for such advances.
James Cooper is a professor of law at California Western School of Law in San Diego. Jason Hsu is a futurist and a legislator.