Over the past few years, the Massachusetts Institute of Technology-hosted “Moral Machine” study has surveyed public preferences regarding how artificial intelligence (AI) applications should behave in various settings.
One conclusion from the data is that when an autonomous vehicle (AV) encounters a life-or-death scenario, how one thinks it should respond depends largely on where one is from, and what one knows about the pedestrians or passengers involved.
For example, in an AV version of the classic “trolley problem,” some might prefer that the vehicle strike a convicted murderer before harming others, or that it hit a senior citizen before a child.
Still others might argue that the AV should simply roll the dice so as to avoid data-driven discrimination. Generally, such quandaries are reserved for courtrooms or police investigations after the fact.
However, in the case of AVs, choices would be made in a matter of milliseconds, which is not nearly enough time for a human to reach an informed decision. What matters is not what we know, but what the vehicle knows.
The questions, then, are what information AVs should have about the people around them, and whether firms should be allowed to offer different ethical systems in pursuit of a competitive advantage.
Consider the following scenario: A vehicle from China has different factory standards than a vehicle from the US, but is shipped to and used in the US. This Chinese-made vehicle and a US-made vehicle are heading for an unavoidable collision. If the Chinese vehicle’s driver has different ethical preferences than the driver of the US vehicle, which system should prevail?
Beyond culturally based differences in ethical preferences, one also must consider differences in data regulations across countries.
A Chinese-made vehicle, for example, might have access to social-scoring data, allowing its decisionmaking algorithm to incorporate additional inputs that are unavailable to US automakers. Richer data could lead to better, more consistent decisions, but should that advantage allow one system to overrule another?
Clearly, before AVs take to the road en masse, we would need to establish where responsibility for algorithmic decisionmaking lies, be it with municipal authorities, national governments or multilateral institutions.
More than that, we would need new frameworks for governing this intersection of business and the state.
At issue is not just what AVs will do in extreme scenarios, but how businesses will interact with different cultures in developing and deploying decisionmaking algorithms.
It is easy to imagine that all AV manufacturers would simply advertise ethical systems that prize the life of the driver above all else, or that allow the user to toggle their own ethical settings.
To prevent this “tragedy of the commons,” in which every vehicle protecting its own occupants leaves everyone on the road collectively worse off, there would have to be frameworks for establishing communication and coordinating decisions between AVs.
However, in developing such systems across different cultural contexts, policymakers and businesses would come face to face with different cultural notions of sovereignty, privacy and individual autonomy.
This poses additional challenges, because AI systems do not tolerate ambiguity. Designing an AI application from scratch requires deep specificity; for better or worse, these systems do only what you tell them to do.
That means firms, governments and other providers would need to make explicit choices when coding response protocols for varying situations.
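To see what that explicitness entails, consider a purely hypothetical sketch in Python. Nothing here is drawn from any real AV system; the function name, labels and weights are invented for illustration. The point is that an algorithm cannot weigh lives in the abstract: whoever writes the code has already ranked them.

# Hypothetical illustration only: a toy response protocol in which every
# ethical choice must be spelled out in advance by whoever writes the code.
def choose_path(group_a, group_b):
    """Return which group the vehicle should avoid harming."""
    # Invented, contestable weights: protect children first, then adults,
    # then seniors. A different firm or country might rank them differently.
    weights = {"child": 3, "adult": 2, "senior": 1}

    def score(group):
        return sum(weights.get(person, 2) for person in group)

    # The protocol's verdict follows mechanically from the coded weights.
    return "avoid group A" if score(group_a) >= score(group_b) else "avoid group B"

print(choose_path(["child"], ["adult"]))  # prints "avoid group A"

Change a single number in that table and the vehicle's behavior changes with it, which is precisely why such choices cannot be left implicit.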
Yet before that happens, policymakers would need to establish the scope of algorithmic accountability, to determine which decisions, if any, should be left to businesses or individuals. Those that fall within the remit of the state would have to be debated. Given that such ethical and moral questions do not have easy answers, a consensus is unlikely to emerge.
Barring an ultimate resolution, we would need to create systems that at least facilitate communication between AVs and adjudicate algorithmic disputes and roadway incidents.
Given the need for specificity in designing decisionmaking algorithms, it stands to reason that an international body would be needed to set the standards according to which moral and ethical dilemmas are resolved.
AVs, after all, are just one application of algorithmic decisionmaking. Looking ahead, standards of algorithmic accountability would have to be managed across many domains.
Ultimately, the first question we must decide is whether firms have a right to design alternative ethical frameworks for algorithmic decisionmaking. We would argue that they do not.
In an age of AI, some components of global value chains would end up being automated as a matter of course, at which point they would no longer be regarded as areas for firms to pursue a competitive edge.
The process for determining and adjudicating algorithmic accountability should be one such area.
One way or another, decisions will be made. It is better that they be settled uniformly and as democratically as possible.
Mark Esposito is a cofounder of Nexus FrontierTech and a fellow at the Mohammed Bin Rashid School of Government in Dubai and Judge Business School at the University of Cambridge. Terence Tse, a professor at ESCP Europe Business School in London, is a cofounder of Nexus FrontierTech. Joshua Entsminger is a researcher at Nexus FrontierTech and a senior fellow at Ecole des Ponts Center for Policy and Competitiveness. Aurelie Jean is the founder of In Silico Veritas and an adviser for the Boston Consulting Group.
Copyright: Project Syndicate