Over the past few years, the Massachusetts Institute of Technology-hosted “Moral Machine” study has surveyed public preferences regarding how artificial intelligence (AI) applications should behave in various settings.
One conclusion from the data is that when an autonomous vehicle (AV) encounters a life-or-death scenario, how one thinks it should respond depends largely on where one is from, and what one knows about the pedestrians or passengers involved.
For example, in an AV version of the classic “trolley problem,” some might prefer that the vehicle strike a convicted murderer before harming others, or that it hit a senior citizen before a child.
Still others might argue that the AV should simply roll the dice so as to avoid data-driven discrimination. Generally, such quandaries are reserved for courtrooms or police investigations after the fact.
However, in the case of AVs, choices would be made in a matter of milliseconds, which is not nearly enough time for any human involved to reach an informed decision. What matters is not what we know, but what the vehicle knows.
The question, then, is what information AVs should have about the people around them, and whether firms should be allowed to offer different ethical systems in pursuit of a competitive advantage.
Consider the following scenario: A vehicle manufactured in China is built to different factory standards from one manufactured in the US, but is shipped to and used in the US. This Chinese-made vehicle and a US-made vehicle are heading for an unavoidable collision. If the Chinese vehicle’s driver has different ethical preferences from the driver of the US vehicle, which system should prevail?
Beyond culturally based differences in ethical preferences, one also must consider differences in data regulations across countries.
A Chinese-made vehicle, for example, might have access to social-scoring data, allowing its decisionmaking algorithm to incorporate additional inputs that are unavailable to US automakers. Richer data could lead to better, more consistent decisions, but should that advantage allow one system to overrule another?
Clearly, before AVs take to the road en masse, we would need to establish where responsibility for algorithmic decisionmaking lies, be it with municipal authorities, national governments or multilateral institutions.
More than that, we would need new frameworks for governing this intersection of business and the state.
At issue is not just what AVs will do in extreme scenarios, but how businesses will interact with different cultures in developing and deploying decisionmaking algorithms.
It is easy to imagine AV manufacturers simply advertising ethical systems that prize the life of the driver above all else, or that allow the user to toggle their own ethical settings.
To prevent this “tragedy of the commons,” there would have to be frameworks for establishing communication and coordinating decisions between AVs.
However, in developing such systems across different cultural contexts, policymakers and businesses would come face to face with different cultural notions of sovereignty, privacy and individual autonomy.
This poses additional challenges, because AI systems do not tolerate ambiguity. Designing an AI application from scratch requires deep specificity; for better or worse, these systems do only what you tell them to do.
That means firms, governments and other providers would need to make explicit choices when coding response protocols for varying situations.
Yet before that happens, policymakers would need to establish the scope of algorithmic accountability, to determine what, if any, decisions should be left to businesses or individuals. Those that fall within the remit of the state would have to be debated. Given that such ethical and moral questions do not have easy answers, a consensus is unlikely to emerge.
Barring an ultimate resolution, we would need to create systems that at least facilitate communication between AVs and adjudicate algorithmic disputes and roadway incidents.
Given the need for specificity in designing decisionmaking algorithms, it stands to reason that an international body would be needed to set the standards according to which moral and ethical dilemmas are resolved.
AVs, after all, are just one application of algorithmic decisionmaking. Looking ahead, standards of algorithmic accountability would have to be managed across many domains.
Ultimately, the first question we must decide is whether firms have a right to design alternative ethical frameworks for algorithmic decisionmaking. We would argue that they do not.
In an age of AI, some components of global value chains would end up being automated as a matter of course, at which point they would no longer be regarded as areas for firms to pursue a competitive edge.
The process for determining and adjudicating algorithmic accountability should be one such area.
One way or another, decisions will be made. It is better that they be settled uniformly and as democratically as possible.
Mark Esposito is a cofounder of Nexus FrontierTech and a fellow at the Mohammed Bin Rashid School of Government in Dubai and Judge Business School at the University of Cambridge. Terence Tse, a professor at ESCP Europe Business School in London, is a cofounder of Nexus FrontierTech. Joshua Entsminger is a researcher at Nexus FrontierTech and a senior fellow at Ecole des Ponts Center for Policy and Competitiveness. Aurelie Jean is the founder of In Silico Veritas and an adviser for the Boston Consulting Group.
Copyright: Project Syndicate