Tue, Apr 23, 2019

Who should decide how autonomous vehicle algorithms decide?

By Mark Esposito, Joshua Entsminger, Terence Tse and Aurelie Jean

Over the past few years, the Massachusetts Institute of Technology-hosted “Moral Machine” study has surveyed public preferences regarding how artificial intelligence (AI) applications should behave in various settings.

One conclusion from the data is that when an autonomous vehicle (AV) encounters a life-or-death scenario, how one thinks it should respond depends largely on where one is from, and what one knows about the pedestrians or passengers involved.

For example, in an AV version of the classic “trolley problem,” some might prefer that the vehicle strike a convicted murderer before harming others, or that it hit a senior citizen before a child.

Still others might argue that the AV should simply roll the dice so as to avoid data-driven discrimination. Generally, such quandaries are reserved for courtrooms or police investigations after the fact.
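To make the contrast concrete, here is a minimal sketch in Python of the two policies just described. Every name, attribute and weight is invented purely for illustration; no real AV system is known to work this way.

    import random
    from dataclasses import dataclass

    @dataclass
    class Person:
        age: int
        criminal_record: bool  # a hypothetical attribute the vehicle might know

    def attribute_weighted_choice(candidates: list) -> Person:
        # Deterministic policy: strike whoever scores lowest on an
        # invented "protection" scale built from personal attributes.
        def protection(p: Person) -> float:
            score = 1.0
            if p.criminal_record:
                score -= 0.5  # prefer striking a convicted offender
            if p.age < 18:
                score += 0.5  # prefer sparing a child
            return score
        return min(candidates, key=protection)

    def random_choice(candidates: list) -> Person:
        # "Roll the dice": a uniform draw that ignores personal data entirely.
        return random.choice(candidates)

The first policy is only as defensible as the weights someone chose; the second avoids discrimination by refusing to use the data at all.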

However, in the case of AVs, choices would be made in a matter of milliseconds, which is not nearly enough time to reach an informed decision. What matters is not what we know, but what the vehicle knows.

The question, then, is what information AVs should have about the people around them, and whether firms should be allowed to offer different ethical systems in pursuit of a competitive advantage.

Consider the following scenario: A vehicle from China has different factory standards than a vehicle from the US, but is shipped to and used in the US. This Chinese-made vehicle and a US-made vehicle are heading for an unavoidable collision. If the Chinese vehicle’s driver has different ethical preferences than the driver of the US vehicle, which system should prevail?

Beyond culturally based differences in ethical preferences, one also must consider differences in data regulations across countries.

A Chinese-made vehicle, for example, might have access to social-scoring data, allowing its decisionmaking algorithm to incorporate additional inputs that are unavailable to US automakers. Richer data could lead to better, more consistent decisions, but should that advantage allow one system to overrule another?
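One way to picture that asymmetry is a decision function with an optional input that only one regulatory regime makes available. The function name, the social_score parameter and the formula below are all assumptions made for illustration, not a description of any actual system.

    from typing import Optional

    def sparing_priority(age: int, social_score: Optional[float] = None) -> float:
        # Higher value means the algorithm tries harder to spare this person.
        base = 1.0 / max(age, 1)  # toy proxy that weights youth more heavily
        if social_score is not None:
            # Richer data regime: an extra input reshuffles the ranking.
            return base * (0.5 + social_score)
        # Stricter data regime: the same decision made on less information.
        return base

Two vehicles running this same function would rank the same people differently simply because one of them is permitted an input the other is not.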

Clearly, before AVs take to the road en masse, we would need to establish where responsibility for algorithmic decisionmaking lies, be it with municipal authorities, national governments or multilateral institutions.

More than that, we would need new frameworks for governing this intersection of business and the state.

At issue is not just what AVs will do in extreme scenarios, but how businesses will interact with different cultures in developing and deploying decisionmaking algorithms.

It is easy to imagine that AV manufacturers would simply advertise ethical systems that prize the life of the driver above all else, or that allow users to toggle their own ethical settings. Yet if every vehicle protected its own occupants at any cost, the roads as a whole could become more dangerous even as each individual buyer felt safer.

To prevent this “tragedy of the commons,” there would have to be frameworks for establishing communication and coordinating decisions between AVs.
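At its simplest, such a framework might be a handshake in which vehicles declare their policies and, on disagreement, fall back to a shared default. The sketch below is one hypothetical form of that idea; every identifier and policy name is assumed for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Intent:
        vehicle_id: str
        policy: str           # e.g. "protect_occupants" or "minimize_total_harm"
        proposed_action: str  # e.g. "brake" or "swerve_left"

    def coordinate(a: Intent, b: Intent) -> str:
        # If both vehicles declare the same policy, defer deterministically
        # so each computes the identical joint decision.
        if a.policy == b.policy:
            leader = a if a.vehicle_id < b.vehicle_id else b
            return leader.proposed_action
        # Policies conflict: neither side may unilaterally privilege its own
        # occupants, so both fall back to a pre-agreed, commons-preserving rule.
        return "minimize_total_harm"

The point of the deterministic tie-break is that both vehicles reach the same answer without a negotiation for which, in a millisecond collision window, there is no time.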

However, in developing such systems across different cultural contexts, policymakers and businesses would come face to face with different cultural notions of sovereignty, privacy and individual autonomy.

This poses additional challenges, because AI systems do not tolerate ambiguity. Designing an AI application from scratch requires deep specificity; for better or worse, these systems do only what you tell them to do.
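That intolerance can be made concrete: a decision table covers only the cases its designers enumerated, and anything outside it can only fail loudly or fall back to a designed default. A sketch, with invented scenario names:

    def respond(scenario: str) -> str:
        responses = {
            "pedestrian_ahead": "brake",
            "obstacle_left": "swerve_right",
            "obstacle_right": "swerve_left",
        }
        if scenario not in responses:
            # The case the designers never specified: the system cannot
            # improvise a judgment the way a human driver would.
            raise ValueError("unspecified scenario: " + scenario)
        return responses[scenario]

Whatever ethical framework regulators settle on, it must ultimately be spelled out at this level of specificity.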
