Wed, May 22, 2019 - Page 8

Preventing AI abuse requires cooperation

By Su Kuan-pin 蘇冠賓

In junior-high school science class, we assembled radios, motors and even an Apple II computer. We understood the principles behind how those devices worked, and interested students could even write code in different programming languages to perform specific functions.

However, the complexity of computers and applications has advanced far beyond ordinary people’s understanding.

As programs continue to evolve, the computational processes behind social prejudice, human rights violations and even human health risk disappearing into the black box of algorithms. No one will be able to understand them, and once things begin to go awry, no one will be there to take responsibility.

In an article in this month’s issue of Nature, Harvard University law professor Yochai Benkler raised concerns about the development of artificial intelligence (AI).

In AI research, development and innovation, technology companies such as Google and Apple play a decisive role, prevailing over many governments and nonprofit organizations, Benkler wrote.

As businesses direct the development of AI, they will inevitably use their own data and influence in ways that benefit themselves, deciding how their business systems affect society and morals and then building those decisions into their programs.

In the foreseeable future, algorithms will influence every aspect of everyday life: health, insurance, finance, transportation, national defense, law and order, news, politics, advertising and more.

If all these algorithms are designed based on the interests of certain businesses or groups, they will move away from the public interest.

As machine learning algorithms are trained on existing data, future systems could perpetuate unfairness permanently unless people design preventive measures against such bias.

However, when a government does step in to regulate or prevent abuse, it most often sides with those who want to block technological and social progress.

For example, to win votes from taxi drivers and disadvantaged groups, politicians have been blocking Uber and automation.

Tragically, the technologies that politicians are able to understand and block are the mature, stable ones that pose no threat. When it comes to AI’s possible threats to human rights and fairness, politicians are incapable of understanding the implications, let alone devising measures to prevent abuse.

Taiwan has solid foundations in science, technology and education, and the development of AI presents a good opportunity.

If the government does not want to obstruct scientific and technological development, and wants to guide companies to balance their own interests with those of others, it must stop simply imposing laws and instead rely on the humanities, reason, data and science.

For example, government agencies should subsidize independent research by universities and research institutions on the effects of AI technology.

This should not only be the responsibility of the Ministry of Science and Technology, Ministry of Economic Affairs, Ministry of Health and Welfare, and Ministry of Education, but also involve the Ministry of Culture, Ministry of the Interior, Ministry of Foreign Affairs, Ministry of National Defense, Ministry of Justice and others.

The government should also conduct cross-industry and cross-departmental discussions on how to regulate businesses so they share enough data to prevent abusive development of AI.
