In junior-high school science class, we assembled radios, motors and even an Apple II computer. We understood the principles behind how those devices worked, and interested students could even write code in different programming languages to perform specific functions.
However, the complexity of computers and applications has advanced far beyond ordinary people’s understanding.
As programs continue to evolve, the computational processes that shape social prejudice, human rights and even human health are disappearing into the black box of algorithms. No one will be able to understand them, and once things begin to go awry, there will be no one to take responsibility.
In an article in this month's issue of Nature, Harvard University law professor Yochai Benkler raised concerns about the development of artificial intelligence (AI).
Technology companies such as Google and Apple play a decisive role in AI research, development and innovation, prevailing over many governments and nonprofit organizations, Benkler wrote.
As businesses direct the development of AI, they will inevitably use their own data and influence in ways that benefit themselves: They determine the effects of their systems on society and ethics, and then build those judgments into their programs.
In the foreseeable future, algorithms will influence every aspect of everyday life: health, insurance, finance, transportation, national defense, law and order, news, politics, advertising and more.
If all these algorithms are designed based on the interests of certain businesses or groups, they will move away from the public interest.
As machine learning algorithms are trained on existing data, future systems could become permanently unfair unless safeguards against abuse are designed into them.
However, when a government involves itself in regulation or abuse prevention, it most often sides with those who would block technological and social progress.
For example, to win votes from taxi drivers and disadvantaged groups, politicians have been blocking Uber and automation.
Tragically, the technologies that politicians are able to understand and block are the mature, stable ones that pose no threat. When it comes to AI's potential threats to human rights and fairness, politicians are incapable of understanding the implications, let alone devising measures to prevent abuse.
Taiwan has solid foundations in science, technology and education, and the development of AI presents a good opportunity.
If the government does not want to stand in the way of scientific and technological development, and wants to guide companies toward a balance between their own interests and those of others, it must stop simply imposing laws and instead draw on the humanities, reason, data and science.
For example, government agencies should subsidize independent research by universities and research institutions on the effects of AI technology.
This should not only be the responsibility of the Ministry of Science and Technology, Ministry of Economic Affairs, Ministry of Health and Welfare, and Ministry of Education, but also involve the Ministry of Culture, Ministry of the Interior, Ministry of Foreign Affairs, Ministry of National Defense, Ministry of Justice and others.
The government should also conduct cross-industry and cross-departmental discussions on how to regulate businesses so they share enough data to prevent abusive development of AI.
Su Kuan-pin is a professor and director of China Medical University’s College of Medicine and Mind-Body Interface Research Center.
Translated by Lin Lee-kai