As the use of facial recognition continues to spread around the world, governments have been slow to regulate the deployment of this and other technologies that use artificial intelligence (AI).
Over the past half year, a few regulatory models have emerged.
There is the incentivist model, which the authorities in the People’s Republic of China (PRC) are pursuing as they invest in state-owned enterprises and for-profit technology companies to dominate the industry and gain strategic advantage.
The PRC uses AI technology to surveil its own citizens, and even employs it in the much-maligned social credit program it is developing.
On the other end of the spectrum is a more restrictionist model, pursued by some municipal governments in the US, which curtails the use of facial recognition technology in police investigations and municipal surveillance programs.
Many countries are somewhere between these two models.
There are many ethical issues that come with such emerging technology. While AI is terrific at speeding up processing, it cannot be trusted to be fair, let alone neutral, particularly in the criminal justice context.
Data sets that concern human behavior can be susceptible to bias. Machines cannot factor in racial or other human rights sensitivities. They could replicate human bias, including racism, homophobia and other forms of discrimination.
As an example, a 2017 Stanford University laboratory study developed AI to identify gay and lesbian people. Such technology could easily become a dangerous tool in the hands of state authorities in Brunei, Iran or the many others that legislate against gays and lesbians.
Facial recognition also erodes the right to privacy.
That is in part the reason why some municipalities in the US are putting a hold on the use of AI in criminal and administrative matters.
In May, San Francisco banned the use of facial recognition technology by law enforcement and other departments. In June, the city council of Somerville, Massachusetts, followed suit when it voted 11-0 to ban the use of facial recognition technology. In July, Oakland, California, banned the use of facial recognition technologies by local government agencies, becoming the third city in the US to tackle the issue of facial surveillance head-on.
These cities are concerned about the ethics of AI and machine learning. In the absence of state or federal guidance, cities are doing the bulk of the legislative work. A few countries and two regional organizations have also gotten in on the action.
While not binding, a plethora of ethical guidelines have been unveiled over the past half year as these governments navigate how best to deal with AI without disrupting their respective national industrial policies.
It is not easy to balance data privacy, cybersecurity concerns and the desire to gain commercial strategic advantage in new technology industries.
There now appears to be not just a competition among the countries that wish to dominate AI, but also among their would-be regulators.
Australia has its own draft code on ethics for AI. The UK government has a plan, too. Not to be outdone, in the middle of the Brexit debacles, the EU has its own code, released in April.
Even the country that is defending its social credit program, the PRC, has its own principles. The Beijing AI Principles, released by the Beijing Academy of Artificial Intelligence, an organization supported by the Chinese Ministry of Science and Technology and the Beijing municipal government, offer 15 principles that call for AI to be beneficial and responsible.
Developed in collaboration with Peking University, Tsinghua University and the Chinese Academy of Sciences’ Institute of Automation and Institute of Computing Technology, the principles also have the support of China’s three big tech firms: Baidu, Alibaba and Tencent.
The Organisation for Economic Co-operation and Development (OECD) should not be counted out.
From the institution that gave the world the Guidelines for Multinational Enterprises comes the OECD Principles on Artificial Intelligence. Some 42 countries have signed on to these policy guidelines, which aim to ensure that AI systems are safe, fair and trustworthy.
While not legally binding, OECD principles in other policy areas have proved highly influential in setting international standards and helping governments design national legislation. These principles could carry real moral authority and lay the groundwork for customary international law in this area.
It is no surprise that the World Economic Forum wants to develop its own policy guidelines, too.
If sovereign states cannot get the norms and rules right, Big Tech is at the ready to step in and self-regulate. Google’s AI for Social Good initiative is a case in point, but that tech behemoth has lost much credibility after it was fined for illegally tracking the YouTube preferences of minors while earning advertising revenue from them.
Neither fully transparent nor timely in its disclosure, Google had to shut down Google+ after a security bug dating back to 2015 allowed third-party developers to access user profile data.
Self-regulating organizations made up of Big Tech firms could fill the vacuum where elected officials and international financial institutions have so far offered only policy guidelines rather than legally enforceable standards.
Taiwan has a wonderful opportunity with its “Taiwan AI Action Plan” to not only develop smart technology, but to facilitate an ethical approach to AI development and deployment.
There is a wide-open legislative space between the incentivist and restrictionist models to demonstrate how to best regulate AI.
James Cooper is a professor of law at the California Western School of Law in San Diego and directs its international studies program.