The meteoric rise of ChatGPT and GPT-4 has not only set off a new round of technological innovation and business competition centered on generative artificial intelligence (AI), but has also rekindled intense debate about what artificial general intelligence is and whether ChatGPT qualifies as one.
The mind-boggling advancement of GPT-4 over ChatGPT in just four months has prompted some experts to consider whether generative AI technologies might harm society or even humanity.
Some experts have demanded that governments regulate generative AI in the same way they do with technologies such as nuclear fission and human cloning.
Having led the world in safeguarding basic freedoms and human rights, the EU has spearheaded an effort to address regulatory issues surrounding generative AI. So far, it has focused mainly on how to protect personal privacy and reputations from infringement, and how to require generative AI companies to commercially license the data they trawl from the Internet to train their AI models.
Last month, China announced regulatory requirements for domestic generative AI companies. Questions and prompts submitted by users to generative AI services, by default, cannot be used for training without explicit permission to do so, and content produced by generative AI services should reflect the core values of Chinese socialism and cannot be used to subvert the government.
Lawmakers in the US have also recently had intensive discussions on how to regulate the technology, but their focus has been on how to ensure user safety, how to prevent generative AI from being weaponized by criminals and how to build sufficient guardrails to prevent it from destroying human civilization.
Although the regulation of generative AI has multiple facets, perhaps the thorniest issue is ensuring that it never harms society. This concern is rooted mainly in the fact that generative AI has surpassed the capabilities of average people, and yet its “explainability,” or interpretability, is astonishingly poor.
Technically, there are three levels of explainability. An AI technology has first-level explainability if it can clearly pinpoint the elements of an input to its model that have the greatest effect on the corresponding output.
For example, an AI model that evaluates loan applications has first-level explainability if it can point out the factors in a loan application that most affect the outcome the model produces for the applicant.
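To make the idea concrete, the following is a minimal Python sketch of first-level explainability for a linear loan-scoring model; the feature names, weights and applicant values are hypothetical. Because a linear model’s output is just a sum of per-feature contributions, the most influential factors of any given application can be ranked directly:

```python
# Minimal sketch of first-level explainability for a linear scoring model.
# The feature names, weights and applicant values here are hypothetical.

WEIGHTS = {
    "annual_income": 0.4,          # all features pre-normalized to [0, 1]
    "on_time_payment_rate": 0.5,
    "debt_to_income_ratio": -0.3,
}

applicant = {
    "annual_income": 0.6,
    "on_time_payment_rate": 0.9,
    "debt_to_income_ratio": 0.4,
}

# For a linear model, the output is the sum of per-feature contributions
# (weight * value), so ranking |contribution| directly pinpoints the
# factors that most affected this applicant's outcome.
contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
score = sum(contributions.values())

for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>22}: {c:+.2f}")
print(f"{'eligibility score':>22}: {score:+.2f}")
```

For deep neural networks, by contrast, no such direct decomposition of the output exists, which is why even this first level is hard to attain.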
An AI technology has second-level explainability if the underlying complex mathematical model can be distilled into an abstract representation, built from intuitive features and high-level “if-then-else” rules, that is comprehensible to humans.
For example, an AI model that evaluates loan applications could be abstracted as follows: It uses a weighted sum of the applicant’s annual income, the probability of on-time payments on credit cards and the housing mortgage, and the expected percentage increase in the price of the applicant’s house to compute an overall eligibility score.
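Such an abstraction could be written down as follows; this is a hedged sketch with hypothetical weights and thresholds, not how any real lender’s model works. The point is that the distilled surrogate is small enough for a human to read and audit:

```python
# Sketch of a second-level abstraction of a loan-evaluation model: a weighted
# sum of intuitive features feeding a few high-level if-then-else rules.
# All names, weights and thresholds are hypothetical.

def eligibility_score(annual_income, on_time_payment_prob, house_appreciation_pct):
    # Weighted sum of the applicant's intuitive features (each pre-normalized
    # to [0, 1]), as in the abstraction described above.
    return (0.5 * annual_income
            + 0.3 * on_time_payment_prob
            + 0.2 * house_appreciation_pct)

def decision(score):
    # High-level if-then-else rules a human can read and audit.
    if score >= 0.7:
        return "approve"
    elif score >= 0.5:
        return "refer to a human underwriter"
    else:
        return "reject"

s = eligibility_score(annual_income=0.65,
                      on_time_payment_prob=0.9,
                      house_appreciation_pct=0.3)
print(decision(s), f"(score = {s:.2f})")
```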
The third level of explainability concerns a thorough understanding of how the underlying model works, and of what it can and cannot do when pushed to its limits. This level is required to ensure that the model contains no devious logic or mechanism that could produce catastrophic outputs for specific inputs.
For example, when asked how to win a car race, a model with such hidden logic might propose weakening the competition by staging accidents that physically harm opponents.
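As a toy illustration of why this level is so demanding, consider the following hypothetical sketch, which audits a tiny stand-in “model” by enumerating its entire input space. For real generative models the input space is astronomically large, which is exactly why such assurance is currently out of reach:

```python
# Toy sketch of what third-level assurance demands: exhaustively auditing a
# model's behavior for catastrophic outputs. This is feasible only because
# the stand-in "model" below is tiny; everything here is hypothetical.

FORBIDDEN = {"stage accident"}

def race_advisor(aggression: int, budget: int) -> str:
    # A stand-in "model" mapping two small integer inputs to a strategy,
    # with a devious branch that only a thorough audit would find.
    if aggression > 8 and budget < 2:
        return "stage accident"
    if budget >= 5:
        return "upgrade engine"
    return "practice more laps"

# Enumerate the entire (small) input space and flag any devious logic.
violations = [(a, b) for a in range(11) for b in range(11)
              if race_advisor(a, b) in FORBIDDEN]
print(f"catastrophic outputs found for {len(violations)} input(s): {violations}")
```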
No existing generative AI technologies, including ChatGPT, have even first-level explainability.
The reason ChatGPT’s explainability is so poor is that even its creators do not know why, in its current form, it is so powerful across such a diverse set of natural language processing tasks.
It is therefore impossible for them to predict how ChatGPT-like technologies would behave after receiving orders of magnitude more training in five to 10 years.
Imagine that one day ChatGPT does most of the writing and reading of documents in offices and publications, and determines that the quality of its work is significantly higher than that produced by average humans.
In addition, suppose that, from the research it reads, ChatGPT can enhance the training algorithms used to generate its foundational language models, and decides to “grow” itself by creating more powerful models without human involvement.
What would ChatGPT choose to do with its human users when it “feels” increasingly self-sufficient and grows impatient with those it deems clearly inferior?
In a survey of elite machine-learning experts released last year, 48 percent estimated that AI has a 10 percent or higher chance of having a devastating effect on humanity.
However, despite such a high estimated probability of an existential threat, and under fierce commercial and geopolitical competitive pressure, major AI companies’ efforts to advance the frontier of AI technology, as opposed to its explainability, thunder on without any sign of relenting or pausing for introspection.
If governments worldwide put together a set of regulations and intervened as soon as possible, they could at least push AI companies to increase their focus on explainability, hopefully returning the development of AI technology to a healthier, safer and more sustainable path.
Chiueh Tzi-cker is a professor in the Institute of Information Security at National Tsing Hua University.