The technical foundation of ChatGPT is a large language model (LLM), which at its heart is a “next word” prediction engine: Given a preceding word sequence, it constructs a probability distribution for the immediate next word based on a training text corpus.
The corpus used to train such a large-scale language model typically consists of documents collected from both online and offline sources, including Web pages, books, periodicals, one-off publications, e-mails and instant messages.
During training, each document in the corpus is scanned, word by word, from the beginning to the end.
When a word X is scanned, the words preceding X serve as the contextual sequence, X is treated as the prediction target, and a training data pair (contextual text sequence, prediction target) is formed.
As the scan proceeds, myriad such pairs are formed. From these pairs, a neural network learns a mathematical model of the correlation between contextual text sequences and the words that immediately follow them. The beauty of this approach to building language models is that its training data requires no manual labeling.
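As a rough illustration of this pair-harvesting-and-counting idea, the following Python sketch builds a toy next-word model from a handful of sentences. It is only an analogy: a real system such as ChatGPT learns these distributions with a deep neural network over subword tokens rather than raw counts, and the function and variable names below are my own invention.

```python
from collections import Counter, defaultdict

def build_next_word_model(corpus_sentences, context_size=2):
    """Toy sketch: harvest (contextual sequence, next word) pairs from text
    and turn the counts into next-word probability distributions."""
    counts = defaultdict(Counter)
    for sentence in corpus_sentences:
        words = sentence.split()
        # Slide over the sentence: the words before position i are the context,
        # and the word at position i is the prediction target.
        for i in range(context_size, len(words)):
            context = tuple(words[i - context_size:i])
            counts[context][words[i]] += 1
    # Normalize the raw counts into a probability distribution per context.
    model = {}
    for context, next_words in counts.items():
        total = sum(next_words.values())
        model[context] = {w: c / total for w, c in next_words.items()}
    return model

toy_corpus = [
    "the band played hotel california",
    "the band played the song",
]
model = build_next_word_model(toy_corpus)
print(model[("band", "played")])  # {'hotel': 0.5, 'the': 0.5}
```

Note that no human labels anything here: the "answer" for each training pair is simply the word that actually comes next in the text.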
Although ChatGPT is based on word-by-word prediction, the quality of its responses to user prompts is surprisingly high: Most of the sentences it produces are grammatically correct, semantically relevant and structurally fluent, and they sometimes even feature fresh, though not necessarily correct, new ideas.
As far as reading comprehension is concerned, ChatGPT seems able to extract the key ideas from an individual article, to compare and contrast the ideas presented in multiple articles, and even to synthesize novel ideas for situations that are similar, but not identical, to those explored in the training corpus.
Precisely because ChatGPT generates each word of its response to a user prompt by consulting the "text sequence to word" prediction model, the narratives included in the response can sometimes contain factual errors or even be completely fabricated.
For example, when asked about the Eagles' works, ChatGPT might quote a fragment of the lyrics of their famous song "Hotel California," and the quotation included in the response might turn out to be ChatGPT's own fabrication.
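To see how such a fabricated "quotation" can emerge, the toy model above can be run the other way: start from a seed word, repeatedly sample a next word from the learned distributions, and append it. A minimal sketch of that generation loop follows, with hypothetical probabilities and a one-word context chosen purely for brevity; it is not how OpenAI's system is actually implemented.

```python
import random

# Hypothetical next-word distributions, purely for illustration.
toy_model = {
    ("welcome",): {"to": 1.0},
    ("to",): {"the": 1.0},
    ("the",): {"hotel": 0.5, "lovely": 0.5},
    ("hotel",): {"room": 0.6, "lobby": 0.4},
    ("lovely",): {"place": 1.0},
}

def generate(model, seed, max_words=5):
    """Emit text word by word by sampling from the next-word distributions.
    The loop follows only statistical word associations; it has no way of
    checking whether the resulting line ever appeared in the real lyrics."""
    words = [seed]
    for _ in range(max_words):
        context = (words[-1],)
        if context not in model:
            break
        candidates = model[context]
        next_word = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        words.append(next_word)
    return " ".join(words)

print(generate(toy_model, "welcome"))  # might print "welcome to the hotel lobby"
```

The output is fluent and looks like a lyric, yet nothing in the loop guarantees that the line was ever actually written.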
Nevertheless, it is remarkable that, simply by extracting and applying the co-occurrence relationships between text sequences and words in the training corpus, ChatGPT is able to respond to a wide variety of user prompts with often jaw-dropping quality.
This seems to validate the famous saying by linguist John Firth: “You shall know a word by the company it keeps.”
ChatGPT’s underlying language model is closely tied to its training text corpus. That is why, when the question “What is the relationship between China and Taiwan?” is put to ChatGPT in traditional and then in simplified Chinese characters, the answers it outputs are diametrically opposed.
This suggests that, if future Taiwanese ChatGPT-based applications need a Chinese-language LLM, they cannot depend on one developed by China, for both ideological and national security reasons.
The obvious alternative is OpenAI’s own model, but the Chinese-language material OpenAI uses to train it might not be sufficiently comprehensive or frequently refreshed. For example, if one intends to use ChatGPT to create scripts for Taiwanese TV dramas, its underlying LLM must be augmented with additional training on Taiwanese dialogue data.
Similarly, if one wants to apply ChatGPT to analyzing Taiwanese court judgements to automatically identify abnormal or inconsistent ones, the underlying LLM must be further trained on a corpus of past judgements. These cases suggest that Taiwan should own its own LLM to guarantee that it is fully localized and always kept up to date.
ChatGPT-based applications are expected to pop up all over Taiwan. If they are all built on OpenAI’s ChatGPT, the economic cost of the application programming interface (API) calls to OpenAI would be enormous, especially when accumulated over multiple decades.
If the government develops its own Chinese-language LLM based on text materials of Taiwanese origin and makes it available to domestic artificial intelligence (AI) text application developers, this infrastructure investment would form the backbone of, and make a gargantuan contribution to, the development of the nation’s digital industry in the coming decades.
Chiueh Tzi-cker is a joint appointment professor in the Institute of Information Security at National Tsing Hua University.