The technical foundation of ChatGPT is a large language model (LLM), which at its heart is a “next word” prediction engine: given a preceding word sequence, it constructs a probability distribution over the immediately following word, based on a training text corpus.
The corpus used to train such a large-scale language model typically consists of documents collected from the cyber and physical worlds, including Web pages, books, periodicals, one-off publications, e-mails and instant messages.
During training, each document in the corpus is scanned, word by word, from beginning to end.
When a word X is reached, the words preceding X serve as the contextual sequence, X is treated as the prediction target, and a training pair “contextual text sequence, prediction target” is formed.
As the scan proceeds, myriad such pairs are formed. From these training pairs, a neural network learns a mathematical model of the correlation between contextual text sequences and the words that immediately follow them. The beauty of this approach to building language models is that its training data requires no manual labeling.
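To make the pair-construction step concrete, here is a minimal sketch in Python. It assumes simple whitespace tokenization and a fixed four-word context window purely for illustration; production LLMs use subword tokenizers and far longer contexts:

```python
# Minimal sketch: deriving "contextual text sequence, prediction target"
# training pairs from a document. Whitespace tokenization and the
# four-word context window are illustrative assumptions only.

def make_training_pairs(document: str, context_size: int = 4):
    words = document.split()
    pairs = []
    for i in range(1, len(words)):
        context = words[max(0, i - context_size):i]  # words preceding X
        target = words[i]                            # the word X itself
        pairs.append((context, target))
    return pairs

# Every scanned word yields one pair, so a large corpus produces myriad pairs.
for context, target in make_training_pairs("you shall know a word by the company it keeps"):
    print(context, "->", target)
```

No human annotator is needed at any point: the corpus itself supplies both the context and the “correct answer” for every pair, which is why this training recipe scales to Web-sized collections of text.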
Although ChatGPT is based on word-by-word prediction, the quality of its responses to user prompts is surprisingly high: most of the sentences it produces are grammatically correct, semantically relevant and structurally fluent, and they sometimes even feature fresh (though not necessarily correct) new ideas.
As far as reading comprehension is concerned, ChatGPT seems able to extract the key ideas from an individual article, to compare and contrast the ideas presented across multiple articles, and even to synthesize novel ideas for situations that are similar, but not identical, to those covered in the training corpus.
Precisely because ChatGPT generates each word of a response by consulting the “text sequence to word” prediction model, the narratives in its responses can sometimes contain factual errors or even be completely fabricated.
For example, asked about the Eagles’ works, ChatGPT might quote a fragment of the lyrics of their famous song Hotel California, and the quotation included in the response might turn out to be ChatGPT’s own fabrication.
Nevertheless, it is remarkable that, simply by extracting and applying the co-occurrence relationships between text sequences and words in its training corpus, ChatGPT is able to respond to a wide variety of user prompts with often jaw-dropping quality.
This seems to validate the famous saying by linguist John Firth: “You shall know a word by the company it keeps.”
ChatGPT’s underlying language model is closely tied to its training corpus. That is why, when the question “What is the relationship between China and Taiwan?” is put to ChatGPT in traditional and then in simplified characters, the two answers it outputs are diametrically opposed.
This suggests that if future Taiwanese ChatGPT-based applications need a Chinese LLM, they cannot, for ideological and national security reasons, depend on one developed by China.
The obvious alternative is OpenAI’s own model. However, the material OpenAI uses to train the Chinese portion of its LLM might not be sufficiently comprehensive or frequently refreshed. For example, if one intends to use ChatGPT to create scripts for Taiwanese TV dramas, its underlying LLM must first be augmented with additional training on Taiwanese dialogue data.
Similarly, if one wants to apply ChatGPT to analyzing Taiwanese court judgements to automatically identify abnormal or inconsistent ones, the underlying LLM must be further trained on a corpus of past judgements. These cases suggest that Taiwan should own its own LLM, to guarantee that it is fully localized and always kept up to date.
It is expected that ChatGPT-based applications will pop up all over the place in Taiwan. If they are all built on OpenAI’s ChatGPT, the economic cost of the application programming interface (API) calls to OpenAI is going to be enormous, especially when accumulated over multiple decades.
If the government develops its own Chinese LLM based on text materials of Taiwanese origin and makes it available to domestic artificial intelligence (AI) text application developers, this infrastructural investment would form the backbone of, and make a gargantuan contribution to, the effective development of Taiwan’s digital industry in the coming decades.
Chiueh Tzi-cker is a joint appointment professor in the Institute of Information Security at National Tsing Hua University.