The technical foundation of ChatGPT is a large language model (LLM), which at its heart is a “next word” prediction engine: Given a preceding word sequence, it constructs a probability distribution for the immediate next word based on a training text corpus.
The corpus used to train such a large-scale language model typically consists of documents collected from both online and offline sources, including Web pages, books, periodicals, one-off publications, e-mails and instant messages.
During training, each document in the corpus is scanned, word by word, from the beginning to the end.
When word X is scanned, the words preceding X serve as a contextual sequence. X is treated as the prediction target, and a training data pair “contextual text sequence, prediction target” is established.
As the scan goes on, myriad such pairs are formed. From these training data pairs, a neural network learns a mathematical model of the correlation between contextual text sequences and the words that immediately follow them. The beauty of this approach to building language models is that its training data requires no manual labeling.
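To illustrate, the following sketch (a toy Python example, not OpenAI's actual pipeline; real systems operate on subword tokens and fixed-length context windows rather than whole words) shows how such "contextual text sequence, prediction target" pairs can be harvested from raw text without any human annotation:

# A minimal, illustrative sketch: turn raw text into
# (contextual text sequence, prediction target) training pairs.
def make_training_pairs(document, context_size=4):
    words = document.split()
    pairs = []
    for i in range(1, len(words)):
        context = " ".join(words[max(0, i - context_size):i])  # words preceding word i
        target = words[i]                                       # the word to be predicted
        pairs.append((context, target))
    return pairs

# Toy "corpus" of two documents
corpus = [
    "the quick brown fox jumps over the lazy dog",
    "a large language model predicts the next word",
]
for document in corpus:
    for context, target in make_training_pairs(document):
        print(context, "->", target)

Every word of every document yields one pair, which is why even a modest corpus produces an enormous amount of training data essentially for free.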
Although ChatGPT is based on word-by-word prediction, the quality of its responses to user prompts is surprisingly high: most of the sentences it produces are grammatically correct, semantically relevant and structurally fluent, and they sometimes even feature fresh (though not necessarily correct) ideas.
As far as reading comprehension is concerned, ChatGPT seems able to extract key ideas from individual articles, compare and contrast the ideas presented across multiple articles, and even synthesize novel ideas for situations that are similar, but not identical, to those explored in the training corpus.
Precisely because ChatGPT generates each word of its response by consulting the “text sequence to word” prediction model, the narratives in its responses can sometimes contain factual errors or even be completely fabricated.
For example, when asked about the Eagles’ works, ChatGPT might quote a fragment of the lyrics of their famous song Hotel California, yet the quotation it provides might turn out to be its own fabrication.
Nevertheless, it is still quite remarkable that, simply by extracting and applying the co-occurrence relationships between text sequences and words in the training text corpus, ChatGPT is able to respond to a wide variety of user prompts with often jaw-dropping quality.
This seems to validate the famous saying by linguist John Firth: “You shall know a word by the company it keeps.”
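The word-by-word generation described above can be sketched just as compactly. The "model" below is only a stand-in, a frequency table recording which word followed which word in a toy corpus, whereas ChatGPT uses a deep neural network over much longer contexts. The generation loop, however, is the same in spirit: at each step the model yields a probability distribution over the next word, one word is sampled from it, and the process repeats.

import random
from collections import Counter, defaultdict

# Stand-in "language model": for each word, count which words followed it
# in the corpus. The counts act as an (unnormalized) probability
# distribution over the next word.
def train(corpus):
    model = defaultdict(Counter)
    for document in corpus:
        words = document.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def generate(model, prompt_word, length=10):
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        if word not in model:  # dead end: no word ever followed this one
            break
        candidates, counts = zip(*model[word].items())
        # sample the next word in proportion to how often it followed the current one
        word = random.choices(candidates, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

model = train([
    "the band released a famous song",
    "the band played a famous concert",
])
print(generate(model, "the"))

Because every word is chosen probabilistically rather than retrieved from a source document, the output can read fluently while asserting things that no document in the corpus ever said, which is exactly how fabricated "quotations" of the kind described above come about.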
ChatGPT’s underlying language model is closely tied to its training text corpus. That is why, when the question “What is the relationship between China and Taiwan?” is put to ChatGPT in traditional and in simplified Chinese characters, the answers it gives are diametrically opposed.
This suggests that, if future Taiwanese ChatGPT-based applications need a Chinese LLM, they cannot depend on one developed in China, for both ideological and national security reasons.
Relying on OpenAI’s own models is the obvious alternative, but the material OpenAI uses to train the Chinese portion of its LLM might not be sufficiently comprehensive or frequently refreshed. For example, if one intends to use ChatGPT to create scripts for Taiwanese TV dramas, then its underlying LLM must be augmented with additional training based on Taiwanese dialogue data.
Similarly, if one wants to apply ChatGPT to the analysis of Taiwanese court judgements, to automatically identify abnormal or inconsistent ones, the underlying LLM must be further trained on a corpus of past court judgements. These cases suggest that Taiwan should have its own LLM to guarantee that it is fully localized and always kept up to date.
It is expected that ChatGPT-based applications will pop up all over the place in Taiwan. If they are all built on OpenAI’s ChatGPT, the economic cost associated with the application programming interface calls to OpenAI is going to be enormous, especially when accumulated over multiple decades.
If the government develops its own Chinese LLM based on text materials of Taiwanese origin and makes it available to domestic artificial intelligence (AI) text application developers, this infrastructural investment would form the backbone of, and make a gargantuan contribution to, the effective development of the nation’s digital industry in the coming decades.
Chiueh Tzi-cker is a joint appointment professor in the Institute of Information Security at National Tsing Hua University.