The technical foundation of ChatGPT is a large language model (LLM), which at its heart is a “next word” prediction engine: Given a preceding word sequence, it constructs a probability distribution for the immediate next word based on a training text corpus.
The corpus used to train such a large-scale language model typically consists of documents collected from both digital and print sources, including Web pages, books, periodicals, one-off publications, e-mails and instant messages.
During training, each document in the corpus is scanned, word by word, from beginning to end.
When a word X is scanned, the words preceding X serve as the contextual sequence, X is treated as the prediction target, and a training data pair (contextual text sequence, prediction target) is established.
As the scan proceeds, myriad such pairs are formed. From these pairs, a neural network learns a mathematical model of the correlation between contextual text sequences and the words that immediately follow them. The beauty of this approach to building language models is that its training data requires no manual labeling.
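To make this concrete, here is a minimal Python sketch of how such (context, target) pairs could be formed. The whitespace tokenization and the fixed four-word context window are illustrative assumptions; real LLMs use subword tokenizers and far longer contexts.

    def make_training_pairs(document: str, context_size: int = 4):
        # Whitespace splitting stands in for a real subword tokenizer.
        words = document.split()
        pairs = []
        for i in range(1, len(words)):
            # The words preceding position i are the contextual sequence;
            # the word at position i is the prediction target.
            context = words[max(0, i - context_size):i]
            pairs.append((context, words[i]))
        return pairs

    for context, target in make_training_pairs("you shall know a word by the company it keeps")[:3]:
        print(context, "->", target)
    # ['you'] -> shall
    # ['you', 'shall'] -> know
    # ['you', 'shall', 'know'] -> a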
Although ChatGPT is based on word-by-word prediction, the quality of its responses to user prompts is surprisingly high: most of the sentences it produces are grammatically correct, semantically relevant and structurally fluent, and they sometimes even feature fresh (though not necessarily correct) ideas.
As far as reading comprehension is concerned, ChatGPT seems able to extract the key ideas from an individual article, to compare and contrast the ideas presented across multiple articles, and even to synthesize novel ideas for situations that are similar, but not identical, to those covered in the training corpus.
Precisely because ChatGPT generates each word of a response by consulting the "text sequence to word" prediction model, the narratives in its responses can contain factual errors or even be completely fabricated.
For example, when asked about The Eagles' works, ChatGPT might quote a fragment of the lyrics of their famous song Hotel California, only for the quotation to turn out to be ChatGPT's own fabrication.
Nevertheless, it is remarkable that, simply by extracting and applying the co-occurrence relationships between text sequences and words in its training corpus, ChatGPT can respond to a wide variety of user prompts with often jaw-dropping quality.
This seems to validate the famous saying by linguist John Firth: “You shall know a word by the company it keeps.”
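To illustrate "the company a word keeps" in miniature, the following self-contained Python sketch builds a toy bigram table from a tiny, made-up corpus and generates text by sampling from it. A real LLM replaces the table with a neural network conditioned on a much longer context, but the underlying co-occurrence principle is the same.

    import random
    from collections import Counter, defaultdict

    corpus = "the band played on and the band played well".split()

    # Record which words follow each word (the co-occurrence relationships).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start: str, max_words: int = 8) -> list:
        words = [start]
        for _ in range(max_words):
            dist = follows.get(words[-1])
            if not dist:  # no observed continuation; stop
                break
            candidates, counts = zip(*dist.items())
            # Sample in proportion to observed frequency. Sampling, rather
            # than always picking the most likely word, is one reason fluent
            # but fabricated text can emerge.
            words.append(random.choices(candidates, weights=counts)[0])
        return words

    print(" ".join(generate("the")))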
ChatGPT's underlying language model is closely tied to its training corpus. That is why, when the question "What is the relationship between China and Taiwan?" is put to ChatGPT in traditional and in simplified Chinese characters, the answers it outputs are diametrically opposed.
This suggests that if future Taiwanese ChatGPT-based applications need a Chinese LLM, they cannot, for ideological and national security reasons, depend on one developed in China.
At the same time, the material that OpenAI uses to train its Chinese LLM might not be sufficiently comprehensive or frequently refreshed. For example, to use ChatGPT to create scripts for Taiwanese TV dramas, its underlying LLM would have to be augmented with additional training on Taiwanese dialogue data.
Similarly, to apply ChatGPT to analyzing Taiwanese court judgments and automatically identifying abnormal or inconsistent ones, the underlying LLM would have to be further trained on a corpus of past judgments. These cases suggest that Taiwan should own its LLM to guarantee that it is fully localized and kept up to date.
ChatGPT-based applications are expected to pop up all over Taiwan. If they are all built on OpenAI's ChatGPT, the economic cost of the application programming interface calls to OpenAI would be enormous, especially accumulated over decades.
If the government develops its own Chinese LLM based on text materials of Taiwanese origin and makes it available to domestic artificial intelligence (AI) text application developers, this infrastructure investment would form the backbone of the nation's digital industry and contribute enormously to its development in the coming decades.
Chiueh Tzi-cker is a joint appointment professor in the Institute of Information Security at National Tsing Hua University.