People are fascinated when they see Web-based conversational artificial intelligence (AI) systems answering questions in ways that resemble human writing or speech. These systems attract media attention because Google, Microsoft and other large technology companies are competing for dominance.
However, conversational AI systems have serious flaws, and errors, some of them glaring, are easy to find. A major debate persists: Do they actually "understand" questions and the meaning of words?
ChatGPT became a worldwide sensation when it was released in November last year by OpenAI, a US-based AI research laboratory. It received so much attention that Google moved up the release date of its own chatbot, Bard. ChatGPT is considered more advanced, but there is crossover between the two systems: ChatGPT uses an AI-based language model built with a neural network architecture called Transformer, which was developed by Google.
In that sense, Google is like a teacher now competing with its former student, OpenAI.
However, there is also a third party: Microsoft, which in February added a conversational AI system based on OpenAI technology to its Bing search engine. Microsoft, a major shareholder in OpenAI, said that its system can summarize and refine documents that are several pages long.
A major challenge is that large language models such as ChatGPT must reread huge amounts of information every time they update their content. This raises questions about the timeliness and accuracy of their information.
“I only have knowledge through 2021,” ChatGPT responded to a question at a demo last year.
Google was embarrassed when Bard mistakenly reported that NASA’s James Webb Space Telescope had successfully taken the first-ever photo of an exoplanet.
The systems clearly do not “understand” concepts, meanings and cause-and-effect relationships, resulting in factual misunderstandings.
I recently asked ChatGPT: “What’s the difference between older brothers and older sisters?”
“Although sibling relationships differ depending on family structure and birth order, older brothers are usually older than older sisters,” it answered.
These kinds of errors occur because most language models merely rearrange words and phrases found in existing texts. Machines calculate the occurrence probabilities of words or strings of words, and output those with the highest probabilities, without understanding their contextual meanings.
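The core idea can be illustrated with a toy sketch. This is not how ChatGPT actually works (it uses a neural network rather than raw counting), but a simple bigram model shows what "picking the highest-probability continuation without understanding meaning" looks like. The corpus and function names here are hypothetical, chosen purely for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus; real models are trained on billions of words.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation with the highest observed probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = most_likely_next("the")
print(word, prob)  # "cat" follows "the" most often in this tiny corpus
```

The model will confidently emit whatever continuation was most frequent in its training text, which is exactly why fluent-sounding output can still be factually wrong.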
This makes it difficult for ChatGPT to solve the mathematical "word problems" commonly taught in junior-high school, such as comparing two trains traveling to the same destination at different speeds. These problems require multiple steps of inference that ChatGPT cannot reliably perform.
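To see why such problems require chained inference rather than pattern matching, consider a hypothetical instance (the numbers are my own, not from the article): train A departs at 60 km/h, and train B departs from the same station an hour later at 90 km/h. Each step depends on the result of the previous one:

```python
# Hypothetical word problem: train A departs at 60 km/h; train B departs
# from the same station 1 hour later at 90 km/h. When does B catch up?
speed_a, speed_b = 60.0, 90.0      # km/h
head_start_hours = 1.0

# Step 1: A's lead at the moment B departs.
lead_km = speed_a * head_start_hours          # 60 km

# Step 2: B closes the gap at the difference of the two speeds.
closing_speed = speed_b - speed_a             # 30 km/h

# Step 3: time for B to close the lead.
catch_up_hours = lead_km / closing_speed      # 2 hours after B departs

# Step 4: distance from the station where they meet.
meeting_point_km = speed_b * catch_up_hours   # 180 km

print(catch_up_hours, meeting_point_km)
```

A system that only predicts likely word sequences has no mechanism for carrying the intermediate quantities (the lead, the closing speed) from one step to the next, which is why multi-step word problems expose its limits.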
In short, existing conversational AI systems should only be used for tasks involving the generation of natural-sounding text, not for tasks requiring a high level of understanding or factual accuracy.
Media love to talk about the potential for human-like machine intelligence, but it is unlikely to arise for many decades.
However, “research on next-generation AI that features logical thinking, common sense and cognition has been advancing for several years,” Japan’s Science and Technology Promotion Organization said.
There have been three waves of AI technology. In the first two waves, during the 1960s and 1980s, researchers focused on pre-programmed data analysis and human-like logic, but concluded that compiling the huge amounts of data required to represent reality was not feasible.
In the third wave during the 2010s, researchers emphasized machine learning rather than trying to mimic human thinking. The Internet as well as semiconductor advancements increased the potential for the combination of data and “deep learning” software to perform tasks once considered impossible.
A simple example already available is facial recognition software for unlocking smartphones.
Media are also constantly discussing the potential for multifunctional self-regulating robots that are capable of identifying objects and situations, and of “understanding” new conditions.
To do that, machine learning models require enormous amounts of data describing past examples to make inferences that resemble logic and “common sense.”
However, “Google, Tesla and Apple are still having a hard time bringing self-driving cars to practical use, suggesting that there are limits to AI that relies on machine learning,” Digital Garage director Joi Ito said.
Some researchers believe that common sense and logical thinking could eventually be realized in AI systems. They would require interdisciplinary research from fields such as brain and cognitive science, as well as software that duplicates processes that humans use to learn language, spatial awareness and social relationship skills.
One day we might revisit logic and common sense topics associated with second-wave AI research, but this time with the addition of deep learning tools.
However, there is a long way to go to narrow the gap between AI technologies and human-like intelligence in machines.
Meta’s chief AI scientist, Yann LeCun, a pioneer in deep learning technology, wants us to remember that current conversational AI systems “are far from the intelligence of dogs and cats,” not to mention humans.
Huang Chung-yuan is a professor in the Department of Computer Science and Information Engineering, Department of Artificial Intelligence and the Artificial Intelligence Research Center at Chang Gung University.