People are fascinated when they observe Web-based conversational artificial intelligence (AI) systems answering questions in ways that resemble human writing or speech. These systems attract media attention because Google, Microsoft and other large technology companies are competing for dominance.
However, conversational AI systems have serious flaws, and critical errors are easy to find. A major debate persists: Do these systems “understand” questions and the meaning of words?
ChatGPT became a worldwide sensation when it was released in November last year by OpenAI, a US-based AI research laboratory. It received so much attention that Google moved up its release date for its own chatbot, Bard. ChatGPT is considered more advanced, but there is crossover between the two systems. ChatGPT uses an AI-based language model built with a neural network architecture called Transformer that was developed by Google.
Google’s relationship with OpenAI can thus be viewed as that of a teacher facing a now-competing former student.
However, there is also a third party: Microsoft, which added a conversational AI system based on OpenAI technology in February. Microsoft, which is a major shareholder of OpenAI, said that its system can summarize and refine documents that are several pages long.
A major challenge is that large language models such as ChatGPT must reread huge amounts of information every time their content is updated. This raises questions about the timeliness and accuracy of the information they provide.
“I only have knowledge through 2021,” ChatGPT responded to a question at a demo last year.
Google was embarrassed when Bard mistakenly reported that NASA’s James Webb Space Telescope had successfully taken the first-ever photo of an exoplanet.
The systems clearly do not “understand” concepts, meanings and cause-and-effect relationships, resulting in factual misunderstandings.
I recently asked ChatGPT: “What’s the difference between older brothers and older sisters?”
“Although sibling relationships differ depending on family structure and birth order, older brothers are usually older than older sisters,” it answered.
These kinds of errors occur because most language models arrange words and phrases found in existing texts. Machines calculate the occurrence probabilities of words or strings of words, and display those with the highest probabilities without understanding their contextual meanings.
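The mechanism described above can be illustrated with a deliberately tiny sketch. The snippet below is not how ChatGPT actually works (modern systems use neural networks trained on billions of words), but a minimal bigram model on a toy corpus shows the core idea: the machine counts how often words follow one another and picks the most frequent continuation, with no notion of what the words mean.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model trains on billions of words.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the highest-frequency next word and its probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = predict_next("the")
print(word, round(prob, 2))
```

The model happily emits the statistically likeliest word whether or not it makes sense in context, which is exactly the failure mode behind answers like the “older brothers are usually older than older sisters” example above.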
This makes it difficult for ChatGPT to solve the mathematical “word problems” commonly taught in junior-high school, such as comparing two trains traveling to the same destination at different speeds. Such problems require multiple steps of inference that ChatGPT cannot handle.
In short, existing conversational AI systems should be used only for tasks involving natural-language text, not for tasks requiring high levels of understanding or content accuracy.
Media love to talk about the potential for human-like machine intelligence, but it is unlikely to arise for many decades.
However, “research on next-generation AI that features logical thinking, common sense and cognition has been advancing for several years,” Japan’s Science and Technology Promotion Organization said.
There have been three waves of AI technology. In the first two waves during the 1960s and 1980s, researchers focused on pre-programmed data analyses and human-like logic, and concluded that compiling the huge amount of data required to represent reality was not possible.
In the third wave during the 2010s, researchers emphasized machine learning rather than trying to mimic human thinking. The Internet as well as semiconductor advancements increased the potential for the combination of data and “deep learning” software to perform tasks once considered impossible.
A simple example already available is facial recognition software for unlocking smartphones.
Media are also constantly discussing the potential for multifunctional self-regulating robots that are capable of identifying objects and situations, and of “understanding” new conditions.
To do that, machine learning models require enormous amounts of data describing past examples to make inferences that resemble logic and “common sense.”
However, “Google, Tesla and Apple are still having a hard time bringing self-driving cars to practical use, suggesting that there are limits to AI that relies on machine learning,” Digital Garage director Joi Ito said.
Some researchers believe that common sense and logical thinking could eventually be realized in AI systems. They would require interdisciplinary research from fields such as brain and cognitive science, as well as software that duplicates processes that humans use to learn language, spatial awareness and social relationship skills.
One day we might revisit logic and common sense topics associated with second-wave AI research, but this time with the addition of deep learning tools.
However, there is a long way to go to narrow the gap between AI technologies and human-like intelligence in machines.
Meta’s chief AI scientist, Yann LeCun, a pioneer in deep learning technology, wants us to remember that current conversational AI systems “are far from the intelligence of dogs and cats,” not to mention humans.
Huang Chung-yuan is a professor in the Department of Computer Science and Information Engineering, Department of Artificial Intelligence and the Artificial Intelligence Research Center at Chang Gung University.