People are fascinated when they see Web-based conversational artificial intelligence (AI) systems answer questions in ways that resemble human writing or speech. These systems attract media attention because Google, Microsoft and other large technology companies are competing for dominance in the field.
However, conversational AI systems have serious flaws, and errors, some of them critical, are easy to find. This has fueled a debate: Do these systems "understand" questions and the meaning of words?
ChatGPT became a worldwide sensation when it was released in November 2022 by OpenAI, a US-based AI research laboratory. It received so much attention that Google moved up the release date of its own chatbot, Bard. ChatGPT is considered more advanced, but there is crossover between the two systems: ChatGPT is built on a language model that uses a neural network architecture called Transformer, which was originally developed by Google.
In that sense, Google is like a teacher now competing with a former student, OpenAI.
However, there is also a third party: Microsoft, which in February added a conversational AI system based on OpenAI technology to its Bing search engine. Microsoft, a major shareholder of OpenAI, said that its system can summarize and refine documents that are several pages long.
A major challenge is that large language models such as ChatGPT must reread huge amounts of information every time they update their content. This raises questions about the timeliness and accuracy of the information they provide.
“I only have knowledge through 2021,” ChatGPT responded to a question at a demo last year.
Google was embarrassed when Bard mistakenly reported that NASA’s James Webb Space Telescope had successfully taken the first-ever photo of an exoplanet.
The systems clearly do not “understand” concepts, meanings and cause-and-effect relationships, resulting in factual misunderstandings.
I recently asked ChatGPT: “What’s the difference between older brothers and older sisters?”
“Although sibling relationships differ depending on family structure and birth order, older brothers are usually older than older sisters,” it answered.
These kinds of errors occur because most language models arrange words and phrases found in existing texts. Machines calculate the occurrence probabilities of words or strings of words, and display those with the highest probabilities without understanding their contextual meanings.
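The probability-based word selection described above can be sketched with a toy bigram model. This is only an illustration with an invented mini-corpus, not how ChatGPT is actually built; real systems use neural networks over far longer contexts, but the principle of choosing high-probability continuations without grasping meaning is the same.

```python
from collections import Counter, defaultdict

# Invented mini-corpus for illustration only.
corpus = (
    "older brothers are older than younger brothers "
    "older sisters are older than younger sisters"
).split()

# Count which word follows each word (a "bigram" model).
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def most_likely_next(word):
    """Return the most frequent next word, with no grasp of meaning."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

# The model picks purely by frequency in the training text.
print(most_likely_next("older"))
```

Nothing in this process represents what "older" means; the model only knows which word tends to come next in the texts it has seen.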
This makes it difficult for ChatGPT to solve the mathematical "word problems" commonly taught in junior-high school, such as comparing two trains traveling to the same destination at different speeds. These problems require multiple steps of inference that ChatGPT cannot reliably handle.
In short, existing conversational AI systems should only be used for tasks involving the generation of natural-sounding text, not for tasks requiring a high level of understanding or content accuracy.
The media love to talk about the potential for human-like machine intelligence, but it is unlikely to arise for many decades.
However, “research on next-generation AI that features logical thinking, common sense and cognition has been advancing for several years,” Japan’s Science and Technology Promotion Organization said.
There have been three waves of AI technology. In the first two waves during the 1960s and 1980s, researchers focused on pre-programmed data analyses and human-like logic, and concluded that compiling the huge amount of data required to represent reality was not possible.
In the third wave during the 2010s, researchers emphasized machine learning rather than trying to mimic human thinking. The Internet as well as semiconductor advancements increased the potential for the combination of data and “deep learning” software to perform tasks once considered impossible.
A simple example already available is facial recognition software for unlocking smartphones.
Media are also constantly discussing the potential for multifunctional self-regulating robots that are capable of identifying objects and situations, and of “understanding” new conditions.
To do that, machine learning models require enormous amounts of data describing past examples to make inferences that resemble logic and “common sense.”
However, “Google, Tesla and Apple are still having a hard time bringing self-driving cars to practical use, suggesting that there are limits to AI that relies on machine learning,” Digital Garage director Joi Ito said.
Some researchers believe that common sense and logical thinking could eventually be realized in AI systems. Achieving this would require interdisciplinary research in fields such as brain and cognitive science, as well as software that duplicates the processes humans use to learn language, spatial awareness and social relationship skills.
One day we might revisit logic and common sense topics associated with second-wave AI research, but this time with the addition of deep learning tools.
However, there is a long way to go to narrow the gap between AI technologies and human-like intelligence in machines.
Meta’s chief AI scientist, Yann LeCun, a pioneer in deep learning technology, wants us to remember that current conversational AI systems “are far from the intelligence of dogs and cats,” not to mention humans.
Huang Chung-yuan is a professor in the Department of Computer Science and Information Engineering, Department of Artificial Intelligence and the Artificial Intelligence Research Center at Chang Gung University.