People are fascinated when Web-based conversational artificial intelligence (AI) systems answer questions in ways that resemble human writing or speech. These systems attract media attention because Google, Microsoft and other large technology companies are competing for dominance.
However, conversational AI systems have serious flaws, and errors, some of them fatal, are easy to find. There is also a heated debate: Do they “understand” questions and the meaning of words?
ChatGPT became a worldwide sensation when it was released in November last year by OpenAI, a US-based AI research laboratory. It received so much attention that Google moved up its release date for its own chatbot, Bard. ChatGPT is considered more advanced, but there is crossover between the two systems. ChatGPT uses an AI-based language model built with a neural network architecture called Transformer that was developed by Google.
Google’s conversational AI effort can thus be viewed as a teacher now competing with a former student, OpenAI.
However, there is also a third party: Microsoft, which in February added a conversational AI system based on OpenAI technology to its Bing search engine. Microsoft, a major shareholder of OpenAI, said that its system can summarize and refine documents that are several pages long.
A major challenge is that large language models such as ChatGPT must reread huge amounts of information every time their content is updated. This raises questions about information timeliness and accuracy.
“I only have knowledge through 2021,” ChatGPT responded to a question at a demo last year.
Google was embarrassed when Bard mistakenly reported that NASA’s James Webb Space Telescope had successfully taken the first-ever photo of an exoplanet.
The systems clearly do not “understand” concepts, meanings and cause-and-effect relationships, resulting in factual misunderstandings.
I recently asked ChatGPT: “What’s the difference between older brothers and older sisters?”
“Although sibling relationships differ depending on family structure and birth order, older brothers are usually older than older sisters,” it answered.
These kinds of errors occur because most language models arrange words and phrases found in existing texts. Machines calculate the occurrence probabilities of words or strings of words, and display those with the highest probabilities without understanding their contextual meanings.
This makes it difficult for ChatGPT to solve mathematical “word problems” commonly taught in junior-high school, such as comparing two trains traveling to the same destination at different speeds. These problems require multiple steps of inference that ChatGPT cannot handle.
In short, existing conversational AI systems should only be used for tasks involving natural-sounding conversational text, not for tasks requiring high levels of understanding or content accuracy.
Media love to talk about the potential for human-like machine intelligence, but it is unlikely to arise for many decades.
However, “research on next-generation AI that features logical thinking, common sense and cognition has been advancing for several years,” Japan’s Science and Technology Promotion Organization said.
There have been three waves of AI technology. In the first two waves, during the 1960s and the 1980s, researchers focused on pre-programmed data analyses and human-like logic, and concluded that compiling the huge amount of data required to represent reality was not possible.
In the third wave during the 2010s, researchers emphasized machine learning rather than trying to mimic human thinking. The Internet as well as semiconductor advancements increased the potential for the combination of data and “deep learning” software to perform tasks once considered impossible.
A simple example already available is facial recognition software for unlocking smartphones.
Media are also constantly discussing the potential for multifunctional self-regulating robots that are capable of identifying objects and situations, and of “understanding” new conditions.
To do that, machine learning models require enormous amounts of data describing past examples to make inferences that resemble logic and “common sense.”
However, “Google, Tesla and Apple are still having a hard time bringing self-driving cars to practical use, suggesting that there are limits to AI that relies on machine learning,” Digital Garage director Joi Ito said.
Some researchers believe that common sense and logical thinking could eventually be realized in AI systems. Achieving that would require interdisciplinary research from fields such as brain and cognitive science, as well as software that duplicates the processes humans use to learn language, spatial awareness and social relationship skills.
One day we might revisit logic and common sense topics associated with second-wave AI research, but this time with the addition of deep learning tools.
However, there is a long way to go to narrow the gap between AI technologies and human-like intelligence in machines.
Meta’s chief AI scientist, Yann LeCun, a pioneer in deep learning technology, wants us to remember that current conversational AI systems “are far from the intelligence of dogs and cats,” not to mention humans.
Huang Chung-yuan is a professor in the Department of Computer Science and Information Engineering, Department of Artificial Intelligence and the Artificial Intelligence Research Center at Chang Gung University.