As misinformation exploded during India’s four-day conflict with Pakistan, social media users turned to artificial intelligence (AI) chatbots for verification — only to encounter more falsehoods, underscoring the technology’s unreliability as a fact-checking tool.
With tech platforms reducing the number of human fact-checkers, users are increasingly relying on AI-powered chatbots — including xAI’s Grok, OpenAI’s ChatGPT and Google’s Gemini — in search of reliable information.
“Hey @Grok, is this true?” has become a common query on Elon Musk’s platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunking on social media.
However, the responses are often riddled with misinformation themselves.
Grok — now under renewed scrutiny for inserting “white genocide,” a far-right conspiracy theory, into unrelated queries — wrongly identified old video footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase during the country’s recent conflict with India.
Unrelated footage of a building on fire in Nepal was misidentified as “likely” showing Pakistan’s military response to Indian strikes.
“The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,” said McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard.
“Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,” she said.
NewsGuard’s research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.
In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.”
When Agence France-Presse (AFP) fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity, but fabricated details about her identity and where the image was likely taken.
Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as “genuine,” even citing credible-sounding scientific expeditions to support its false claim.
In reality, the video was AI-generated, and many users went on to cite Grok’s assessment as evidence that the clip was real, AFP fact-checkers in Latin America said.
Such findings have raised concerns, as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.
The shift also comes as Meta announced earlier this year that it was ending its third-party fact-checking program in the US, turning over the task of debunking falsehoods to ordinary users under a model known as “Community Notes,” popularized by X.
However, researchers have repeatedly questioned the effectiveness of “Community Notes” in combating falsehoods.
Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the US, where conservative advocates maintain it suppresses free speech and censors right-wing content — something professional fact-checkers vehemently reject.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output might be subject to political influence or control.
Musk’s xAI recently blamed an “unauthorized modification” for causing Grok to generate unsolicited posts referencing “white genocide” in South Africa.
When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the “most likely” culprit.
Musk, the South African-born billionaire backer of US President Donald Trump, has previously peddled the unfounded claim that South Africa’s leaders were “openly pushing for genocide” of white people.
“We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,” International Fact-Checking Network director Angie Holan said.
“I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers,” she said.