As misinformation exploded during India’s four-day conflict with Pakistan, social media users turned to an artificial intelligence (AI) chatbot for verification — only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.
With tech platforms reducing the number of human fact-checkers, users are increasingly relying on AI-powered chatbots — including xAI’s Grok, OpenAI’s ChatGPT and Google’s Gemini — in search of reliable information.
“Hey @Grok, is this true?” has become a common query on Elon Musk’s platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunking on social media.
However, the responses are often riddled with misinformation themselves.
Grok — now under renewed scrutiny for inserting “white genocide,” a far-right conspiracy theory, into unrelated queries — wrongly identified old video footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase during the country’s recent conflict with India.
Grok also misidentified unrelated footage of a building on fire in Nepal as “likely” showing Pakistan’s military response to Indian strikes.
“The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,” said McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard.
“Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,” she said.
NewsGuard’s research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.
In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.”
When Agence France-Presse (AFP) fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity, but fabricated details about her identity and where the image was likely taken.
Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as “genuine,” even citing credible-sounding scientific expeditions to support its false claim.
In reality, the video was AI-generated, with many users citing Grok’s assessment as evidence that the clip was real, AFP fact-checkers in Latin America said.
Such findings have raised concerns, as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.
The shift also comes as Meta announced earlier this year that it was ending its third-party fact-checking program in the US, turning over the task of debunking falsehoods to ordinary users under a model known as “Community Notes,” popularized by X.
However, researchers have repeatedly questioned the effectiveness of “Community Notes” in combating falsehoods.
Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the US, where conservative advocates maintain it suppresses free speech and censors right-wing content — something professional fact-checkers vehemently reject.
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output might be subject to political influence or control.
Musk’s xAI recently blamed an “unauthorized modification” for causing Grok to generate unsolicited posts referencing “white genocide” in South Africa.
When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the “most likely” culprit.
Musk, the South African-born billionaire backer of US President Donald Trump, has previously peddled the unfounded claim that South Africa’s leaders were “openly pushing for genocide” of white people.
“We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions,” International Fact-Checking Network director Angie Holan said.
“I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers,” she said.