Hours after the Israel-Hamas conflict erupted on Oct. 7, Bharat Nayak, a fact-checker in the east Indian state of Jharkhand, noticed a surge of disinformation and hate speech directed at Muslims on his dashboard of WhatsApp messages.
The viral messages from hundreds of public WhatsApp groups in India contained graphic images and videos, including many from Syria and Afghanistan falsely labeled as being from Israel, with captions in Hindi that called Muslims evil.
“They are using the crisis to spread misinformation against Muslims, saying they will attack Hindus in a similar way, and to falsely accuse opposition parties and others of supporting Hamas, and calling for their elimination,” Nayak said. “The content is very graphic, the messaging is extreme, and it gets forwarded many times, as there is no content moderation on WhatsApp.”
The conflict, which has killed more than 1,400 people in Israel and more than 8,000 in the Gaza Strip, has triggered a surge in disinformation and hate speech against Muslims and Jews across social media platforms from India to China to the US.
Meta and X, formerly known as Twitter, said they have removed tens of thousands of posts, but the volume of disinformation and hate speech underlines the failure of social media platforms to boost content moderation, particularly in languages other than English, digital rights experts say.
“We’ve tirelessly drawn their attention to these issues over the years, but social media platforms continue to fall short when it comes to combating hate speech, incitement and disinformation,” said Mona Shtaya, a nonresident fellow at the non-profit Tahrir Institute for Middle East Policy.
“The recent layoffs in trust and safety teams across platforms underscore this deficiency,” she said. “Additionally, their resource allocation — based on market size, rather than assessed risks — exacerbates the challenges faced by marginalized communities including Palestinians and others.”
In a blog post, Meta — which owns Facebook, Instagram and WhatsApp — wrote that it had “quickly established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers,” and that it is working with third-party fact-checkers in the region “to debunk false claims.”
X did not respond to a request for comment.
Failures of content moderation are not limited to the decades-long Israel-Palestine conflict.
UN human rights investigators said in 2018 that the use of Facebook had played a key role in spreading hate speech that fueled violence against the ethnic Rohingya community in Myanmar in 2017.
Rohingya refugees in 2021 sued Meta for US$150 billion, alleging that the company’s failure to police content and its platform’s design contributed to real-world violence.
Meta has acknowledged being “too slow” to act in Myanmar.
Last year, a lawsuit against Meta filed in Kenya accused the platform of allowing violent and hateful posts from Ethiopia on Facebook, and its recommendation systems of amplifying violent posts that inflamed the Ethiopian civil war.
The company has faced similar accusations related to violence in Sri Lanka, India, Indonesia and Cambodia.
The surge in disinformation during the Israel-Hamas conflict underscores that “platforms do not have the right systems in place,” said Sabhanaz Rashid Diya, a former head of policy at Meta for Bangladesh and founding board director of the Tech Global Institute think tank.
Diya said that “the historical under-investment in specific parts of the world and specific languages is now being tested in this crisis.”
“Some of the challenges we’re seeing around the information ecosystem are consequences of not building capacity; these are consequences of automated systems, staffing issues; not having sufficient fact-checkers in these markets; not having policies that are contextualized for local regions,” Diya said.
The Arab Center for Social Media Advancement, or 7amleh, has documented more than 500,000 instances in Hebrew of hate speech and incitement to violence against Palestinians and their supporters.
The absolute volume of anti-Semitic comments on YouTube videos has also increased more than 50-fold, the Institute for Strategic Dialogue in London said in a report this week.
State-affiliated accounts from Iran, Russia and China are also spreading disinformation and hate speech on Facebook and X, it said, adding that this could contribute to “popularization and deepening mistrust towards democratic institutions and the media.”
Reports of anti-Semitic and Islamophobic incidents have surged worldwide, including assaults, vandalism and the fatal stabbing of a six-year-old Palestinian boy in the US.
These incidents are a result of online hate speech, said Marc Owen Jones, an associate professor who researches disinformation in the Middle East at Hamad bin Khalifa University in Qatar.
“Much of the disinformation is violent, graphic and highly emotive — designed to provoke polarization and turn people against each other,” Jones said.
It is “driving a sense of righteousness and tribalism that contributes to violence, as we’ve seen as far away as Dagestan and Illinois. The upshot is dire,” Jones said.
Yet despite heated conversations around the need for better content moderation, trust and safety is “resource-intensive, meaning that tackling the issue is a challenge for any platform,” said Yu-lan Scholliers, head of product at Checkstep, a UK-based content moderation services firm.
With easy access to artificial intelligence, “it’s now much easier to generate real-looking but fake content — requiring more advanced detection mechanisms,” said Scholliers, who previously worked in Meta’s product data science team.
However, even if platforms invest heavily in their trust and safety teams, the main challenge “is and will be adversarial behavior — users always find more and more creative ways to avoid detection,” she said. “It is a whack-a-mole that can never be fully solved.”
With additional reporting by Avi Asher-Schapiro