From deepfake videos of Indonesia’s presidential contenders to online hate speech directed at India’s Muslims, social media misinformation has been rising ahead of a bumper election year, and experts say tech platforms are not ready for the challenge.
Voters in Bangladesh, Indonesia, Pakistan and India go to the polls this year as more than 50 nations hold elections, including the US where former president Donald Trump is looking to make a comeback.
Despite the high stakes and evidence from previous polls of how fake online content can influence voters, digital rights experts say social media platforms are ill-prepared for the inevitable rise in misinformation and hate speech.
Illustration: Yusha
Recent layoffs at big tech firms, new laws to police online content that have tied up moderators and artificial intelligence (AI) tools that make it easier to spread misinformation could hurt poorer countries more, said Sabhanaz Rashid Diya, an expert in platform safety.
“Things have actually gotten worse since the last election cycle for many countries: The actors who abuse the platforms have gotten more sophisticated, but the resources to tackle them haven’t increased,” said Diya, founder of Tech Global Institute.
“Because of the mass layoffs, priorities have shifted. Added to that is the large volume of new regulations ... platforms have to comply, so they don’t have resources to proactively address the broader content ecosystem [and] the election integrity ecosystem,” she said.
“That will disproportionately impact the Global South,” which generally gets fewer resources from tech firms, she said.
As generative AI tools, such as Midjourney, Stable Diffusion and DALL-E, make it cheap and easy to create convincing deepfakes, concern is growing about how such material could be used to mislead or confuse voters.
AI-generated deepfakes have already been used to deceive voters from New Zealand to Argentina and the US, and authorities are scrambling to keep up with the tech even as they pledge to crack down on misinformation.
The EU — where elections for the European Parliament are to take place in June — requires tech firms to clearly label political advertising and say who paid for it, while India’s IT Rules “explicitly prohibit the dissemination of misinformation,” the Ministry of Electronics and Information Technology said last month.
Alphabet’s Google has said it plans to attach labels to AI-generated content and political ads that use digitally altered material on its platforms, including YouTube, and to limit the election-related queries its Bard chatbot and AI-based search can answer.
YouTube’s “elections-focused teams are monitoring real-time developments ... including by detecting and monitoring trends in risky forms of content and addressing them appropriately before they become larger issues,” a spokesperson for YouTube said.
Meta Platforms — which owns Facebook, WhatsApp and Instagram — has said it would bar political campaigns and advertisers from using its generative AI products in advertisements.
Meta has a “comprehensive strategy in place for elections, which includes detecting and removing hate speech and content that incites violence, reducing the spread of misinformation, making political advertising more transparent [and] partnering with authorities to action content that violates local law,” a spokesperson said.
X, formerly known as Twitter, did not respond to a request for comment on its measures to tackle election-related misinformation. TikTok, which is banned in India, also did not respond.
Misinformation on social media has had devastating consequences ahead of, and after, previous elections in many of the nations where voters are going to the polls this year.
In Indonesia, which votes on Feb. 14, hoaxes and calls for violence on social media networks spiked after the 2019 election result. At least six people were killed in subsequent unrest.
In Pakistan, where a national vote is scheduled for Feb. 8, hate speech and misinformation were rife on social media ahead of a 2018 general election, which was marred by a series of bombings that killed scores across the country.
Last year, violent clashes following the arrests of supporters of jailed former Pakistani prime minister Imran Khan led to Internet shutdowns and the blocking of social media platforms. Khan, a former cricket hero, was arrested on corruption charges and given a three-year prison sentence.
While social media firms have developed advanced algorithms to tackle misinformation and disinformation, “the effectiveness of these tools can be limited by local nuances and the intricacies of languages other than English,” said Nuurrianti Jalli, an assistant professor at Oklahoma State University.
In addition, the critical US election and global events, such as the Israel-Hamas conflict and the Russia-Ukraine war, could “sap resources and focus that might otherwise be dedicated to preparing for elections in other locales,” she added.
In Bangladesh, violent protests erupted in the months ahead of the Jan. 7 election. The vote was boycotted by the main opposition party, and Prime Minister Sheikh Hasina won a fourth straight term.
Political ads on Facebook — the biggest social media platform in the country, with more than 44 million users — are routinely mislabeled or lack disclaimers and key details, revealing gaps in the platform’s verification process, a recent study by tech research firm Digitally Right said.
Separately, a report published last month by Tech Global Institute revealed how difficult it was to determine the affiliation between Facebook pages and groups and Bangladesh’s two leading political parties or to figure out what constitutes “authoritative information” from either party.
Facebook has not commented on the studies.
In the past year, Meta, X and Alphabet have rolled back at least 17 major policies designed to curb hate speech and misinformation, and laid off more than 40,000 people, including teams that maintained platform integrity, the US non-profit Free Press said in a report last month.
“With dozens of national elections happening around the world in 2024, platform-integrity commitments are more important than ever. However, major social media companies are not remotely prepared for the upcoming election cycle,” civil rights lawyer Nora Benavidez wrote in the report.
“Without the policies and teams they need to moderate violative content, platforms risk amplifying confusion, discouraging voter engagement and creating opportunities for network manipulation to erode democratic institutions,” she wrote.
Some governments have responded to this perceived lack of control by introducing restrictive laws on online speech and expression, and these could lead social media platforms to over-enforce content moderation, tech experts said.
India — where Prime Minister Narendra Modi is widely expected to win a third term — has stepped up content removal demands, introduced individual liability provisions for firms and warned that companies could lose the safe harbor protections shielding them from liability for third-party content if they do not comply.
“The legal obligation puts additional strains on platforms ... if safe harbor is at risk, the platform will inadvertently over-enforce, so it will end up taking down a lot more content,” Diya said.
For Raman Jit Singh Chima, Asia policy director at non-profit Access Now, the issue is preparation; he says big tech firms have failed to engage with civil society ahead of elections and have not provided enough information in local languages.
“Digital platforms are even more important for this election cycle, but they are not set up to handle the problems around elections, and they are not being transparent about their measures to mitigate harms,” he said.
“It’s very worrying,” he added.