In an election-stuffed year, should people be worried about the kind of role social media apps like TikTok play in the democratic process?
It only took me a few minutes to find out.
As an experiment in the run-up to last week’s European elections, I created a TikTok account, clicked on a video on French President Emmanuel Macron’s official feed and then let the app’s suggested content from other users take over.
The algorithm took me down what can only be described as a rabbit hole very friendly toward Russian President Vladimir Putin: The first video said in bold letters that Macron had “declared war on Russia” by offering to train military instructors in Ukraine, while several others attacked him for not inviting Russia to D-Day ceremonies.
One falsely claimed that an infamous civilian massacre in Nazi-occupied France in 1944 had been perpetrated almost entirely by Ukrainians. I was also fed a rant about Jews controlling France by a convicted antisemite.
When I reached out to TikTok, it confirmed that these videos breached its guidelines forbidding disinformation and took them down, but the episode gives a sense of just how much pressure the platform's 6,000-plus EU human moderators and its automated tools appear to be under.
My subsequent search for “Xinjiang” saw me fall into a comparable China rabbit hole, stuffed with upbeat videos about a region accused of forced labor, one of which was posted by a government-controlled media account.
A Rutgers University report last year found that topics often suppressed by the Chinese government within its borders appeared to be underrepresented on TikTok, owned by China-headquartered ByteDance Ltd. TikTok contested the findings and says the state has no influence over the app.
Other experiments suggest an algorithmic propensity to create echo chambers and misinformation minefields.
Vox reporter Christian Paz tells the tale of two test accounts, one searching for terms including "Donald Trump," the former US president, and the other "Joe Biden," the incumbent, and how they morphed into ideologically opposite feeds.
The app’s explosive growth — ByteDance’s profit rose 60 percent last year to more than US$40 billion — also seems to have made it vulnerable to false advertising.
Global Witness, a non-governmental organization, recently created 16 mock ads containing false information designed to suppress turnout ahead of the EU elections; it sent them to several social media apps, but only TikTok approved them all.
TikTok responded by blaming the “leakage” on one of its human moderators and said it would keep improving its controls.
TikTok says it is not a go-to hub for breaking news, and that is true — the app’s 1.5 billion-strong user base was largely built on cute kitties, lip-syncing stars and the kind of content that actually benefits from tunnel vision. It would be ludicrous to purely blame technology for democratic ills like polarization or misinformation, which were around long before the first selfie was taken.
However, judging by the growing share of young people who now get their news primarily from TikTok, and the notion that democracy benefits from an informed electorate and responsible speech, it is hard to shrug it off as neutral either.
The response from governments, while clearly less relaxed than in the heyday of Meta Platforms Inc's Facebook or Instagram, has been all over the place. Governments that only two years ago signed a "declaration for the future of the Internet" to protect human rights and the free flow of information are now resorting to haphazard bans, from the Biden administration to Macron, who temporarily blocked access to TikTok during civil unrest in New Caledonia, a decision that disinformation expert Stephanie Lamy says was viewed on the ground as an attack on civil society.
Macron has accused the app of pumping out Russian propaganda, but the cure can be worse than the disease — as India’s 2020 ban has shown, the disinformation merely shifts to other apps.
The charge of hypocrisy is also valid: Politicians cannot seem to live without TikTok either.
What is needed is a focus on gathering the willpower needed to regulate such platforms properly instead of crude bans or micromanaging content.
New EU rules under the Digital Services Act go in the right direction, forcing platforms to submit risk assessments and potentially alter the algorithms that curate content or pay a fine; under that pressure, TikTok halted its Lite rewards program over fears that children could become addicted.
However, we also need a more complete vision of the endgame: a 21st-century version of the journey from the bigoted pamphlets and yellow journalism of the late 19th century to trusted media voices.
In their book The Business of Hate, entrepreneur Henri Verdier and former Paris City Hall official Jean-Louis Missika lay out some good recommendations, from investing in regulatory resources to demanding algorithmic transparency and accountability.
However, their key idea has little to do with technology: a new version of the "fairness doctrine" that existed in the US until the late 1980s, which mandated balanced coverage of controversial subjects. That might sound stuffy and retrograde, but it might be one way to avoid the context-free segregation of content driven solely by business models and opaque algorithmic curation.
Social media users may be more aligned with regulators than we think.
Earlier this year, Macron invited 100 French citizens to debate the question: “How can we create a media that strengthens democracy?”
Over four days, in the time it would have taken to passively consume thousands of TikTok videos, they came up with clear calls for a more level playing field between traditional and social media, harsher punishments to deter lawbreakers and new measures to guarantee pluralism.
Maybe TikTokers should be asked for their opinion, not just their eyeballs.
Lionel Laurent is a Bloomberg Opinion columnist writing about the future of money and the future of Europe. Previously, he was a reporter for Reuters and Forbes. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.