The term “fake news” has become an epithet that US President Donald Trump attaches to any unfavorable story, but it is also an analytical term that describes deliberate disinformation presented in the form of a conventional news report.
The problem is not completely novel. In 1925, Harper’s Magazine published an article about the dangers of “fake news.” However, today two-thirds of US adults get some of their news from social media, which rest on a business model that lends itself to outside manipulation and where algorithms can easily be gamed for profit or malign purposes.
Many organizations, domestic and foreign, ranging from amateurs and criminals to governments, are skilled at reverse engineering how tech platforms parse information. Russia deserves credit for being one of the first governments to understand how to weaponize social media and turn the US’ own companies against it.
Overwhelmed with the sheer volume of information available online, people find it difficult to know what to focus on. Attention, rather than information, becomes the scarce resource to capture. Big data and artificial intelligence allow microtargeting of communication so that the information people receive is limited to a “filter bubble” of the like-minded.
The “free” services offered by social media are based on a profit model in which users’ information and attention are actually the products, which are sold to advertisers. Algorithms are designed to learn what keeps users engaged, so that users can be served more ads and generate more revenue.
Emotions such as outrage stimulate engagement, and news that is outrageous but false has been shown to engage more viewers than accurate news.
One study found that such falsehoods on Twitter were 70 percent more likely to be retweeted than accurate news.
Likewise, a study of demonstrations in Germany earlier this year found that YouTube’s algorithm systematically directed users toward extremist content because that was where the “clicks” and revenue were greatest.
Fact-checking by conventional news media is often unable to keep up and sometimes can even be counterproductive by drawing more attention to the falsehood.
By its nature, the social media profit model can be weaponized by states and non-state actors alike. Facebook has been under heavy criticism for its cavalier record on protecting users’ privacy.
Facebook CEO Mark Zuckerberg has admitted that in 2016 the company was “not prepared for the coordinated information operations we regularly face.”
However, Facebook has “learned a lot since then” and has “developed sophisticated systems that combine technology and people to prevent election interference on our services,” Zuckerberg said.
Such efforts include automated programs to find and remove fake accounts; featuring Facebook pages that spread disinformation less prominently than in the past; issuing a transparency report on the number of false accounts removed; verifying the nationality of those who place political advertisements; hiring 10,000 additional people to work on security; and improving coordination with law enforcement and other companies to address suspicious activity.
Yet the problem is not solved.
An arms race will continue between the social media companies, and the states and non-state actors who invest in ways to exploit their systems.
Technological solutions like artificial intelligence are not a silver bullet.
Because it is often more sensational and outrageous, fake news travels farther and faster than real news. False information on Twitter is retweeted by many more people and far more rapidly than true information, and repeating it, even in a fact-checking context, might increase an individual’s likelihood of accepting it as true.
In the lead-up to the 2016 US presidential election, the Internet Research Agency in St Petersburg, Russia, spent more than a year creating dozens of social media accounts masquerading as local US news outlets. Sometimes the reports favored a candidate, but often they were designed simply to give an impression of chaos and disgust with democracy, and to suppress voter turnout.
When the US Congress passed the Communications Decency Act in 1996, then-infant social media companies were treated as neutral telecoms providers that enabled customers to interact with one another.
This model is clearly outdated. Under political pressure, the major companies have begun to police their networks more carefully and take down obvious fakes, including those propagated by botnets.
However, imposing limits on free speech, which is protected by the First Amendment of the US Constitution, raises difficult practical problems. While machines and non-US actors have no First Amendment rights (and private companies are not bound by the First Amendment anyway), abhorrent domestic groups and individuals do, and they can serve as intermediaries for foreign influencers.
In any case, the damage done by foreign actors might be less than the damage Americans do to themselves. The problem of fake news and foreign impersonation of real news sources is difficult to resolve, because it involves trade-offs among important values.
The social media companies, wary of coming under attack for censorship, want to avoid regulation by legislators who criticize them for both sins of omission and commission.
Experience from European elections suggests that investigative journalism and alerting the public in advance can help inoculate voters against disinformation campaigns. Yet the battle with fake news is likely to remain a cat-and-mouse game between its purveyors and the companies whose platforms they exploit.
It will become part of the background noise of elections everywhere. Constant vigilance will be the price of protecting our democracies.
Joseph Nye Jr is a professor at Harvard University and author of Is the American Century Over?
Copyright: Project Syndicate