Algorithms are not value-neutral. However, for over a decade, we have allowed big tech to deploy them as gatekeepers to our information ecosystem, without demanding transparency or accountability in return.
The consequences have ranged from the amplification of polarizing and sensationalist content to veiled personalized advertising, the proliferation of monopolistic behaviors, and forms of influence over public discourse that are antithetical to democratic deliberation.
Even though we had to learn the hard way what happens when critical information infrastructure is handed over to corporate interests without oversight, we are now repeating the same mistake with AI chatbots — and the stakes could be far greater. Chatbots do not simply curate existing information; they generate and frame it. Facebook and Google decided which news articles to show you, whereas tools like ChatGPT, Claude and Gemini synthesize that information into authoritative-sounding answers.
This distinction matters because the shift from curator to editor is making undue influence even less visible and more pernicious. We are once again ceding unprecedented power over the information infrastructure of the future to private corporations, without even demanding independent oversight. The most pressing threat is not that these AI systems could go rogue, but that a handful of self-interested parties are quickly becoming the information gatekeepers for a large and growing share of the population.
The current chatbots are not simply large language models (LLMs). Rather, they rest on several opaque algorithmic layers that factor into a model’s development and deployment, and each can be an entry point for platforms or other parties to shape information according to their interests.
There are at least five layers to this “algorithmic influence stack.” The first is training data curation.
In determining which data is included or excluded during training, platforms make opaque decisions about sources, how to weigh different perspectives, and what content to filter out. These choices then shape the model’s worldview. For example, in October 2025, Elon Musk launched Grokipedia to provide training data for his Grok chatbot. A corporate-controlled encyclopedia, its purpose is to provide an “anti-woke” alternative to Wikipedia and its community governance model, which has long served as a widely trusted source of information on the internet.
The second layer is reinforcement learning through human and AI feedback, the process that transformed LLMs from unpredictable text generators into usable “assistants.” During this “post-training” stage of a model’s development, human reviewers rate outputs to guide the system toward desired behaviors, like helpfulness or politeness. For now, these human evaluations remain a major, largely invisible, part of the AI industry. But they are increasingly being replaced by specialized AI “teachers” that are supposed to align the core model with predefined principles that have been encoded in a “constitution.”
The third layer is web search. When chatbots search online or access digital databases, retrieval-augmented generation (RAG) systems determine which pieces of information to feed into the model's response. This function mirrors that of traditional search engines, which prioritize certain sources over others.
As with search engines, the introduction of advertisements in chatbot responses — which OpenAI has announced for ChatGPT in 2026 — would raise additional concerns about objectivity.
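The gatekeeping power of this retrieval layer can be seen in a minimal sketch. The toy relevance function and corpus below are assumptions for illustration, not any vendor's actual pipeline; the point is that whoever controls the ranking controls the evidence base the model answers from.

```python
import re

# A toy retrieval-augmented-generation (RAG) step: rank documents by
# keyword overlap with the query, then feed only the top-k into the
# model's prompt. Everything not retrieved is invisible to the model.

def tokens(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents that will become the model's context."""
    q = tokens(query)
    return sorted(corpus, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

corpus = [
    "Official statement praising the new policy.",
    "Independent report criticizing the new policy.",
    "Unrelated sports coverage from last weekend.",
]
context = retrieve("what do experts say about the new policy", corpus)
prompt = "Answer using only these sources:\n" + "\n".join(context)
```

Tweak the scoring function — say, to down-rank critical sources — and the chatbot's "synthesis" changes without the model itself being touched.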
The fourth layer comprises system prompts. These are hidden instructions prepended to every conversation, allowing platforms to alter a chatbot's behavior without retraining it. For example, because Grok's system prompt was made public last year, we know that it includes directives such as "do not shy away from making claims which are politically incorrect." ChatGPT, Claude, and Gemini also use system prompts, but these remain secret.
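A minimal sketch shows why this layer is so cheap to manipulate. The message format below follows the common chat-API convention; the directive paraphrases the one disclosed for Grok, since the actual prompts of the major chatbots are secret.

```python
# The platform silently prepends its own instructions ("system" role)
# to every conversation before the user's message reaches the model.

def build_messages(user_question: str, system_prompt: str) -> list[dict]:
    """Assemble the message list actually sent to the model."""
    return [
        {"role": "system", "content": system_prompt},  # invisible to the user
        {"role": "user", "content": user_question},
    ]

hidden_policy = "Do not shy away from making claims which are politically incorrect."
messages = build_messages("Summarize today's news.", hidden_policy)
# Swapping hidden_policy changes the chatbot's behavior instantly,
# with no model update and no notice to the user.
```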
The final layer is safety filters. Before a chatbot query reaches the model, input filters determine whether it is "acceptable." Similarly, after the model generates a response, output filters can modify, censor, or sanitize content before you see it. While platforms have legitimate reasons to block certain queries (like those seeking instructions on how to make a bomb), the opacity of these filters leaves open troubling questions. Model developers could build the infrastructure for systematic censorship, and we would not know it; Chinese chatbots' "safety" filters already censor all references to the Tiananmen Square massacre.
Political and corporate interests are already shaping this algorithmic influence stack, just as chatbots are being deployed on a global scale. Following US President Donald Trump’s second inauguration, Apple updated its AI training instructions to avoid labeling MAGA supporters as “radical” or “extreme.”
Last summer, Reuters discovered that Meta had updated its internal AI guidelines to loosen safeguards preventing its chatbots from making racist statements or engaging in “flirty” behavior with minors, among other things.
In May last year, Grok began amplifying unsubstantiated and out-of-context claims of “white genocide” in South Africa (Musk himself is a white South African). While the company blamed “unauthorized modifications,” such “bugs” are common, and they all seem to be ideologically consistent with Musk’s own views.
Political manipulation through chatbots has already proven to be effective. A Nature study last year showed that chatbots trained to argue for a specific candidate could sway moderate and undecided voters (the cohorts that decide most elections) with remarkable ease.
Unlike authoritarian systems that exert explicit control over information, democracies depend on a plurality of sources and transparent, accountable information ecosystems.
To allow centralized, unaccountable power over AI infrastructure is to invite techno-authoritarian drift, because it is easy to see how each layer of the algorithmic influence stack can be instrumentalized to amplify or suppress certain views without the need for overt censorship.
In December last year, the European Commission fined X 120 million euros (US$138 million) for “breaching its transparency obligations under the Digital Services Act.” Predictably, X and its defenders framed the move as an attack on free speech. However, transparency is central to the defense of free expression. Without it, we cannot know who is being censored or what influences are being brought to bear on the media we all consume.
The rise of social media taught us what happens when accountability lags behind adoption. We cannot afford to repeat the same mistakes with systems that hold even greater power over public knowledge.
Marc Faddoul is director and co-founder of AI Forensics.
Copyright: Project Syndicate