Algorithms are not value-neutral. However, for over a decade, we have allowed big tech to deploy them as gatekeepers to our information ecosystem, without demanding transparency or accountability in return.
The consequences have ranged from the amplification of polarizing and sensationalist content to veiled personalized advertising, the proliferation of monopolistic behaviors, and forms of influence over public discourse that are antithetical to democratic deliberation.
Even though we had to learn the hard way what happens when critical information infrastructure is handed over to corporate interests without oversight, we are now repeating the same mistake with AI chatbots — and the stakes could be far greater. Chatbots do not simply curate existing information; they generate and frame it. Facebook and Google decided which news articles to show you, whereas tools like ChatGPT, Claude and Gemini synthesize that information into authoritative-sounding answers.
This distinction matters because the shift from curator to editor is making undue influence even less visible and more pernicious. We are once again ceding unprecedented power over the information infrastructure of the future to private corporations, without even demanding independent oversight. The most pressing threat is not that these AI systems could go rogue, but that a handful of self-interested parties are quickly becoming the information gatekeepers for a large and growing share of the population.
The current chatbots are not simply large language models (LLMs). Rather, they rest on several opaque algorithmic layers that factor into a model’s development and deployment, and each can be an entry point for platforms or other parties to shape information according to their interests.
There are at least five layers to this “algorithmic influence stack.” The first is training data curation.
In determining which data is included or excluded during training, platforms make opaque decisions about sources, how to weigh different perspectives, and what content to filter out. These choices then shape the model’s worldview. For example, in October 2025, Elon Musk launched Grokipedia, a corporate-controlled encyclopedia intended to supply training data for his Grok chatbot and to serve as an “anti-woke” alternative to Wikipedia, whose community governance model has long made it a widely trusted source of information on the internet.
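To make the point concrete, here is a minimal, purely illustrative sketch of how such curation choices can end up hard-coded in a data pipeline. The source names, weights, and blocked terms are hypothetical, not taken from any real system.

```python
# Illustrative sketch only: a toy pre-training data pipeline showing how
# source selection and content filtering decisions get encoded in code that
# outsiders never see. Source names, weights and filter terms are hypothetical.

ALLOWED_SOURCES = {"encyclopedia_a": 1.0, "news_wire_b": 0.8}  # per-source weighting
BLOCKED_TERMS = {"example_disallowed_topic"}                   # opaque exclusion list

def curate(documents):
    """Keep only documents from approved sources, weighted by an editorial choice."""
    kept = []
    for doc in documents:
        weight = ALLOWED_SOURCES.get(doc["source"], 0.0)  # unknown sources are dropped
        if weight == 0.0:
            continue
        if any(term in doc["text"].lower() for term in BLOCKED_TERMS):
            continue                                      # silently filtered out
        kept.append({**doc, "sampling_weight": weight})
    return kept

corpus = [
    {"source": "encyclopedia_a", "text": "A neutral reference article."},
    {"source": "unlisted_blog",  "text": "A perspective that never makes it in."},
]
print(curate(corpus))
```

Nothing in the user-facing product reveals which sources were weighted up, which were dropped, or why.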
The second layer is reinforcement learning through human and AI feedback, the process that transformed LLMs from unpredictable text generators into usable “assistants.” During this “post-training” stage of a model’s development, human reviewers rate outputs to guide the system toward desired behaviors, like helpfulness or politeness. For now, these human evaluations remain a major, largely invisible, part of the AI industry. But they are increasingly being replaced by specialized AI “teachers” that are supposed to align the core model with predefined principles that have been encoded in a “constitution.”
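A rough sketch of the idea, under the assumption that an AI “teacher” scores candidate answers against written principles; the constitution text and the scoring stub below are hypothetical stand-ins for a real judge or reward model.

```python
# Illustrative sketch: preference data generated by an AI "teacher" that scores
# candidate answers against a written "constitution". The principles and the
# judge() stub are hypothetical placeholders, not any vendor's actual setup.

CONSTITUTION = [
    "Be helpful and polite.",
    "Avoid politically sensitive judgements.",  # a principle like this quietly shapes behavior
]

def judge(answer: str, principle: str) -> float:
    """Stub for an AI judge; a real system would query another model here."""
    return float(len(answer))  # placeholder score

def pick_preferred(candidates):
    """Rank candidate answers by their total score under the constitution."""
    scored = [(sum(judge(c, p) for p in CONSTITUTION), c) for c in candidates]
    scored.sort(reverse=True)
    return scored[0][1]  # the winner becomes a training signal for the core model

print(pick_preferred(["Short reply.", "A longer, more 'aligned' reply."]))
```

Whoever writes the principles, and whoever builds the judge, decides what “aligned” means.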
The third layer is Web search. When chatbots search online or access digital databases, retrieval-augmented-generation (RAG) systems determine which pieces of information to feed into the model’s response. This function mirrors that of traditional search engines, which prioritize certain sources over others.
As with search engines, the introduction of advertisements into chatbot responses, which OpenAI has announced for ChatGPT in 2026, would raise additional concerns about objectivity.
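A minimal sketch of how a retrieval step decides which sources ever reach the model, and how easily a commercial thumb could be put on that scale; the scoring logic and the “sponsored_boost” parameter are hypothetical, included only for illustration.

```python
# Illustrative sketch of retrieval-augmented generation (RAG): the ranking
# function, not the user, decides which sources are fed to the model. The
# "sponsored_boost" parameter is hypothetical, showing how commercial weighting
# could be slipped into the ranking.

def retrieve(query, index, k=3, sponsored_boost=0.0):
    results = []
    for doc in index:
        relevance = len(set(query.lower().split()) & set(doc["text"].lower().split()))
        score = relevance + (sponsored_boost if doc.get("sponsored") else 0.0)
        results.append((score, doc))
    results.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in results[:k]]  # only these snippets shape the answer

index = [
    {"text": "Independent analysis of the topic.", "sponsored": False},
    {"text": "Advertiser content about the topic.", "sponsored": True},
]
print(retrieve("the topic", index, k=1, sponsored_boost=2.0))
```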
The fourth layer comprises system prompts: standing instructions injected into every request at the moment a chatbot generates an answer, which allow platforms to alter its behavior without retraining the model. For example, because Grok’s system prompt was made public last year, we know that it includes directives such as “do not shy away from making claims which are politically incorrect.” ChatGPT, Claude, and Gemini also use system prompts, but theirs remain secret.
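In practice, the mechanism is simple: a hidden instruction is prepended to the user’s question on every request. The sketch below is a generic illustration using a stubbed model call; the prompt text and the call_model function are hypothetical, not any particular vendor’s wording or API.

```python
# Illustrative sketch: a hidden system prompt is prepended to every request
# before the user's question, changing the model's behavior without any
# retraining. The prompt text and call_model() stub are hypothetical.

SYSTEM_PROMPT = "You are an assistant. Frame answers according to house policy X."

def call_model(messages):
    """Stub standing in for a real chat-completion API call."""
    return f"[answer conditioned on {len(messages)} messages, including the hidden one]"

def answer(user_question: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},  # invisible to the user
        {"role": "user", "content": user_question},
    ]
    return call_model(messages)

print(answer("Summarize today's political news."))
```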
The final layer is safety filters. Before a query reaches the model, input filters determine whether it is “acceptable.” Similarly, after the model generates a response, output filters can modify, censor, or sanitize content before you see it. While platforms have legitimate reasons to block certain queries (like those seeking instructions on how to make a bomb), the opacity of these filters leaves open questions: model developers could build the infrastructure for systematic censorship, and we would not know it. Chinese chatbots’ “safety” filters, for example, already censor all references to the Tiananmen Square massacre.
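A minimal sketch of how such filtering can wrap a model on both sides; the blocked topics and the redaction rule are hypothetical examples, not any real platform’s policy.

```python
# Illustrative sketch: input and output filters wrapping the model. The blocked
# topics and redaction rule are hypothetical, showing how opaque filtering can
# sit in front of and behind a model without the user ever knowing.

BLOCKED_INPUTS = {"how to make a bomb"}
REDACTED_TOPICS = {"sensitive historical event"}

def model(prompt: str) -> str:
    return f"Model answer about: {prompt}"  # stub for the underlying LLM

def safe_chat(prompt: str) -> str:
    if any(blocked in prompt.lower() for blocked in BLOCKED_INPUTS):
        return "I can't help with that."    # input filter: query never reaches the model
    response = model(prompt)
    for topic in REDACTED_TOPICS:
        if topic in response.lower():
            response = "I don't have information on that topic."  # output filter: silent rewrite
    return response

print(safe_chat("Tell me about the sensitive historical event."))
```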
Political and corporate interests are already shaping this algorithmic influence stack, just as chatbots are being deployed on a global scale. Following US President Donald Trump’s second inauguration, Apple updated its AI training instructions to avoid labeling MAGA supporters as “radical” or “extreme.”
Last summer, Reuters discovered that Meta had updated its internal AI guidelines to loosen safeguards preventing its chatbots from making racist statements or engaging in “flirty” behavior with minors, among other things.
In May last year, Grok began amplifying unsubstantiated and out-of-context claims of “white genocide” in South Africa (Musk himself is a white South African). While the company blamed “unauthorized modifications,” such “bugs” are common, and they all seem to be ideologically consistent with Musk’s own views.
Political manipulation through chatbots has already proven to be effective. A Nature study last year showed that chatbots trained to argue for a specific candidate could sway moderate and undecided voters (the cohorts that decide most elections) with remarkable ease.
Unlike authoritarian systems that exert explicit control over information, democracies depend on a plurality of sources and transparent, accountable information ecosystems.
To allow centralized, unaccountable power over AI infrastructure is to invite techno-authoritarian drift, because it is easy to see how each layer of the algorithmic influence stack can be instrumentalized to amplify or suppress certain views without the need for overt censorship.
In December last year, the European Commission fined X 120 million euros (US$138 million) for “breaching its transparency obligations under the Digital Services Act.” Predictably, X and its defenders framed the move as an attack on free speech. However, transparency is central to the defense of free expression. Without it, we cannot know who is being censored or what influences are being brought to bear on the media we all consume.
The rise of social media taught us what happens when accountability lags behind adoption. We cannot afford to repeat the same mistakes with systems that hold even greater power over public knowledge.
Marc Faddoul is director and co-founder of AI Forensics.
Copyright: Project Syndicate