Artificial intelligence (AI) is revolutionizing the gathering, processing and dissemination of information. The outcome of this revolution will depend on our technological choices. To ensure that AI supports the right to information, we at Reporters Without Borders think that ethics must govern technological innovation in the news and information media.
AI is radically transforming the world of journalism. How can we ensure information integrity when most Web content will be AI-generated? How do we maintain editorial independence when opaque language models, driven by private-sector interests or arbitrary criteria, are used by newsrooms? How can we prevent the fragmentation of the information ecosystem into numerous streams fueled by chatbots?
Predicting the full extent of AI’s impacts on the media is a challenging task. Yet, one thing is clear: Innovation per se does not automatically lead to progress. It must be accompanied by sensible regulation and ethical guardrails to truly benefit humanity.
History offers numerous examples, such as the ban on human cloning, nuclear non-proliferation treaties and drug safety controls, where technological development has been responsibly curtailed, regulated or directed in the name of ethics. Likewise, in journalism, innovation should be governed by clear ethical rules. This is crucial to protect the right to information, which underpins our fundamental freedoms of opinion and expression.
In the summer of last year, Reporters Without Borders convened an international commission to draft what became the first global ethical reference for media in the AI era. The commission brought together 32 prominent figures from 20 countries, specialists in journalism or AI. It was chaired by none other than Maria Ressa, winner of the 2021 Nobel Peace Prize, who embodies both the challenges of press freedom and a commitment to addressing technological upheavals (she denounced the “invisible atomic bomb” of digital technology from the podium in Oslo).
The goal was clear: Establish a set of fundamental ethical principles to protect information integrity in the AI era, as these technologies transform the media industry. After five months of meetings, 700 comments and an international consultation, the discussions revealed consensus and differences. Aligning the views of journalism defense non-governmental organizations, media organizations, investigative journalism consortia and a major journalists’ federation was challenging — but an unprecedented alliance gathered around this digital table.
In response to the upheavals caused by AI in the information arena, the charter that was published in Paris in November last year outlines 10 essential principles to ensure information integrity and preserve journalism’s social function. It is crucial that the international community cooperates to ensure that AI systems uphold human rights and democracy, but this does not absolve journalism of particular ethical and professional responsibilities in using these technologies.
Of the charter’s core principles, we will mention just four.
First, ethics must guide technological choices in the media. The pace of adopting one of history’s most transformative technologies should not be dictated by the pressure of economic competition. Polls suggest that an overwhelming majority of citizens would prefer a slower, safer deployment of AI. Let us listen to them.
Second, human judgement must remain central in editorial decisions. Generative AI systems are more than mere tools; they acquire a form of agency and interfere with our intentions. Though lacking will, AI is full of certainties, reflecting its data and training process. Each automated decision is a missed opportunity for human judgement. We aspire to augmented journalism, not diminished human judgement.
Third, the media must help society to confidently discern authentic from synthetic content. Generative AI, more than any past technology, is capable of crafting the illusion of facts and the artifice of evidence, which gives the media a special responsibility to help society tell fact from fiction. Trust is built, not decreed. Source verification, evidence authentication, content traceability and editorial responsibility are crucial in the AI era. To avoid contributing to general confusion, the media must maintain a clear distinction between authentic material (captured in the real world) and synthetic material (material generated or significantly altered by AI).
Finally, in their negotiations with technology companies, media outlets and rights holders should prioritize journalism’s societal mission, placing public interest above private profit. Chatbots are likely to become a primary method for accessing news in the near future. It is therefore imperative to ensure that their owners provide fair compensation to content creators and rights holders.
Additionally, solid guarantees must be demanded concerning the quality, pluralism and reliability of the information disseminated. This becomes even more crucial as media entities start to form their initial partnerships with AI providers and engage in legal battles with tech companies over copyright infringement.
The media stand at a crossroads. Used ethically and discerningly, AI offers unprecedented opportunities to enrich our understanding of a complex world. As deepfakes potentially amplify disinformation and erode public trust in all audiovisual content, and language models promise increased productivity at the expense of information integrity, this charter affirms an approach where human discernment and journalistic ethics are the pillars of journalism’s social trust function.
In a noisy world, there are only two ways to gain attention: extort it or earn it. Social media, aided by recommendation algorithms, chose the former, with known consequences in terms of misinformation and the polarization of opinion. In a field where anything goes, quality journalism has no chance unless it abandons its defining traits: the pursuit of factual truth, nuance and impartiality.
The media must therefore earn our attention by focusing their practice on trust, authenticity and human experience.
We encourage media and information professionals to embrace the principles of the Paris Charter on AI and Journalism.
Charlie Beckett, professor at the London School of Economics (LSE) and director of the LSE Journalism and AI Project.
Christophe Deloire, secretary-general at Reporters Without Borders and chair of the Forum on Information and Democracy.
Gary Marcus, founder and CEO of the Center for the Advancement of Trustworthy AI and professor emeritus at New York University.
Maria Ressa, 2021 Nobel Peace Prize laureate, journalist and cofounder of Rappler media, chair of the Committee of the Paris Charter on AI and Journalism.
Stuart Russell, distinguished professor of computer science at the University of California, Berkeley, and founder of the Center for Human-Compatible AI.
Anya Schiffrin, senior lecturer in the discipline of international and public affairs, Columbia University School of International and Public Affairs.