Artificial intelligence (AI) is revolutionizing the gathering, processing and dissemination of information. The outcome of this revolution will depend on our technological choices. To ensure that AI supports the right to information, we at Reporters without Borders think that ethics must govern technological innovation in the news and information media.
AI is radically transforming the world of journalism. How can we ensure information integrity when most Web content will be AI-generated? How do we maintain editorial independence when opaque language models, driven by private-sector interests or arbitrary criteria, are used by newsrooms? How can we prevent the fragmentation of the information ecosystem into numerous streams fueled by chatbots?
Predicting the full extent of AI’s impacts on the media is a challenging task. Yet, one thing is clear: Innovation per se does not automatically lead to progress. It must be accompanied by sensible regulation and ethical guardrails to truly benefit humanity.
History offers numerous examples, such as the ban on human cloning, nuclear non-proliferation treaties and drug safety controls, where technological development has been responsibly curtailed, regulated or directed in the name of ethics. Likewise, in journalism, innovation should be governed by clear ethical rules. This is crucial to protect the right to information, which underpins our fundamental freedoms of opinion and expression.
In the summer of last year, Reporters Without Borders convened an international commission to draft what became the first global ethical reference for media in the AI era. The commission brought together 32 prominent figures from 20 countries, all specialists in journalism or AI. It was chaired by none other than Maria Ressa, winner of the 2021 Nobel Peace Prize, who embodies both the challenges of press freedom and a commitment to addressing technological upheavals (she denounced the "invisible atomic bomb" of digital technology from the podium in Oslo).
The goal was clear: Establish a set of fundamental ethical principles to protect information integrity in the AI era, as these technologies transform the media industry. After five months of meetings, 700 comments and an international consultation, the discussions revealed both consensus and differences. Aligning the views of press freedom non-governmental organizations, media outlets, investigative journalism consortia and a major journalists' federation was challenging — but an unprecedented alliance gathered around this digital table.
In response to the upheavals caused by AI in the information arena, the charter that was published in Paris in November last year outlines 10 essential principles to ensure information integrity and preserve journalism’s social function. It is crucial that the international community cooperates to ensure that AI systems uphold human rights and democracy, but this does not absolve journalism of particular ethical and professional responsibilities in using these technologies.
Of the charter’s core principles, we will mention just four.
First, ethics must guide technological choices in the media. The pace of adopting one of history’s most transformative technologies should not be dictated by the pressure of economic competition. Polls suggest that an overwhelming majority of citizens would prefer a slower, safer deployment of AI. Let us listen to them.
Second, human judgement must remain central in editorial decisions. Generative AI systems are more than mere tools; they acquire a form of agency and interfere with our intentions. Though lacking will, AI is full of certainties, reflecting its data and training process. Each automated decision is a missed opportunity for human judgement. We aspire to augmented journalism, not diminished human judgement.
Third, the media must help society confidently discern authentic from synthetic content. Generative AI, more than any past technology, is capable of crafting the illusion of facts and the artifice of evidence, and the media have a special responsibility to help society tell fact from fiction. Trust is built, not decreed. Source verification, evidence authentication, content traceability and editorial responsibility are crucial in the AI era. To avoid contributing to general confusion, the media must maintain a clear distinction between authentic material (captured in the real world) and synthetic material (generated or significantly altered by AI).
Finally, in their negotiations with technology companies, media outlets and rights holders should prioritize journalism’s societal mission, placing public interest above private profit. Chatbots are likely to become a primary method for accessing news in the near future. It is therefore imperative to ensure that their owners provide fair compensation to content creators and rights holders.
Additionally, solid guarantees must be demanded concerning the quality, pluralism and reliability of the information disseminated. This becomes even more crucial as media entities start to form their initial partnerships with AI providers and engage in legal battles with tech companies over copyright infringement.
The media stand at a crossroads. Used ethically and discerningly, AI offers unprecedented opportunities to enrich our understanding of a complex world. As deepfakes potentially amplify disinformation and erode public trust in all audiovisual content, and language models promise increased productivity at the expense of information integrity, this charter affirms an approach where human discernment and journalistic ethics are the pillars of journalism’s social trust function.
In a noisy world, there are only two ways to gain attention: extort it or earn it. Social media, aided by recommendation algorithms, chose the former, with known consequences in terms of misinformation and the polarization of opinion. In a field where anything goes, quality journalism has no chance unless it abandons its defining traits: the pursuit of factual truth, nuance and impartiality.
The media must therefore earn our attention by focusing their practice on trust, authenticity and human experience.
We encourage media and information professionals to embrace the principles of the Paris Charter on AI and Journalism.
Charlie Beckett, professor at the London School of Economics (LSE) and director of the LSE Journalism and AI Project.
Christophe Deloire, secretary-general at Reporters Without Borders and chair of the Forum on Information and Democracy.
Gary Marcus, founder and CEO of the Center for the Advancement of Trustworthy AI and professor emeritus at New York University.
Maria Ressa, 2021 Nobel Peace Prize laureate, journalist and cofounder of Rappler media, chair of the Committee of the Paris Charter on AI and Journalism.
Stuart Russell, distinguished professor of computer science at the University of California, Berkeley and founder of the Center for Human-Compatible AI.
Anya Schiffrin, senior lecturer in discipline of international and public affairs, Columbia University School of International and Public Affairs.