Artificial intelligence (AI) is revolutionizing the gathering, processing and dissemination of information. The outcome of this revolution will depend on our technological choices. To ensure that AI supports the right to information, we at Reporters Without Borders think that ethics must govern technological innovation in the news and information media.
AI is radically transforming the world of journalism. How can we ensure information integrity when most Web content will be AI-generated? How do we maintain editorial independence when opaque language models, driven by private-sector interests or arbitrary criteria, are used by newsrooms? How can we prevent the fragmentation of the information ecosystem into numerous streams fueled by chatbots?
Predicting the full extent of AI’s impacts on the media is a challenging task. Yet one thing is clear: Innovation does not automatically lead to progress. It must be accompanied by sensible regulation and ethical guardrails to truly benefit humanity.
History offers numerous examples, such as the ban on human cloning, nuclear non-proliferation treaties and drug safety controls, where technological development has been responsibly curtailed, regulated or directed in the name of ethics. Likewise, in journalism, innovation should be governed by clear ethical rules. This is crucial to protect the right to information, which underpins our fundamental freedoms of opinion and expression.
In the summer of last year, Reporters Without Borders convened an international commission to draft what became the first global ethical reference for media in the AI era. The commission brought together 32 prominent figures from 20 countries, specialists in journalism or AI. It was chaired by none other than Maria Ressa, winner of the 2021 Nobel Peace Prize, who embodies both the challenges of press freedom and a commitment to addressing technological upheavals (she denounced the “invisible atomic bomb” of digital technology from the podium in Oslo).
The goal was clear: Establish a set of fundamental ethical principles to protect information integrity in the AI era, as these technologies transform the media industry. After five months of meetings, 700 comments and an international consultation, the discussions revealed consensus and differences. Aligning the views of non-governmental organizations that defend journalism, media organizations, investigative journalism consortia and a major journalists’ federation was challenging, but an unprecedented alliance gathered around this digital table.
In response to the upheavals caused by AI in the information arena, the charter that was published in Paris in November last year outlines 10 essential principles to ensure information integrity and preserve journalism’s social function. It is crucial that the international community cooperates to ensure that AI systems uphold human rights and democracy, but this does not absolve journalism of particular ethical and professional responsibilities in using these technologies.
Of the charter’s core principles, we will mention just four.
First, ethics must guide technological choices in the media. The pace of adopting one of history’s most transformative technologies should not be dictated by the pressure of economic competition. Polls suggest that an overwhelming majority of citizens would prefer a slower, safer deployment of AI. Let us listen to them.
Second, human judgement must remain central in editorial decisions. Generative AI systems are more than mere tools; they acquire a form of agency and interfere with our intentions. Though lacking will, AI is full of certainties, reflecting its data and training process. Each automated decision is a missed opportunity for human judgement. We aspire to augmented journalism, not diminished human judgement.
Third, the media must help society confidently discern authentic content from synthetic content. Generative AI, more than any past technology, is capable of crafting the illusion of facts and the artifice of evidence, which gives the media a special responsibility to help the public tell fact from fiction. Trust is built, not decreed. Source verification, evidence authentication, content traceability and editorial responsibility are crucial in the AI era. To avoid contributing to general confusion, the media must maintain a clear distinction between authentic material (captured in the real world) and synthetic material (material generated or significantly altered by AI).
Finally, in their negotiations with technology companies, media outlets and rights holders should prioritize journalism’s societal mission, placing public interest above private profit. Chatbots are likely to become a primary method for accessing news in the near future. It is therefore imperative to ensure that their owners provide fair compensation to content creators and rights holders.
Additionally, solid guarantees must be demanded concerning the quality, pluralism and reliability of the information disseminated. This becomes even more crucial as media entities start to form their initial partnerships with AI providers and engage in legal battles with tech companies over copyright infringement.
The media stand at a crossroads. Used ethically and discerningly, AI offers unprecedented opportunities to enrich our understanding of a complex world. As deepfakes potentially amplify disinformation and erode public trust in all audiovisual content, and language models promise increased productivity at the expense of information integrity, this charter affirms an approach where human discernment and journalistic ethics are the pillars of journalism’s social trust function.
In a noisy world, there are only two ways to gain attention: extort it or earn it. Social media, aided by recommendation algorithms, chose the former, with well-known consequences in terms of misinformation and the polarization of opinion. In a contest where anything goes, quality journalism cannot win without abandoning its defining traits, the pursuit of factual truth, nuance and impartiality, and that is a price it must not pay.
The media must therefore earn our attention by focusing their practice on trust, authenticity and human experience.
We encourage media and information professionals to embrace the principles of the Paris Charter on AI and Journalism.
Charlie Beckett, professor at the London School of Economics (LSE) and director of the LSE Journalism and AI Project.
Christophe Deloire, secretary-general at Reporters Without Borders and chair of the Forum on Information and Democracy.
Gary Marcus, founder and CEO of the Center for the Advancement of Trustworthy AI and professor emeritus at New York University.
Maria Ressa, 2021 Nobel Peace Prize laureate, journalist and cofounder of Rappler media, chair of the Committee of the Paris Charter on AI and Journalism.
Stuart Russell, distinguished professor of computer science at the University of California, Berkeley, and founder of the Center for Human-Compatible AI.
Anya Schiffrin, senior lecturer in discipline of international and public affairs, Columbia University School of International and Public Affairs.