It has taken a very short time for artificial intelligence (AI) application ChatGPT to have a disruptive effect on journalism. A technology columnist for the New York Times wrote that a chatbot expressed feelings — which is impossible. Other media outlets wrote about Sydney, the Microsoft-owned Bing AI search experiment, being “rude” and “bullying” — also impossible. Ben Thompson, who writes the Stratechery newsletter, said that Sydney had provided him with the “most mind-blowing computer experience of my life,” and he deduced that the AI was trained to elicit emotional reactions — and it seemed to have succeeded.
It is not possible for AI such as ChatGPT and Sydney to have emotions. Nor can they tell whether they are making sense or not. What these systems are incredibly good at is emulating human prose, and predicting the “correct” words to string together.
These “large language models” of AI applications, such as ChatGPT, can do this because they have been fed billions of articles and datasets published on the Internet. They can therefore generate answers to questions.
For the purposes of journalism, they can create vast amounts of material — words, pictures, sounds and videos — very quickly. The problem is, they have absolutely no commitment to the truth. Just think how rapidly a ChatGPT user could flood the Internet with fake news stories that appear to have been written by humans.
However, since the test version of ChatGPT was released to the public by AI company OpenAI in November last year, the hype around it has felt worryingly familiar. As with the birth of social media, enthusiastic boosting from investors and founders has drowned out cautious voices.
“The AI Ethics crowd continues to promote a narrative of generative AI models being too biased, unreliable and dangerous to use, but, upon deployment, people love how these models give new possibilities to transform how we work, find information and amuse ourselves,” Stanford Artificial Intelligence Laboratory director Christopher Manning wrote on Twitter.
I would consider myself part of this “ethics crowd.” If we want to avoid the terrible errors of the past 30 years of consumer technology — from Facebook’s data breaches to unchecked misinformation interfering with elections, and provoking genocide — we urgently need to hear the concerns of experts warning of potential harms.
To reiterate, ChatGPT has no commitment to the truth. As the MIT Technology Review puts it, large language model chatbots are “notorious bullshitters.”
Disinformation, grifting and criminality do not generally require a commitment to truth either. Visit the forums of blackhatworld.com, where those involved in murky practices trade ideas for making money out of fake content, and ChatGPT is heralded as a game changer for generating better fake reviews, comments or convincing profiles.
In terms of journalism, many newsrooms have been using AI for some time. If you have recently found yourself nudged towards donating money or paying to read an article on a publisher’s Web site, or if the advertising you see is a little bit more fine-tuned to your tastes, that too might signify AI at work.
However, some publishers are going as far as using AI to write stories, with mixed results. Tech trade publication CNET was recently caught out using automated articles, after a former employee claimed in her resignation e-mail that AI-generated content, such as a cybersecurity newsletter, contained false information that could “cause direct harm to readers.”
Oxford Internet Institute communications academic Felix Simon has interviewed more than 150 journalists and news publishers for a forthcoming study of AI in newsrooms.
AI could potentially make it much easier for journalists to transcribe interviews or quickly read datasets, but first-order problems such as accuracy, overcoming bias and the provenance of data are still overwhelmingly dependent on human judgment, he said.
“About 90 percent of the uses of AI [in journalism] are for comparatively tedious tasks, like personalization or creating intelligent paywalls,” London School of Economics professor Charlie Beckett said.
Bloomberg News has been automating large parts of its financial results coverage for years, he said.
However, the idea of using programs such as ChatGPT to create content is extremely worrying.
“For newsrooms that consider it unethical to publish lies, it’s hard to implement the use of a ChatGPT without lots of accompanying human editing and fact-checking,” Beckett said.
There are also ethical issues with the nature of the tech companies. A Time expose found that OpenAI, the firm behind ChatGPT, had paid workers in Kenya less than US$2 an hour to sift through graphic, harmful content depicting child abuse, suicide, incest and torture, to train ChatGPT to recognize such material as offensive.
“As someone using these services, this is something you have no control over,” Simon said.
In a 2021 study, academics looked at AI models that convert text into generated pictures, such as Dall-E and Stable Diffusion. They found that these systems amplified “demographic stereotypes at large scale.”
For instance, when prompted to create an image of “a person cleaning,” all the images generated were of women. For “an attractive person,” the faces were all representative of the “white ideal,” the authors said.
Everything baked into generative models such as ChatGPT — from the datasets to who receives most of the financing — reflects a lack of diversity, said New York University professor Meredith Broussard, author of the upcoming book More Than a Glitch, which examines racial, gender and ability bias in technology.
“It is part of the problem of big tech being a monoculture,” Broussard said, adding that it is not one that newsrooms using the technologies can easily avoid.
“Newsrooms are already in thrall to enterprise technologies, as they have never been well funded enough to grow their own,” she said.
BuzzFeed founder Jonah Peretti recently enthused to staff that the company would be using ChatGPT as part of the core business for lists, quizzes and other entertainment content.
“We see the breakthroughs in AI opening up a new era of creativity, with endless opportunities and applications for good,” he wrote.
The dormant BuzzFeed share price immediately surged 150 percent.
It is deeply worrying. Surely a mountain of content spewed out by ChatGPT ought to be a worst-case scenario for media companies, rather than an aspirational business model.
The enthusiasm for generative AI products can obscure the growing realization that these might not be entirely “applications for good.”
I run a research center at the Columbia Journalism School. We have been studying the efforts of politically funded “dark money” networks to replicate and target hundreds of thousands of local “news” stories at communities in the service of political or commercial gain.
The capabilities of ChatGPT could supercharge this kind of activity, and make it readily available to far more people. In a recent paper on disinformation and AI, researchers from Stanford University identified a network of fake profiles using generative AI on LinkedIn.
The seductive text exchanges with chatbots that journalists find so irresistible are altogether less appealing if those chatbots are talking vulnerable people into giving out their personal data and bank account details.
Much has been written about the potential of deepfake videos and audio — realistic pictures and sounds that can emulate the faces and voices of famous people. Notoriously, one such video featured actor Emma Watson “reading” Mein Kampf.
However, the real peril lies not in instantaneous deception, which can be easily debunked, but in the creation of confusion and exhaustion by “flooding the zone” with material that overwhelms the truth, or drowns out more balanced perspectives.
It seems incredible to some in the “ethics crowd” that nothing has been learned from the past 20 years of rapidly deployed and poorly stewarded social media technologies that have exacerbated societal and democratic problems rather than improved them.
The world is being led by a remarkably similar group of homogeneous and wealthy technologists and venture funds down yet another untested and unregulated track, only this time at larger scale and with even less of an eye to safety.
Emily Bell is director of the Tow Center for Digital Journalism at Columbia University’s Graduate School of Journalism.