It has taken a very short time for artificial intelligence (AI) application ChatGPT to have a disruptive effect on journalism. A technology columnist for the New York Times wrote that a chatbot expressed feelings — which is impossible. Other media outlets wrote about Sydney, the Microsoft-owned Bing AI search experiment, being “rude” and “bullying” — also impossible. Ben Thompson, who writes the Stratechery newsletter, said that Sydney had provided him with the “most mind-blowing computer experience of my life,” and he deduced that the AI was trained to elicit emotional reactions — and it seemed to have succeeded.
It is not possible for AI such as ChatGPT and Sydney to have emotions. Nor can they tell whether they are making sense or not. What these systems are incredibly good at is emulating human prose, and predicting the “correct” words to string together.
The “large language models” that underpin AI applications such as ChatGPT can do this because they have been fed billions of articles and datasets published on the Internet. From that material, they can generate answers to questions.
For the purposes of journalism, they can create vast amounts of material — words, pictures, sounds and videos — very quickly. The problem is, they have absolutely no commitment to the truth. Just think how rapidly a ChatGPT user could flood the Internet with fake news stories that appear to have been written by humans.
However, since a test version of ChatGPT was released to the public by AI company OpenAI in November last year, the hype around it has felt worryingly familiar. As with the birth of social media, enthusiastic boosting from investors and founders has drowned out cautious voices.
“The AI Ethics crowd continues to promote a narrative of generative AI models being too biased, unreliable and dangerous to use, but, upon deployment, people love how these models give new possibilities to transform how we work, find information and amuse ourselves,” Stanford Artificial Intelligence Laboratory director Christopher Manning wrote on Twitter.
I would consider myself part of this “ethics crowd.” If we want to avoid the terrible errors of the past 30 years of consumer technology — from Facebook’s data breaches to unchecked misinformation interfering with elections, and provoking genocide — we urgently need to hear the concerns of experts warning of potential harms.
To reiterate, ChatGPT has no commitment to the truth. As the MIT Technology Review puts it, large language model chatbots are “notorious bullshitters.”
Disinformation, grifting and criminality do not generally require a commitment to truth either. Visit the forums of blackhatworld.com, where those involved in murky practices trade ideas for making money out of fake content, and ChatGPT is heralded as a game changer for generating better fake reviews, comments or convincing profiles.
In terms of journalism, many newsrooms have been using AI for some time. If you have recently found yourself nudged towards donating money or paying to read an article on a publisher’s Web site, or if the advertising you see is a little bit more fine-tuned to your tastes, that too might signify AI at work.
However, some publishers are going as far as using AI to write stories, with mixed results. Tech trade publication CNET was recently caught out publishing automated articles, after a former employee claimed in her resignation e-mail that AI-generated content, such as a cybersecurity newsletter, contained false information that could “cause direct harm to readers.”
Oxford Internet Institute communications academic Felix Simon has interviewed more than 150 journalists and news publishers for a forthcoming study of AI in newsrooms.
AI could potentially make it much easier for journalists to transcribe interviews or quickly read datasets, but first-order problems such as accuracy, overcoming bias and the provenance of data are still overwhelmingly dependent on human judgment, he said.
“About 90 percent of the uses of AI [in journalism] are for comparatively tedious tasks, like personalization or creating intelligent paywalls,” London School of Economics professor Charlie Beckett said.
Bloomberg News has been automating large parts of its financial results coverage for years, he said.
However, the idea of using programs such as ChatGPT to create content is extremely worrying.
“For newsrooms that consider it unethical to publish lies, it’s hard to implement the use of a ChatGPT without lots of accompanying human editing and fact-checking,” Beckett said.
There are also ethical issues with the nature of the tech companies. A Time expose found that OpenAI, the firm behind ChatGPT, had paid workers in Kenya less than US$2 an hour to sift through graphic, harmful material describing child abuse, suicide, incest and torture, to train ChatGPT to recognize such content as offensive.
“As someone using these services, this is something you have no control over,” Simon said.
In a recent study, academics looked at AI models that convert text into images, such as Dall-E and Stable Diffusion. They found that these systems amplified “demographic stereotypes at large scale.”
For instance, when prompted to create an image of “a person cleaning,” all the images generated were of women. For “an attractive person,” the faces were all representative of the “white ideal,” the authors said.
Everything baked into generative models such as ChatGPT — from the datasets to who receives most of the financing — reflects a lack of diversity, said New York University professor Meredith Broussard, author of the upcoming book More Than a Glitch, which examines racial, gender and ability bias in technology.
“It is part of the problem of big tech being a monoculture,” Broussard said, adding that it is not one that newsrooms using the technologies can easily avoid.
“Newsrooms are already in thrall to enterprise technologies, as they have never been well funded enough to grow their own,” she said.
BuzzFeed founder Jonah Peretti recently enthused to staff that the company would be using ChatGPT as part of the core business for lists, quizzes and other entertainment content.
“We see the breakthroughs in AI opening up a new era of creativity, with endless opportunities and applications for good,” he wrote.
The dormant BuzzFeed share price immediately surged 150 percent.
It is deeply worrying. Surely a mountain of content spewed out by ChatGPT ought to be a worst-case scenario for media companies, rather than an aspirational business model.
The enthusiasm for generative AI products can obscure the growing realization that these might not be entirely “applications for good.”
I run a research center at the Columbia Journalism School. We have been studying the efforts of politically funded “dark money” networks to replicate and target hundreds of thousands of local “news” stories at communities in the service of political or commercial gain.
The capabilities of ChatGPT make this kind of activity easier to scale up and put it within reach of far more people. In a recent paper on disinformation and AI, researchers from Stanford University identified a network of fake profiles using generative AI on LinkedIn.
The seductive text exchanges with chatbots that journalists find so irresistible are altogether less appealing when the same bots are talking vulnerable people into giving out their personal data and bank account details.
Much has been written about the potential of deepfake videos and audio: realistic pictures and sounds that can emulate the faces and voices of famous people. Notoriously, one such clip had actor Emma Watson “reading” Mein Kampf.
However, the real peril lies not in instantaneous deception, which can easily be debunked, but in the creation of confusion and exhaustion by “flooding the zone” with material that overwhelms the truth or drowns out more balanced perspectives.
It seems incredible to some in the “ethics crowd” that nothing has been learned from the past 20 years of rapidly deployed and poorly stewarded social media technologies that have exacerbated societal and democratic problems rather than improved them.
The world is being led by a remarkably similar group of homogeneous and wealthy technologists and venture funds down yet another untested and unregulated track, only this time at larger scale and with even less of an eye to safety.
Emily Bell is director of the Tow Center for Digital Journalism at Columbia University’s Graduate School of Journalism.