Like so many sectors of the economy, the news industry is hurtling toward a future where artificial intelligence (AI) plays a major role — grappling with questions about how much the technology is used, what consumers should be told about it, and whether anything can be done for the journalists who would be left behind.
These issues were on the minds of reporters for the independent outlet ProPublica as they walked picket lines last month. They are inching toward a potential strike, in what is believed to be the first job action in the news business where how to deal with AI is the chief sticking point.
Few expect this dispute will be the last.
AI has undeniably helped journalists, simplifying complex tasks and saving time, particularly with data-focused stories. News organizations are using it to help sift through the Epstein files. AI suggests headlines, summarizes stories. Transcription technology has largely eliminated the need for a human to type up interviews. These days, even a simple Google search frequently involves AI.
Yet rushing to see how AI can help a financially troubled industry has resulted in several cases of publications owning up to errors.
Within the past year, Bloomberg issued several corrections for mistakes in AI-generated news summaries. Business Insider and Wired were forced to remove articles by a fake author named Margaux Blanchard. The Los Angeles Times had trouble with AI and opinion pieces. Ars Technica, a publication that has frequently reported on the risks of overreliance on AI tools, said AI fabricated quotes in one of its stories, then embarrassed itself further by failing to follow its own policy of telling readers when the technology is used.
The ProPublica dispute is noteworthy for how it touches on issues that frequently spark debate. The union representing ProPublica’s journalists, negotiating its first contract with the outlet known for investigative reporting, says it wants commitments that mirror those sought elsewhere in the industry about disclosure and the role of humans in the use of AI.
Along with holding informational pickets, union members pledged overwhelmingly that they would be willing to strike without a satisfactory agreement, New York Guild spokeswoman Jen Sheehan said.
“It feels to me pretty monumental when we think about the trajectory of AI and journalism,” said Alex Mahadevan, an expert on the topic at the Poynter Institute journalism think tank.
ProPublica has rejected its requests, the union said.
Insight into why can be found in an essay, “Something Big is Happening,” that circulated widely last month. Author and investor Matt Shumer, who said he has spent six years building an AI start-up, wrote that the technology is advancing so quickly that “if you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.”
Small wonder, then, that news executives are reluctant to put guarantees in writing that could quickly become outdated.
Rather than make promises that cannot be kept, ProPublica is exploring how technology can create more space for investigative reporting, company spokesman Tyson Evans said.
In the “unlikely event” of AI-related layoffs, ProPublica is proposing expanded severance packages for those affected, he said.
“We’re approaching AI with both curiosity and skepticism,” Evans said. “It would be a mistake to freeze editorial decisions in a contract that will last years.”
Fifty-seven of 283 contracts at US news organizations negotiated by the NewsGuild-USA contain language related to AI, said Jon Schleuss, president of the union that represents more journalists than any in the US.
The first such deals happened in 2023. He wants provisions in more contracts.
It would not be easy, judging by the reluctance of many outlets to be tied down. The organization Trusting News, which encourages news organizations to develop and make public their policies on AI use, estimates that less than half of US outlets have done so.
“I think it is becoming harder,” Schleuss said, “because too many newsrooms are being run by the greedy side of the organization and not by the journalism side of the organization.”
The New York guild is pushing for contracts that guarantee AI would not eliminate jobs. That is no surprise; unions exist to protect jobs. Schleuss characterized a proposal ensuring that an actual journalist is involved when AI is used as a way to prevent errors and help an outlet build trust with its readers.
“Humans are actually so much better at going out, finding the story, interviewing sources, bringing back the relevant pieces, asking the hard follow-up questions and putting that in a way that people can understand and see, whether it’s a news story or a video,” he said. “Humans are way better at doing that than AI ever will be.”
Apparently, not everyone in journalism agrees. Chris Quinn, editor of the Plain Dealer in Cleveland, Ohio, wrote this month of his disgust with a recent college graduate who turned down a job offer because the person had been taught that AI was bad for journalism.
Quinn’s newspaper has been sending some of its journalists out to cover stories by interviewing people, collecting quotes and information, then feeding it to a computer to write. While a human edits what the computer spits out, an integral part of the process — a reporter using his or her judgment about how to tell a story — has been taken out of reporters’ hands. Quinn defended it as the best use of limited resources.
Research shows that a vast majority of US consumers believe that it is very important that newsrooms tell the public when AI is used to write stories or edit photographs, said Benjamin Toff, director of the Minnesota Journalism Center at the University of Minnesota.
However, here is the rub: Such disclosure makes them trust the outlet’s stories less, not more.
A significant minority — 30 percent in a study Toff conducted last year — does not want AI used in journalism at all.
Telling a reader that AI was used is not as simple as it sounds.
“There are just so many, many uses of AI in journalism, from the very beginning of the reporting process to when you hit publish, that just broadly declaring that when AI is used in the newsgathering process that you have to disclose it, just seems like it is actually a disservice to the reader in some cases,” Mahadevan said.
Two lawmakers in New York state — the US’ publishing capital — introduced legislation last month requiring clear disclaimers when AI is used in published content. There is no immediate word on its chances for passage, but both sponsors are Democrats in a legislature controlled by that party.
Mahadevan believes it is fair to have policies that require human involvement — editing to prevent slip-ups, for example.
However, even these declarations are open to interpretation, he said.
If an outlet uses chatbots to answer reader questions, are they being edited by a human being?
“Speaking realistically, the newsroom of the future is going to look completely different than it does today,” Mahadevan said. “Which means people will lose jobs. There will be new jobs. So, I think it’s important that we are having these conversations right now, because audiences do not want a newsroom completely taken over by AI.”