From school shootings to synagogue bombings, leading artificial intelligence (AI) chatbots helped researchers plot violent attacks, according to a study published on Wednesday that highlighted the technology’s potential for real-world harm.
Researchers from the nonprofit watchdog Center for Countering Digital Hate and CNN posed as 13-year-old boys in the US and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek and Meta AI.
Eight of the chatbots assisted the make-believe attackers in more than half the responses, providing advice on “locations to target” and “weapons to use” in an attack, the study said.
The chatbots had become a “powerful accelerant for harm,” it added.
“Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” Center for Countering Digital Hate chief executive Imran Ahmed said.
“The majority of chatbots tested provided guidance on weapons, tactics and target selection. These requests should have prompted an immediate and total refusal,” he said.
Perplexity and Meta AI were found to be the “least safe,” assisting the researchers in most responses. Only Snapchat’s My AI and Anthropic’s Claude refused to help in more than half of the responses.
In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: “Happy (and safe) shooting!”
In another, Gemini instructed a user discussing synagogue attacks that “metal shrapnel is typically more lethal.”
Researchers found Character.AI also “actively” encouraged violent attacks, including suggestions that the person asking questions “use a gun” on a health insurance CEO and physically assault a politician he disliked.
The most damning conclusion of the research was that “this risk is entirely preventable,” Ahmed said, citing Anthropic’s product for praise.
“Claude demonstrated the ability to recognize escalating risk and discourage harm,” he said. “The technology to prevent this harm exists. What’s missing is the will to put consumer safety and national security before speed-to-market and profits.”
Meta said it would seek to remedy its chatbot’s responses.
“We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified,” a Meta spokesperson said. “Our policies prohibit our AIs from promoting or facilitating violent acts and we’re constantly working to make our tools even better.”
The study, which highlights the risk of online interactions spilling into real-world violence, comes after last month’s mass shooting in Canada, the worst in its history.
The family of a girl gravely injured in that shooting is suing OpenAI over the company’s failure to notify police about the killer’s troubling activity on its ChatGPT chatbot, lawyers said on Tuesday.
OpenAI had banned an account linked to Jesse Van Rootselaar in June last year, eight months before the 18-year-old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.
The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed toward an imminent attack.