Canada has declared a chemical widely used in food packaging a toxic substance and will move to ban the sale in Canada of plastic baby bottles containing bisphenol A.
The toxic classification, issued in Saturday’s Canada Gazette, makes Canada the first country to classify the chemical, which is commonly used in the lining of food cans, eyeglass lenses and hundreds of household items, as risky.
The federal ministries of Health and the Environment said on Saturday that bisphenol A may be entering the environment in a quantity or under conditions that may pose a danger to Canadians.
Canadian Health Minister Tony Clement said a report on bisphenol A found that the chemical endangers people, particularly newborns and infants, citing concerns that the chemical in polycarbonate products and epoxy linings can migrate into food and beverages.
Newborns and infants are particularly vulnerable because of their frequent use of baby bottles that often contain the chemical, which is used to harden plastic and make it shatterproof.
The health and environment departments said on Saturday that the government plans to restrict the importation, sale and advertising of bottles made with bisphenol A, known as BPA.
“Many Canadians ... have expressed their concern to me about the risks of bisphenol A in baby bottles,” said Canadian Environment Minister John Baird in a statement. “Today’s confirmation of our ban on BPA in baby bottles proves that our government did the right thing in taking action to protect the health and environment for all Canadians.”
The government is also proposing “to allow the lowest amount of BPA as reasonably achievable” in infant formula cans and in foods generally.