A Japanese ruling party official has called into question a government plan to let people who fled from the 2011 disaster at the Fukushima Dai-ichi nuclear power plant go home, saying the government should identify which areas will never be habitable.
The Fukushima plant north of Tokyo was battered by an earthquake and tsunami in March 2011, leading to meltdowns and explosions that sent plumes of radiation into the air and sea.
About 150,000 people were evacuated from the surrounding areas and a large area of land is off-limits because of radiation.
Despite this, the Japanese government is hoping to eventually allow everyone to go home.
However, Liberal Democratic Party Secretary-General Shigeru Ishiba said it was inevitable that some people would never get to go back.
“The time will definitely come that someone must say: ‘They cannot live in this area, but they would be compensated,’” Ishiba was quoted as saying by the Asahi newspaper.
The question of letting people go home is politically sensitive for the government, which would not want to have to tell thousands of residents that they cannot go back.
The plant’s operator, Tokyo Electric Power Co, has been struggling to stop radiation leaks from the wrecked plant.
It is now preparing to remove 400 tonnes of highly irradiated spent fuel from a damaged reactor building, a very dangerous operation that has never been attempted before on this scale.
Ishiba also said authorities might have to relax limits for radiation exposure if anything was ever going to be done in terms of rebuilding the area.
“Unless we come up with [an] answer as to what to do with a measure for decontamination, [the] reconstruction of Fukushima won’t ever make progress,” Ishiba was quoted as saying.