The Chinese Communist Party (CCP) has warned of the risks posed by advances in artificial intelligence (AI) while calling for heightened national security measures.
A meeting headed by Chinese President and CCP General Secretary Xi Jinping (習近平) on Tuesday urged “dedicated efforts to safeguard political security, and improve the security governance of Internet data and artificial intelligence,” Xinhua news agency said.
Xi called at the meeting for “staying keenly aware of the complicated and challenging circumstances facing national security.”
China needs a “new pattern of development with a new security architecture,” Xinhua reported Xi as saying.
The statements from Beijing followed a warning on Tuesday by scientists and tech industry leaders in the US, including high-level executives at Microsoft and Google, about the perils that artificial intelligence poses to humankind.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.
China already dedicates vast resources to suppressing any perceived political threats to the CCP’s dominance, with spending on the police and security personnel exceeding that devoted to the military.
While it relentlessly censors in-person protests and online criticism, citizens have continued to express dissatisfaction with policies, most recently the draconian lockdown measures enacted to combat the spread of COVID-19.
China has been cracking down on its tech sector in an effort to reassert party control, but like other countries, it is scrambling to find ways to regulate the developing technology.
The most recent party meeting reinforced the need to “assess the potential risks, take precautions, safeguard the people’s interests and national security, and ensure the safety, reliability and ability to control AI,” the Beijing Youth Daily reported on Tuesday.
Worries about artificial intelligence systems outsmarting humans and slipping out of control have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT.
Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement on Tuesday that was posted on the Center for AI Safety’s Web site.
More than 1,000 researchers and technologists, including Elon Musk, who is visiting China, had signed a much longer letter earlier this year calling for a six-month pause on AI development.
The missive said that AI poses “profound risks to society and humanity,” and some involved in the topic have proposed a UN treaty to regulate the technology.
China warned as far back as 2018 of the need to regulate AI, but has nonetheless funded a vast expansion in the field as part of efforts to seize the high ground on cutting-edge technologies.
A lack of privacy protections and strict party control over the legal system have also resulted in near-blanket usage of facial, voice and even walking-gait recognition technology to identify and detain those seen as threatening, such as political dissenters and religious minorities, especially Muslims.
Members of the Uighur and other mainly Muslim ethnic groups have been singled out for mass electronic monitoring, and more than 1 million people have been detained in prison-like political re-education camps that China calls deradicalization and job training centers.
AI’s risks are seen mainly in its potential to control autonomous weaponry, financial tools and the computers that run power grids, health centers, transportation networks and other key infrastructure.
China’s unbridled enthusiasm for new technology, its willingness to tinker with imported or stolen research, and its stifling of inquiries into major events such as the COVID-19 outbreak heighten concerns over its use of AI.
“China’s blithe attitude toward technological risk, the government’s reckless ambition, and Beijing’s crisis mismanagement are all on a collision course with the escalating dangers of AI,” technology and national security academics Bill Drexel and Hannah Kelley wrote in an article published this week in the journal Foreign Affairs.