Earlier this month, OpenAI released its most advanced models yet, saying they had the ability to “reason” and solve complex math and coding problems. The industry-leading startup, valued at about US$150 billion, also acknowledged that the models raised the risk that artificial intelligence (AI) could be misused to create biological weapons.
You would think such a consequential admission would raise alarm bells about the need for stricter oversight of AI. Yet despite almost two years of existential warnings from industry leaders, academics and other experts about the technology’s potential to wreak catastrophe, the US has not enacted any federal regulation.
A chorus of voices inside and outside the tech industry dismiss these doomsday warnings as distractions from AI’s more near-term harms, such as potential copyright infringement, the proliferation of deepfakes and misinformation, or job displacement, but lawmakers have done little to address these current risks, either.
One of the core arguments leveled against regulation is that it would impede innovation and could result in the US losing the AI race to China. However, China has been rapidly advancing in spite of heavy-handed oversight — and all-out US efforts to block it from accessing critical components and equipment.
The export controls have hampered China’s progress, but there is one area where it leads the US: setting standards for how the most sweeping technology of our time can be created and used.
China’s autocratic regime makes imposing strict rules much easier, as suffocating as they might seem for its tech industry.
The Chinese government obviously has different motives, including maintaining social stability and party power, but Beijing also sees AI as a priority, and it is working with the private sector to boost innovation while maintaining supervision.
Despite political differences, there are some lessons the US can learn. For starters, China is tackling the near-term concerns through a combination of new laws and court precedents. Cyber regulators rolled out laws on deepfakes in 2022, protecting victims whose likeness was used without consent and requiring labels on digitally altered content. Chinese courts have also set standards on how AI tools can be used, issuing rulings that protect artists from copyright infringement and voice actors from exploitation.
Broader interim rules on generative AI require developers to share details with the government about how their algorithms are trained, and to pass stringent safety tests. (Part of these assessments is to ensure that outputs align with socialist values.)
However, regulators have also shown balance and rolled back some of the most daunting requirements after feedback from the industry.
The revisions send a signal that they are willing to work with the tech sector while maintaining supervision.
This stands in stark contrast to efforts in the US. Lawsuits over current AI harms are slowly making their way through the courts, but the absence of federal action has been stark. A lack of guidelines also creates uncertainty for business leaders. US regulators could take a leaf out of China’s playbook and narrowly target laws focused on known risks while working more closely with the industry to set up guardrails for the far-off existential dangers.
In the absence of federal regulation, some states are taking matters into their own hands. California lawmakers last month approved an AI safety bill that would hold companies liable if their tools are used to cause “severe harm,” such as unleashing a biological weapon.
Many tech companies, including OpenAI, have fiercely opposed the bill, saying that such legislation should be left to the US Congress.
An open letter from AI entrepreneurs and researchers also said that the bill would be “catastrophic” for innovation and would let “places like China take the lead in development of this powerful tool.”
It would be wise for policymakers to remember that loud voices in the tech sector used this line of argument to fend off regulation long before the AI frenzy. That the US cannot even seem to agree on laws to prevent worst-case AI scenarios, let alone address the more immediate harms, is concerning.
Ultimately, using China as an excuse to avoid meaningful oversight is not a valid argument. Approaching AI safety as a zero-sum game between the US and China leaves no winners.
Mutual suspicion and mounting geopolitical tensions mean the two are unlikely to work together to mitigate the risks anytime soon, but it does not have to be this way.
Some of the most vocal proponents for regulation are the pioneers who helped create the technology. A few so-called AI godfathers, including Turing Award winners Yoshua Bengio, Geoffrey Hinton and Andrew Yao, sat down earlier this month in Italy and called for global cooperation across jurisdictions.
They acknowledged the competitive geopolitical climate, but also warned that loss of control or malicious use of AI could “lead to catastrophic outcomes for all of humanity.” They offered a framework for a global system of governance.
Many people say they are wrong, but the risks seem too high to entirely write them off. Policymakers from Washington to Beijing should learn from these scientists, who have at least shown it is possible to find some common ground.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.